[SQL] I am writing a MS SQL server conversion utility and am having an issue with timestamp

2005-01-14 Thread Joel Fradkin

Anyone have a good idea for dealing with a timestamp where
only the time is available on some of the source records?

Some records have both date and time.

My MSSQL database has 290 tables, so splitting the fields
would be a very large project.

Is there a way to add just the time part of a datetime to a
timestamp?
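In Postgres, adding a time value to a date yields a timestamp, which may be all that is needed here. A minimal sketch, with hypothetical column names:

```sql
-- date + time produces a timestamp in PostgreSQL:
SELECT DATE '2005-01-14' + TIME '13:45:00';

-- Hypothetical repair pass: "srcdate", "srctime", "event_ts" and
-- "target_table" are placeholder names, and the 1900-01-01 default is
-- only an illustrative sentinel, not a recommendation.
UPDATE target_table
   SET event_ts = coalesce(srcdate, DATE '1900-01-01') + srctime
 WHERE event_ts IS NULL;
```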

 

Joel Fradkin

Wazagua, LLC
2520 Trailmate Dr
Sarasota, Florida 34243
Tel. 941-753-7111 ext 305

[EMAIL PROTECTED]
www.wazagua.com
Powered by Wazagua
Providing you with the latest Web-based technology & advanced tools.
© 2004. WAZAGUA, LLC. All rights reserved. WAZAGUA, LLC
This email message is for the use of the intended recipient(s) and may
contain confidential and privileged information. Any unauthorized review,
use, disclosure or distribution is prohibited. If you are not the
intended recipient, please contact the sender by reply email and delete and
destroy all copies of the original message, including attachments.

[SQL] postgres stored procedures

2005-01-18 Thread Joel Fradkin
Any advice on converting my MSSQL database to Postgres?
Points of reference would be great!

All 290 tables moved over OK, with some data loss; I will look at that much
more closely next time I run my conversion app.

Now for the views and stored procedures.
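For anyone following along, the general shape of the conversion is that a T-SQL stored procedure becomes a PL/pgSQL function. A hedged sketch (the function name and its logic are hypothetical, not taken from the real schema):

```sql
-- Hypothetical target form; PostgreSQL 7.4 takes the body as a quoted string.
CREATE OR REPLACE FUNCTION get_case_count(p_clientnum text)
RETURNS integer AS '
DECLARE
    v_count integer;
BEGIN
    SELECT count(*) INTO v_count
      FROM tblcase
     WHERE clientnum = p_clientnum;
    RETURN v_count;
END;
' LANGUAGE plpgsql;
```

On 7.4 the plpgsql language may first have to be installed in the database with createlang plpgsql <dbname>.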

Joel Fradkin
---(end of broadcast)---
TIP 8: explain analyze is your friend


[SQL] Conversion ideas (Views/procedures)

2005-01-19 Thread Joel Fradkin
I am new to Postgres and am converting our MSSQL database to Postgres.

I just wrote my first function.
My question: does it make sense to create functions for some of the common
functionality available in the MSSQL world (left, isnull, etc.) so I do not
need to visit every view and stored procedure to remove them?
If it does, is there a body of work anywhere?
If not, what is the best approach: just remove all nonexistent functions
from my views and procedures and re-engineer? I understand some of the items
are available but a bit different, in either the name of the function or its
use. I did see a how-to that alerted me to datediff being date_diff, to
watch out for 'yy' needing to be 'year', etc. It also mentioned using
coalesce() instead of isnull. Any other common ones I should be aware of?
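A compatibility shim, where one makes sense, is only a few lines. A sketch for T-SQL's left(), untested against the real views:

```sql
-- Wrapper for T-SQL's left(string, n); substring() is the native equivalent.
-- (Much later PostgreSQL releases ship a built-in left(), which would
-- conflict with a shim like this.)
CREATE OR REPLACE FUNCTION left(text, integer) RETURNS text AS
  'SELECT substring($1 from 1 for $2)'
LANGUAGE sql IMMUTABLE;

-- isnull(a, b) usually needs no wrapper at all: it is coalesce(a, b).
```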


Joel Fradkin


[SQL] ERROR: row is too big: size 9856, maximum size 8136

2005-01-19 Thread Joel Fradkin
I get ERROR:  row is too big: size 9856, maximum size 8136
when inserting a view?

Help
Joel Fradkin

Re: [SQL] ERROR: row is too big: size 9856, maximum size 8136

2005-01-20 Thread Joel Fradkin
I am enclosing a text file; if this is not the correct manner, let me know
what's the best way. It's not a lot of lines.

ServerVersion: 07.03.0200 PostgreSQL 7.4.6 on i386-redhat-linux-gnu,
compiled by GCC i386-redhat-linux-gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2

ERROR:  row is too big: size 9856, maximum size 8136

Joel Fradkin

-Original Message-
From: Michael Fuhr [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, January 19, 2005 10:01 PM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] ERROR: row is too big: size 9856, maximum size 8136

On Wed, Jan 19, 2005 at 03:50:30PM -0500, Joel Fradkin wrote:

> I get ERROR:  row is too big: size 9856, maximum size 8136 when inserting
> a view?

Could you post the smallest possible self-contained example that
demonstrates this behavior?  What version of PostgreSQL are you
using?

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/
CREATE OR REPLACE VIEW thenetworkcallbackview AS
SELECT i.clientnum, i.incidentid, incidentnum, incidenttypeid, incidentdate,
       subjectid, subjecttypeid, locationid,
       suspectedactivity.value AS description,
       situation, policenotified, injuries, injurednum, injuriesdescription,
       propertydamage, damagelevelid, incidentreport, i.transfered,
       i.transftypeid, i.transfdate, i.transfbyid, i.transfnum,
       lastmodified, createdbyid, i.isdeleted, workflowstatus,
       cashloss, merchloss, totloss, markettypeid, typeofweaponid,
       alarmid, cameraid, escalated, transcount, orgcalldate_time,
       anoncaller, callernamefirst AS firstname, callernamelast AS lastname,
       callernamefirst || ' ' || callernamelast AS callernamelast,
       callertype, callertitle, callerphone, callerext,
       callerbesttimetocall, calleremail, clientname, location, dba,
       address1, address2, city, state, zip, country, phone, ext,
       abuselocation, casesource.value AS hoiwincdknown,
       supportdocsavailable, next24hours, nextwhen, nextwhere,
       howhotlineknown, interviewernotes, clientinstructions,
       CASE WHEN i.statusid IS NULL THEN 'none'
            WHEN i.statusid = 1 THEN 'open'
            WHEN i.statusid = 2 THEN 'closed'
       END AS status,
       i.assignedto, cb.callbackdate,
       substring(cb.callbacknotes, 1, 20) AS callbacknotes
FROM tblincident i
     INNER JOIN tblincidentcallback cb
        ON i.incidentnum = cb.incidentid AND i.clientnum = cb.clientnum
     LEFT OUTER JOIN tblcasesource casesource
        ON i.inccasesourceid = casesource.id
       AND i.clientnum = casesource.clientnum
     LEFT OUTER JOIN tblsuspectedactivity suspectedactivity
        ON i.incsuspectedactivityid = suspectedactivity.id
       AND i.clientnum = suspectedactivity.clientnum;



[SQL] pg_user relationship does not exist

2005-01-20 Thread Joel Fradkin
Not sure this is the correct place to ask, but when I log in
with pgAdmin III I get that message.

Any ideas how to fix it? Do I need to redo my database or
something?

 

Joel Fradkin

Re: [SQL] ERROR: row is too big: size 9856, maximum size 8136

2005-01-20 Thread Joel Fradkin
Sorry, that was the statement that caused the error.
I was creating a view that exists in MSSQL land.
It actually joins a few tables. I can put together CREATE statements for all
the tables used, then create the view and resend the text file with those.
I am reloading the Linux box from scratch at the moment, but as soon as I
get back up (probably tomorrow, as it takes overnight to load the data from
the MSSQL server) I will email all the pertinent information.

I am reloading to hopefully get rid of the pg_user error I was getting (I
did su postgres and created my database that way, after creating a
postgres user as root). My friend said not to create any users, just start
the database up (Fedora Core 3) and use pgAdmin to create the database.

I was following a conversion how-to I got off the archives, so I must have
messed something up.

Again, thank you for the information. If non-TOAST (sorry, not sure what
that means; I am guessing it is data that is not part of a text field) field
sizes add up to more than 8K, is there some way to produce the data set, or
is this a limit of Postgres in general? If I cannot have all the data needed
in a recordset I might have to rethink using Postgres. Is this a limit of
MySQL also? I hate to think I have to consider staying on MSSQL, as it is
not in our budget.

Joel Fradkin
 
-Original Message-
From: Michael Fuhr [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 20, 2005 11:33 AM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] ERROR: row is too big: size 9856, maximum size 8136

On Thu, Jan 20, 2005 at 08:56:12AM -0500, Joel Fradkin wrote:

> I am enclosing a text file if this is not the correct manner let me know
> whats best way its not a lot of lines.

The file you attached contains a view definition but it doesn't
show the underlying tables, nor the statement that resulted in the
error.  By "self-contained example" I mean enough statements that
somebody could copy them into an empty database and reproduce the
problem.

The error "row is too big: size 9856, maximum size 8136" gives a
clue at what's wrong but I'm not sure what circumstances cause it,
because the TOAST mechanism allows rows to be larger than a page.
This is just a guess, but maybe the error means that the non-TOASTABLE
data is exceeding the page size.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/




Re: [SQL] ERROR: row is too big: size 9856, maximum size 8136

2005-01-20 Thread Joel Fradkin
Could very well be an install issue; I was getting errors trying to see
template1. I am in the process of reinstalling Linux, and if I still have
the error I will let you know what I get from the select you asked me to
run.

I appreciate everyone's help.

If anyone has an interest in the .NET utility I wrote to pull the table
schemas and data, let me know. I used SQLDMO to get the script text and
then converted it to Postgres syntax. I automated the creation and move of
the data, including the text fields (it runs a little slow, as it does a
read and write at the table-row level, but this seemed the best way to get
the text fields to move over). The views and procedures, I am afraid, I
will have to convert by hand from the list of all views, as stuff like left
and datediff would be difficult to auto-convert. I did create left and
right functions, but could see a performance hit for each use of a function
and feel it will be better to just convert the SQL (the hit was only
milliseconds on the first call, I guess from the prepare step, but it still
might as well be as fast as possible).

Joel Fradkin

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 20, 2005 3:38 PM
To: Joel Fradkin
Cc: 'Michael Fuhr'; pgsql-sql@postgresql.org
Subject: Re: [SQL] ERROR: row is too big: size 9856, maximum size 8136 

"Joel Fradkin" <[EMAIL PROTECTED]> writes:
> Sorry, that was the statement that caused the error.

Hmm.  The error is associated with trying to store an oversized row.
And CREATE VIEW doesn't store any rows ... except into system catalogs.
So the only theory I can think of is that the pg_rewrite row for the
view is exceeding 8K.  Which can't happen, because no matter how
complicated the view definition rule is, the tuple toaster should have
sprung into action and pushed the rule text out-of-line.

Could we see the results of

select * from pg_class where relname = 'pg_rewrite';

select attname,atttypid::regtype,attstorage from pg_attribute where
  attrelid = 'pg_rewrite'::regclass and attnum > 0;

7.4 should certainly be configured to have a toast table for pg_rewrite,
but maybe something went wrong during initdb on your installation.

regards, tom lane




[SQL] still having pg_user error

2005-01-20 Thread Joel Fradkin
Well, I reinstalled Fedora 3, updated my Postgres, and
started the service (no command-line stuff).

When I connected using pgAdmin I still got the pg_user
error.

I am brand new to both Linux and Postgres and am a bit
frustrated, as I have spent the entire day getting nowhere.

Is there a step-by-step instruction for using Postgres on
Fedora 3 anywhere?

What I did seemed simple enough: using the graphical
interface I added the Postgres components and put in CD3, then I used the
up2date thing, then I started the service.

Obviously I am missing an important step. (I realize I have
to edit the pg_hba.conf file, but it was not there until after I started the
service. I stopped the service, added a host line, and edited
postgresql.conf to allow TCP/IP sockets.)

I hate to not get further tonight, as it takes all night to
move the data and I wanted to be moving views tomorrow, but I guess I will
just have to try again tomorrow.

 

Joel Fradkin

Re: [SQL] still having pg_user error

2005-01-21 Thread Joel Fradkin
Tom, thank you for the reply.

I believe I turned all that off during the install (no firewall).
But I can try again.
Is there an action I can take now, since it is installed and in theory the
RPMs are up2date? Sorry, I have a few books on Postgres (O'Reilly and Sams),
but it seems a bit deep on the install.

At the very least I guess I can do a backup of the database, wipe everything
in /var/lib/pgsql/data, and try the initdb thing (verify SELinux is off; not
sure how I do that, but I will look at the command you mentioned).

I am very happy with my progress thus far (I have most of my data coming
over), but I am frustrated with my lack of knowledge. I have administered
Linux before, but it has been several years and I don't remember much.

I am very impressed with the progress on admin tools and such; I just wish
I had a clear idea of the path to get a development Postgres box up on
Fedora, and I am more than a little worried about administering the
production box (we ordered a 4-processor Dell with Red Hat; I believe I will
also need to install it).

Again, thank you for that piece of advice. I can always reinstall Linux and
verify the SELinux option is off; maybe I messed up again.

Joel Fradkin

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 21, 2005 12:48 AM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] still having pg_user error 

"Joel Fradkin" <[EMAIL PROTECTED]> writes:
> Well I re-installed Fedora 3 and updated my postgres and started the =
> service
> (no command line stuff).

> When I connected using pgadmin I still got the pg_user error.

Fedora 3?  You should've mentioned that before.  I'll bet you've got
SELinux enabled in enforcement mode.  SELinux interferes with initdb
and I believe a missing pg_user view is one of the symptoms.

This is fixed in the very latest postgresql and selinux-policy-targeted
RPMs (you need both), but the path of least resistance is often to dial
SELinux down to permissive mode (sudo /usr/sbin/setenforce 0) for long
enough to do the initdb.

If you're running SELinux in strict rather than targeted policy, good
luck ... I honestly haven't tried that one at all ...

regards, tom lane




Re: [SQL] still having pg_user error

2005-01-21 Thread Joel Fradkin
Finally, I think that did the trick.
I used the GUI security editor to set SELinux to off and rebooted.
I ran the rpm commands you gave me.
I can use pgAdmin with no errors; now for another 12 hours of transferring
data :( so I can work on the views and stored procedures.

This list is a great resource, and I appreciate all the ideas, comments, and
help; it has made the difference for me.

Joel Fradkin

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 21, 2005 9:17 AM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] still having pg_user error 

"Joel Fradkin" <[EMAIL PROTECTED]> writes:
> Is there an action I can take now since it is installed and in theory the
> RPM's are up2date. Sorry I have a few books on postgres Oreily and Sams,
but
> it seems a bit deep on install.

Sure, you can use the RPMs you have.  You want something like (as root)

service postgresql stop
rm -rf /var/lib/pgsql/data
setenforce 0
service postgresql start

The start script will observe that there's no database directory and
proceed to run initdb for you.


regards, tom lane




[SQL] editors with colum positioning for debugging?

2005-01-22 Thread Joel Fradkin
Hi, I am doing my work in Windows attached to a Postgres server on Linux
(Fedora Core 3).
I have been using pgAdmin III and am trying the demo of EMS.

My question is: I don't see an easy way to find an error.
Typical error text is: syntax error at or near "trans" at character 825

But locating character 825 by hand is a little slow.

Any of you gurus have some ideas you use? (I am sure you don't get the
number of errors I am getting.)

By the way, my issues were caused by SELinux not being turned off. I turned
it off and reinstalled my Postgres RPM, and all is well (no pg_user error).
I was also able to add the view that had given me the row-too-big error.

Joel Fradkin


Re: [SQL] editors with colum positioning for debugging?

2005-01-22 Thread Joel Fradkin
Very cool. (I tried to get 8.0 on Fedora, but I think I was missing it
somehow.)
I also noticed that if I run in EMS (of course, I would need to buy it) it
positions me on the error if I am in the editor, not the builder.

I am a little afraid of using 8.0, as it is brand new and this will be on a
production machine. I could use 8.0 for development and 7.4 for production,
I guess.

Joel Fradkin

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Saturday, January 22, 2005 5:21 PM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] editors with colum positioning for debugging? 

"Joel Fradkin" <[EMAIL PROTECTED]> writes:
> My question is I don't see an easy way to find an error.
> Typical error text is syntax error at or near "trans" at character 825
> But determining character 825 is a little slow by hand.

PG 8.0 produces more useful output.  A trivial example:

regression=# select
regression-#   1/0 as a,
regression-#   1/ as b,
regression-#   1/2 as c;
ERROR:  syntax error at or near "as" at character 27
LINE 3:   1/ as b,
              ^
regression=#

regards, tom lane




[SQL] datediff is there something like it?

2005-01-25 Thread Joel Fradkin
Hi all. Working my way through our views, and all is going
very well.

We use datediff in MSSQL a bit, and I read about
field1::date - field2::date to return the days numerically.

Is there any way to get months and years, besides guessing
days / 30 for months, etc.?

Joel Fradkin

Re: [SQL] datediff is there something like it?

2005-01-25 Thread Joel Fradkin
Yes, I am using date_part, but I am not seeing how to do a datediff with it.
Maybe I just need to do the date math and apply date_part to the result.
I will play around a bit when I hit one that is not looking for days.

Joel Fradkin

-Original Message-
From: Achilleus Mantzios [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 25, 2005 11:34 AM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] datediff is there something like it?

Joel Fradkin wrote on Jan 25, 2005:

> Hi all working my way through our views and all is going very well.
> 
> We use datediff in MSSQL a bit and I read about the field1::date -
> field2::date to return the days numerically.
> 
> Is there any way to get months and years besides guessing days / 30 for
> months etc?

Go to 
http://www.postgresql.org/docs/7.4/interactive/functions-datetime.html
Check out something like...

SELECT 'Achilleus is ' || date_part('years', age(timestamp '1969-01-31'))
    || ' years and ' || date_part('months', age(timestamp '1969-01-31'))
    || ' months old';
                 ?column?
-----------------------------------------
 Achilleus is 35 years and 11 months old
(1 row)



> 
>  
> 
> Joel Fradkin

-- 
-Achilleus




[SQL] same question little different test MSSQL vrs Postgres

2005-01-25 Thread Joel Fradkin
I also tried a simple select * from tblcase where clientnum = 'SAKS'
on both MSSQL and Postgres.

MSSQL was 3 secs; Postgres was 27 secs.

There is a key for clientnum, but it appeared that on both
systems (identical Dell desktops; Postgres is running Linux, MSSQL is XP) it
did not do an indexed search.

 

Joel Fradkin

[SQL] Sorry I see my first question did not get posted (maybe because of the attatchments)

2005-01-25 Thread Joel Fradkin
Basically, the question was: why would a view use an indexed search on one
result set but a seq scan on a larger result set? It is the same view; the
only difference is how many rows are returned. The large result set was
doing a seq scan and did not return after several minutes. The same SQL ran
in 135 seconds on my MSSQL system.
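One way to see whether the planner's costing, rather than the index, is at fault is to compare the two plans directly; a diagnostic sketch using the tblcase/clientnum names from the other thread:

```sql
-- Plan the planner picks on its own:
EXPLAIN ANALYZE SELECT * FROM tblcase WHERE clientnum = 'SAKS';

SET enable_seqscan = off;   -- diagnostic only; do not leave this off
EXPLAIN ANALYZE SELECT * FROM tblcase WHERE clientnum = 'SAKS';
SET enable_seqscan = on;
```

Switching to a seq scan on a large result set is expected behavior: once the planner estimates that enough of the table will be visited, sequential I/O is cheaper than random index probes. Whether its estimate is right depends on up-to-date ANALYZE statistics and on cost settings such as random_page_cost and effective_cache_size.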

Joel Fradkin


Re: [SQL] same question little different test MSSQL vrs Postgres

2005-01-26 Thread Joel Fradkin
QUERY PLAN
"Seq Scan on tblcase  (cost=0.00..30066.21 rows=37401 width=996) (actual
time=0.344..962.260 rows=22636 loops=1)"
"  Filter: ((clientnum)::text = 'SAKS'::text)"
"Total runtime: 1034.434 ms"

Joel Fradkin

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, January 26, 2005 1:27 AM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] same question little different test MSSQL vrs Postgres


"Joel Fradkin" <[EMAIL PROTECTED]> writes:

> I also tried a simple select * from tblcase where clientnum = 'SAKS'

Try:

explain analyze select * from tblcase where clientnum = 'SAKS'

Send the output.

-- 
greg


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [SQL] same question little different test MSSQL vrs Postgres

2005-01-26 Thread Joel Fradkin
Well last evening (did not try it this morning) it was taking the extra
time.

I have made some adjustments to the config file per a few web sites that you
all recommended my looking at.

It is now using 137 of the 756 meg available.
It is now taking 8 secs to return 22,000 rows (using pgAdmin III in a SQL
edit window).

The EXPLAIN ANALYSE still shows the same as below, but the table has 344,000
recs of which only 22636 are clientnum = 'SAKS'

I am still getting a seq scan in my view (this applies to the view question:
with a small result set it used an index scan, but on a larger return set it
did a seq scan), but with the adjustments to the kernel I get a result in
140 secs (MSSQL was 135 secs).

This is not production. I am still very worried that I have to do all this
tweaking to use this; MSSQL worked out of the box as it does (not saying it's
great, but I never had to adjust a kernel setting etc.). Since we cannot
afford the 70,000 dollars they want to license it, I am not implying I can
use MSSQL, but I could look at other DBs like MySQL, Firebird, etc.

I have a lot of time now (two weeks) in this conversion and do not wish to
give up, I will see if I can learn what is needed to get the maximum
performance. I have seen much information available and this list has been a
huge resource. I really appreciate all the help.


Joel Fradkin
 
 
> QUERY PLAN
> "Seq Scan on tblcase  (cost=0.00..30066.21 rows=37401 width=996) (actual
> time=0.344..962.260 rows=22636 loops=1)"
> "  Filter: ((clientnum)::text = 'SAKS'::text)"
> "Total runtime: 1034.434 ms"

That's only 1 second - to return 22,636 rows. Not 27 seconds, as in the 
original post. You'll never persuade PG to use the index when some 75% 
of your rows match the filter - it just doesn't make sense.

--
   Richard Huxton
   Archonet Ltd


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [SQL] same question little different test MSSQL vrs Postgres

2005-01-26 Thread Joel Fradkin
The postgres is running on Linux Fedora core 3 (production will be redhat on
Dell 4 proc 8 gig box).

My client pgadminIII is running on XP.

Sorry I was not clearer on this.

I am playing with the settings now. I got it to return in 100 secs (the view,
that is, that took 135 on MSSQL). My testing is using identical Dell desktops
for the MSSQL and the Linux, with a third machine for the clients.

I do not mind getting up to speed on the proper settings to optimize the
hardware. I am worried that, since a production environment can be somewhat
dynamic, I will have issues keeping an optimized environment that works for
our needs. My whole reason for being here is that our dual-proc production
MSSQL server is just no longer keeping up with the demand, so it is important
that whatever I implement is going to be up to the challenge. I am still
convinced Postgres was the correct choice, especially with all the guidance I
have been able to get here. 100 seconds will be fine compared to the 135 of
MSSQL; I just was getting worse responses before adjusting. At the moment I
think I went too far, as I see it using swap and going slower, but it never
used much of the 756 meg (137 max was all I ever saw it use).

I guess the swap, buffers, and cache are the important settings (at least that
seems to be what is affecting the memory). I am not sure exactly what would
cause it to use seq vs. index scans, but I will try the force and see if it
helps the speed.

Joel Fradkin
 
 

 

-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, January 26, 2005 10:21 AM
To: Joel Fradkin
Cc: [EMAIL PROTECTED]; pgsql-sql@postgresql.org
Subject: Re: [SQL] same question little different test MSSQL vrs Postgres

Joel Fradkin wrote:
> Well last evening (did not try it this morning) it was taking the extra
> time.
> 
> I have made some adjustments to the config file per a few web sites that
you
> all recommended my looking at.

The crucial one I'd say is the performance guide at:
   http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
The first half-dozen settings are the crucial ones.

> It is now using 137 of 756 meg avail.
> it is now taking 8 secs to return 22,000 rows (using pgadminIII in a sql
> edit window).

That might be too much RAM. Don't forget PG likes to work with your 
operating-system (unlike many other DBs). Make sure Windows is using 
enough RAM to cache diskspace.
I'm curious as to how this takes 8secs whereas you had 1 second earlier. 
Are you sure some of this isn't pgadmin's overhead to display the rows?

> The EXPLAIN ANALYSE still shows the same as below, but the table has
344,000
> recs of which only 22636 are clientnum = 'SAKS'

That sounds like it's about the borderline between using an index and 
not (depending on cache-size, disk speeds etc).

> I am still doing a seq search (this applies to the view question where if
it
> is a small result set it used a index search but on a larger return set it
> did a seq search) in my view, but with the adjustments to the kernel I get
a
> result in 140 secs (MSSQL was 135 secs).

If you want to check whether the index would help, try issuing the 
following before running your query:
   SET ENABLE_SEQSCAN=FALSE;
This will force PG to use any index it can regardless of whether it 
thinks it will help.

> This is not production, I am still very worried that I have to do all this
> tweeking to use this, MSSQL worked out of the box as it does (not saying
its
> great, but I never had to adjust a kernel setting etc). Since we cannot
> afford the 70,000 dollars they want to license it I am not implying I can
> use MSSQL, but I could look at other DB's like MYSQL, or Firebird, etc.

I'm a little curious what kernel settings you are changing on Windows. I 
wasn't aware there was much to be done there.

I'm afraid you do have to change half a dozen settings in 
postgresql.conf to match your workload, but PG runs on a much wider 
range of machines than MSSQL so it's difficult to come up with a 
"reasonable" default. Takes me about 5 minutes when I setup an 
installation to make sure the figures are reasonable (rather than the 
best they can be).

> I have a lot of time now (two weeks) in this conversion and 

Re: [SQL] same question little different test MSSQL vrs Postgres

2005-01-26 Thread Joel Fradkin
I tried the SET ENABLE_SEQSCAN=FALSE;
And the result took 29 secs instead of 117.

After playing around with the cache and buffers etc., I see I am no longer
doing any swapping (not sure how I got the 100 sec response; it might have
been shared buffers set higher, I have been goofing around with it all
morning).

My worry here is that it should obviously use an index scan, so something is
not set up correctly yet. I don't want to second-guess the planner (or is
this a normal thing?)

At least it is blowing the doors off MSSQL (which is what I touted to my boss,
and I was pretty upset when I got no result last night).

The 117 was before I forced the seq scan off, so even doing a seq scan I am
getting results now that are better than MSSQL.

Joel Fradkin
 
 

 

-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, January 26, 2005 10:21 AM
To: Joel Fradkin
Cc: [EMAIL PROTECTED]; pgsql-sql@postgresql.org
Subject: Re: [SQL] same question little different test MSSQL vrs Postgres

Joel Fradkin wrote:
> Well last evening (did not try it this morning) it was taking the extra
> time.
> 
> I have made some adjustments to the config file per a few web sites that
you
> all recommended my looking at.

The crucial one I'd say is the performance guide at:
   http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
The first half-dozen settings are the crucial ones.

> It is now using 137 of 756 meg avail.
> it is now taking 8 secs to return 22,000 rows (using pgadminIII in a sql
> edit window).

That might be too much RAM. Don't forget PG likes to work with your 
operating-system (unlike many other DBs). Make sure Windows is using 
enough RAM to cache diskspace.
I'm curious as to how this takes 8secs whereas you had 1 second earlier. 
Are you sure some of this isn't pgadmin's overhead to display the rows?

> The EXPLAIN ANALYSE still shows the same as below, but the table has
344,000
> recs of which only 22636 are clientnum = 'SAKS'

That sounds like it's about the borderline between using an index and 
not (depending on cache-size, disk speeds etc).

> I am still doing a seq search (this applies to the view question where if
it
> is a small result set it used a index search but on a larger return set it
> did a seq search) in my view, but with the adjustments to the kernel I get
a
> result in 140 secs (MSSQL was 135 secs).

If you want to check whether the index would help, try issuing the 
following before running your query:
   SET ENABLE_SEQSCAN=FALSE;
This will force PG to use any index it can regardless of whether it 
thinks it will help.

> This is not production, I am still very worried that I have to do all this
> tweeking to use this, MSSQL worked out of the box as it does (not saying
its
> great, but I never had to adjust a kernel setting etc). Since we cannot
> afford the 70,000 dollars they want to license it I am not implying I can
> use MSSQL, but I could look at other DB's like MYSQL, or Firebird, etc.

I'm a little curious what kernel settings you are changing on Windows. I 
wasn't aware there was much to be done there.

I'm afraid you do have to change half a dozen settings in 
postgresql.conf to match your workload, but PG runs on a much wider 
range of machines than MSSQL so it's difficult to come up with a 
"reasonable" default. Takes me about 5 minutes when I setup an 
installation to make sure the figures are reasonable (rather than the 
best they can be).

> I have a lot of time now (two weeks) in this conversion and do not wish to
> give up, I will see if I can learn what is needed to get the maximum
> performance. I have seen much information available and this list has been
a
> huge resource. I really appreciate all the help.

--
   Richard Huxton
   Archonet Ltd


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [SQL] same question little different test MSSQL vrs Postgres

2005-01-26 Thread Joel Fradkin
Thank you, I will look at that info.
I did do an EXPLAIN ANALYSE on the view and could see it was doing the seq
scan on 3 fields, so I made an index on the three fields; it then chose an
index scan and ran in 27 seconds.

I also adjusted my defaults to much smaller numbers on shared buffers (per
the tidbits page recommendation, like 8 meg for my memory size). I looked at
http://www.desknow.com/kb/idx/0/061/article/ which recommended doing a
vacuum verbose to determine the exact max_fsm_pages, and I set the cache to
use 25% of my available memory per the recommendation on tidbits.
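
The index-plus-analyze step described above looks roughly like this in SQL.
The thread never names the three view columns beyond clientnum, so the second
and third column names here are placeholders:

```sql
-- Hypothetical sketch of the three-column index described above;
-- col2/col3 stand in for the unnamed view columns.
CREATE INDEX ix_tblcase_client_cols
    ON tblcase (clientnum, col2, col3);

-- Refresh planner statistics so the new index is costed correctly.
ANALYZE tblcase;

-- Re-check the plan the view's query now gets.
EXPLAIN ANALYZE SELECT * FROM tblcase WHERE clientnum = 'SAKS';
```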

Joel Fradkin
 
 

 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, January 26, 2005 11:50 AM
To: Joel Fradkin
Cc: 'Richard Huxton'; [EMAIL PROTECTED]; pgsql-sql@postgresql.org;
[EMAIL PROTECTED]; Steve Goldsmith
Subject: Re: [SQL] same question little different test MSSQL vrs Postgres

"Joel Fradkin" <[EMAIL PROTECTED]> writes:

> I tried the SET ENABLE_SEQSCAN=FALSE;
> And the result took 29 secs instead of 117.
> 
> After playing around with the cache and buffers etc I see I am no longer
> doing any swapping (not sure how I got the 100 sec response might have
been
> shared buffers set higher, been goofing around with it all morning).

If it's swapping you're definitely going to get bad results. You really want
the *majority* of RAM left free for the OS to cache disk data.

> My worry here is it should obviously use an index scan so something is not
> setup correctly yet. I don't want to second guess the analyzer (or is this
a
> normal thing?)

No that's not obvious. 22k out of 344k is a selectivity of 6.6% which is
probably about borderline. The optimizer is estimating even worse at 10.9%
which isn't far off but puts it well out of the range for an index scan.

If you really want to get postgres using an index scan you'll have to a)
improve the estimate using "alter table tblcase alter column clientnum set
statistics" to raise the statistics target for that column and reanalyze. 

And b) lower random_page_cost. random_page_cost tells postgres how much
slower
indexes are than table scans and at the default setting it accurately
represents most disk hardware. If your database fits within RAM and is often
cached then you might have to lower it to model that fact. But you shouldn't
do it based on a single query like this.
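
The two suggestions above, written out as SQL. The statistics target of 100
and the random_page_cost of 2 are illustrative values only, not numbers from
the thread:

```sql
-- (a) Raise the per-column statistics target so the planner's
--     selectivity estimate for clientnum improves, then re-analyze.
ALTER TABLE tblcase ALTER COLUMN clientnum SET STATISTICS 100;
ANALYZE tblcase;

-- (b) Model a mostly-cached database by lowering random_page_cost
--     (default 4); try it per-session before editing postgresql.conf.
SET random_page_cost = 2;
```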


-- 
greg


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] same question little different test MSSQL vrs Postgres

2005-01-27 Thread Joel Fradkin

Now you tell me.
We had a fellow working here who kept screaming AMD, but I am a very paranoid
person and was not aware Linux and Postgres have been running on the new
chips. I don't like to be a first. We have bought the Dell, and I can't tell
you if the controller uses 64 bits; I just got what they had on their page
for their 4-proc rack mount. Part of my reason for going Dell was we already
have Dell equipment, and the Linux support is offered from Dell as well, so I
have one vendor to worry about. Being a developer and Director of IT I want
the fastest and best, but sometimes I flavor my opinions with safest and
easiest. The RPM delivery is something I understand (it's easy). What is SuSE
like? Is there any difference in the performance between the two vendors?
I am sure we will be buying more Postgres servers in the near future (one of
the big reasons we are taking the time to convert from MSSQL was so we could
afford to invest in more servers; MSSQL was cost prohibitive even for one
server). As easy as Fedora was, I still had several issues getting to where I
am now, so I am paranoid of something that requires even more knowledge to
pull it off; that being said, I never minded getting into the details to get
a better end result. As you said, we have made the investment in the Dell
(25K). I feel pretty stupid if it is, as you say, a waste of money to get 8
gigs on this platform, as I just made that same mistake a year ago when I
bought the 2-processor boxes with Standard Edition MSSQL and 4 gigs (it
only uses 2 gig). I was under the impression this machine would utilize all
8 gigs. Are you saying only 4 will be available for caching etc., or just
that the chipset can't deal with the full 8 gig and will be slower to access
it? If it is the latter, then I would imagine it would still outperform a
similar box with 4 gig, assuming my demand on cache is larger than 4 gig.

Just to confirm: you have these quad Opterons (I am assuming a 4-processor
config?) in a production environment running SuSE and Postgres, with hardware
support from HP and software support from SuSE? You indicate three separate
physical drive sets will give best performance (one for data at 10k speeds,
one for admin, one for WAL at 15k speed)? I am not too sophisticated at
knowing how to order this arrangement and set it up in Linux; any chance you
could detail it (1 card with 2 channels, 4 10k drives on one channel, 2 15k
drives on the second; do I need another channel and drive(s) for admin
files?), plus the drive layout and how to configure postgres to utilize it
when installing?

If need be, maybe we can get you to do this as a consultant, as I do
understand how important the hardware and the proper config are. I found out
too late with MSSQL that I should have used two separate drive arrays, one
for data, one for log (this would have required the split backplane).

So, not to plug a specific vendor, but if you have a production environment
example with real equipment suggestions I would be very appreciative.

I know that's a lot to ask, so if you don't have time that's cool. Thanks so
much for bringing this up; on my next purchase I will seriously look at
quad Opteron technology if it is a tried and true solution for this OS and
Postgres.

Joel Fradkin
 


-Original Message-
From: Andrew Hammond [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, January 26, 2005 5:16 PM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] same question little different test MSSQL vrs Postgres

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

We've done some pretty extensive benchmarking and load testing on a
couple of platforms including the Xeon and Opteron. You may have already
bought that Dell box, but I'll say it anyway. Xeon quad processors are a
terrible platform for postgres. Trying to use more than 4GB of memory on
a 32 bit machine is a waste of money.

If you want performance, get a quad Opteron with the same amount of
memory. I guarantee you'll see at least an order of magnitude
performance improvement and substantially more under highly concurrent
loads. If you decide to go this way, HP sells a very nice box. I also
strongly recommend you investigate SuSE instead of RedHat. Fedora core
is good technology, but SuSE offers equally good technology with better
support.

Also make sure that your SCSI HBA is actually using the 64 bit PCI bus.
There are cards out there which plug into 64 bit PCI but only actually
address 32 bits (Qlogic's QLA2340 / 2342 for example).

You make no mention of the disk subsystem you plan to use. This is most
critical part of your system. Database performance is almost always
bound by IO. Usually disk IO. Briefly, put PGDATA on the widest RAID 10
array of disks you can manage. It's not worth spending the extra money
to get 15kRPM disks for this. The size of the disks involved is pretty
much irrelevant, only the number of them matters. Put the WAL files on a
dedicated RAID 1 pair of 15kRPM disks. Put the postgres log files (or
sysl

[SQL] Hardware for best performance was same question little different test MSSQL vrs Postgres

2005-01-27 Thread Joel Fradkin

[SQL] hardware mod based on feedback from the list

2005-01-27 Thread Joel Fradkin
Thanks so much to everyone who jumped in both on a config and now the
hardware side of things.

I did modify my order on the new equipment to include a powervault 220 (just
a bunch of drives)

I added a controller card and 4 10k drives for data (RAID 10) and 2 15k
drives (RAID 1) for WAL. I will use the array (internal to the 6650) that the
OS is on for the syslog.

I was told the Xeon processors will do fine up to 64 gig. I realize the 64-bit
chips may be faster, but they are also new, and I feel safer with existing
technologies and hardware vendors.

My understanding is that CPU (4 Xeon 3 gig processors) will not be an issue;
hopefully adding the additional drive systems will help the most, since IO is
what I am told is the big issue, and hopefully following the recommendation
will minimize the bottleneck.
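
For reference, the half-dozen postgresql.conf settings the list keeps
pointing at look roughly like this on a 7.4/8.0-era install. Every number
below is only an illustration for a box this size, not a recommendation from
the thread:

```
shared_buffers = 20000          # ~160 MB at 8 kB/page; keep well below total RAM
effective_cache_size = 524288   # ~4 GB: how much disk data the OS is expected to cache
sort_mem = 8192                 # kB per sort (renamed work_mem in 8.0)
max_fsm_pages = 200000          # size this from VACUUM VERBOSE output
max_fsm_relations = 1000
random_page_cost = 3            # lower if the working set is mostly cached
```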

Joel Fradkin
 
 



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[SQL] MSSQL versus Postgres timing

2005-02-01 Thread Joel Fradkin
All is moving along well.

I have all my views and data and am testing things out a
bit.

A table with 645,000 records for associates has a view (basically
select * from tblassociates where clientnum = 'test').

This is taking 13 seconds in Postgres and 3 seconds in MSSQL.

 

I tried making an index on clientnum (there is one on clientnum,
associateID in MSSQL).

I did an analyze.

I did a set seq scan off.

 

Is there a possibility Postgres is this much slower, or (my
hope) am I missing some key concept?

Be glad to provide the view and tables etc.
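
The three steps above, spelled out in SQL. The composite index mirrors the
MSSQL clientnum, associateID index mentioned; the table name follows the
EXPLAIN output later in the thread, and the exact column spelling is an
assumption:

```sql
-- Mirror the MSSQL composite index, then refresh planner statistics.
CREATE INDEX ix_tblassociate_client
    ON tblassociate (clientnum, associateid);
ANALYZE tblassociate;

-- Temporarily forbid seq scans to see whether the index would help;
-- for testing only, never for production sessions.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM tblassociate WHERE clientnum = 'test';
```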

 

Joel Fradkin

Re: [SQL] MSSQL versus Postgres timing

2005-02-01 Thread Joel Fradkin
QUERY PLAN
"Merge Join  (cost=47489.81..47975.65 rows=3758 width=111) (actual
time=27167.305..29701.080 rows=85694 loops=1)"
"  Merge Cond: (""outer"".locationid = ""inner"".locationid)"
"  ->  Sort  (cost=1168.37..1169.15 rows=312 width=48) (actual
time=261.096..262.410 rows=402 loops=1)"
"Sort Key: l.locationid"
"->  Index Scan using ix_tbllocation on tbllocation l
(cost=0.00..1155.44 rows=312 width=48) (actual time=213.107..259.160
rows=402 loops=1)"
"  Index Cond: ('SAKS'::text = (clientnum)::text)"
"  ->  Sort  (cost=46321.45..46535.47 rows=85611 width=74) (actual
time=26906.148..27689.258 rows=85695 loops=1)"
"Sort Key: a.locationid"
"->  Merge Right Join  (cost=38119.24..39307.55 rows=85611 width=74)
(actual time=22236.915..25384.945 rows=99139 loops=1)"
"  Merge Cond: (((""outer"".clientnum)::text =
""inner"".""?column10?"") AND (""outer"".id = ""inner"".jobtitleid))"
"  ->  Index Scan using ix_tbljobtitle_id on tbljobtitle jt
(cost=0.00..338.90 rows=6337 width=37) (actual time=164.976..2290.760
rows=5662 loops=1)"
"Filter: (1 = presentationid)"
"  ->  Sort  (cost=38119.24..38333.26 rows=85611 width=52)
(actual time=20667.645..21031.627 rows=99139 loops=1)"
"    Sort Key: (a.clientnum)::text, a.jobtitleid"
"->  Seq Scan on tblassociate a  (cost=0.00..31105.34
rows=85611 width=52) (actual time=14.768..16024.395 rows=99139 loops=1)"
"  Filter: ((clientnum)::text = 'SAKS'::text)"
"Total runtime: 30319.859 ms"

Joel Fradkin
 

-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 01, 2005 11:59 AM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] MSSQL versus Postgres timing

On Tue, 2005-02-01 at 10:54, Joel Fradkin wrote:
> All is moving along well.
> 
> I have all my views and data and am testing things out a bit.
> 
> A table with 645,000 records for associates has view (basically select
> * from tblassociates where clientnum = 'test')

What does 

explain analyze select * from tblassociates where clientnum = 'test' 

say?


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [SQL] MSSQL versus Postgres timing

2005-02-01 Thread Joel Fradkin
With seq scan on.
-Original Message-
From: Michael Fuhr [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 01, 2005 12:07 PM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] MSSQL versus Postgres timing

On Tue, Feb 01, 2005 at 11:54:11AM -0500, Joel Fradkin wrote:
> 
> A table with 645,000 records for associates has view (basically select *
> from tblassociates where clientnum = 'test')
> 
> This is taking 13 seconds in postgres and 3 seconds in MSSQL.

Please post the EXPLAIN ANALYZE output for the slow query, once
with enable_seqscan on and once with it off.  For example:

SET enable_seqscan TO on;  -- if not already on
EXPLAIN ANALYZE SELECT * FROM tblassociates WHERE clientnum = 'test';

SET enable_seqscan TO off;
EXPLAIN ANALYZE SELECT * FROM tblassociates WHERE clientnum = 'test';

> Be glad to provide the view and tables etc.

Please do -- it might help us spot something that could be improved.
What version of PostgreSQL are you using?

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/
QUERY PLAN
"Merge Join  (cost=47489.81..47975.65 rows=3758 width=111) (actual 
time=27167.305..29701.080 rows=85694 loops=1)"
"  Merge Cond: (""outer"".locationid = ""inner"".locationid)"
"  ->  Sort  (cost=1168.37..1169.15 rows=312 width=48) (actual 
time=261.096..262.410 rows=402 loops=1)"
"Sort Key: l.locationid"
"->  Index Scan using ix_tbllocation on tbllocation l  
(cost=0.00..1155.44 rows=312 width=48) (actual time=213.107..259.160 rows=402 
loops=1)"
"  Index Cond: ('SAKS'::text = (clientnum)::text)"
"  ->  Sort  (cost=46321.45..46535.47 rows=85611 width=74) (actual 
time=26906.148..27689.258 rows=85695 loops=1)"
"Sort Key: a.locationid"
"->  Merge Right Join  (cost=38119.24..39307.55 rows=85611 width=74) 
(actual time=22236.915..25384.945 rows=99139 loops=1)"
"  Merge Cond: (((""outer"".clientnum)::text = 
""inner"".""?column10?"") AND (""outer"".id = ""inner"".jobtitleid))"
"  ->  Index Scan using ix_tbljobtitle_id on tbljobtitle jt  
(cost=0.00..338.90 rows=6337 width=37) (actual time=164.976..2290.760 rows=5662 
loops=1)"
"Filter: (1 = presentationid)"
"  ->  Sort  (cost=38119.24..38333.26 rows=85611 width=52) (actual 
time=20667.645..21031.627 rows=99139 loops=1)"
"Sort Key: (a.clientnum)::text, a.jobtitleid"
"->  Seq Scan on tblassociate a  (cost=0.00..31105.34 
rows=85611 width=52) (actual time=14.768..16024.395 rows=99139 loops=1)"
"  Filter: ((clientnum)::text = 'SAKS'::text)"
"Total runtime: 30319.859 ms"

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [SQL] MSSQL versus Postgres timing

2005-02-01 Thread Joel Fradkin


 

-Original Message-
From: Michael Fuhr [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 01, 2005 12:07 PM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] MSSQL versus Postgres timing

On Tue, Feb 01, 2005 at 11:54:11AM -0500, Joel Fradkin wrote:
> 
> A table with 645,000 records for associates has view (basically select *
> from tblassociates where clientnum = 'test')
> 
> This is taking 13 seconds in postgres and 3 seconds in MSSQL.

Please post the EXPLAIN ANALYZE output for the slow query, once
with enable_seqscan on and once with it off.  For example:

SET enable_seqscan TO on;  -- if not already on
EXPLAIN ANALYZE SELECT * FROM tblassociates WHERE clientnum = 'test';

SET enable_seqscan TO off;
EXPLAIN ANALYZE SELECT * FROM tblassociates WHERE clientnum = 'test';

> Be glad to provide the view and tables etc.

Please do -- it might help us spot something that could be improved.
What version of PostgreSQL are you using?

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/
QUERY PLAN
Merge Join  (cost=109454.32..109940.16 rows=3758 width=111) (actual time=11646.990..13474.449 rows=85694 loops=1)
  Merge Cond: ("outer".locationid = "inner".locationid)
  ->  Sort  (cost=1168.37..1169.15 rows=312 width=48) (actual time=35.359..36.651 rows=402 loops=1)
        Sort Key: l.locationid
        ->  Index Scan using ix_tbllocation on tbllocation l  (cost=0.00..1155.44 rows=312 width=48) (actual time=29.811..33.415 rows=402 loops=1)
              Index Cond: ('SAKS'::text = (clientnum)::text)
  ->  Sort  (cost=108285.96..108499.98 rows=85611 width=74) (actual time=11611.560..11995.898 rows=85695 loops=1)
        Sort Key: a.locationid
        ->  Merge Right Join  (cost=100083.75..101272.06 rows=85611 width=74) (actual time=7758.824..10598.571 rows=99139 loops=1)
              Merge Cond: ((("outer".clientnum)::text = "inner"."?column10?") AND ("outer".id = "inner".jobtitleid))
              ->  Index Scan using ix_tbljobtitle_id on tbljobtitle jt  (cost=0.00..338.90 rows=6337 width=37) (actual time=0.089..30.193 rows=5662 loops=1)
                    Filter: (1 = presentationid)
              ->  Sort  (cost=100083.75..100297.77 rows=85611 width=52) (actual time=7709.988..8543.451 rows=99139 loops=1)
                    Sort Key: (a.clientnum)::text, a.jobtitleid
                    ->  Index Scan using ix_associate_clientnum on tblassociate a  (cost=0.00..93069.85 rows=85611 width=52) (actual time=0.281..2046.482 rows=99139 loops=1)
                          Index Cond: ((clientnum)::text = 'SAKS'::text)
Total runtime: 13899.614 ms

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] MSSQL versus Postgres timing

2005-02-01 Thread Joel Fradkin
View and table creates

CREATE TABLE tblassociate
(
  clientnum varchar(16) NOT NULL,
  associateid int4 NOT NULL,
  associatenum varchar(10),
  firstname varchar(50),
  middleinit varchar(5),
  lastname varchar(50),
  ssn varchar(18),
  dob timestamp,
  address varchar(100),
  city varchar(50),
  state varchar(50),
  country varchar(50),
  zip varchar(10),
  homephone varchar(14),
  cellphone varchar(14),
  pager varchar(14),
  associateaccount varchar(50),
  doh timestamp,
  dot timestamp,
  rehiredate timestamp,
  lastdayworked timestamp,
  staffexecid int4,
  jobtitleid int4,
  locationid int4,
  deptid int4,
  positionnum int4,
  worktypeid int4,
  sexid int4,
  maritalstatusid int4,
  ethnicityid int4,
  weight float8,
  heightfeet int4,
  heightinches int4,
  haircolorid int4,
  eyecolorid int4,
  isonalarmlist bool NOT NULL DEFAULT false,
  isactive bool NOT NULL DEFAULT true,
  ismanager bool NOT NULL DEFAULT false,
  issecurity bool NOT NULL DEFAULT false,
  createdbyid int4,
  isdeleted bool NOT NULL DEFAULT false,
  militarybranchid int4,
  militarystatusid int4,
  patrontypeid int4,
  identificationtypeid int4,
  workaddress varchar(200),
  testtypeid int4,
  testscore int4,
  pin int4,
  county varchar(50),
  CONSTRAINT pk_tblassociate PRIMARY KEY (clientnum, associateid),
  CONSTRAINT ix_tblassociate UNIQUE (clientnum, associatenum)
);

CREATE TABLE tbllocation
(
  clientnum varchar(16) NOT NULL,
  locationid int4 NOT NULL,
  districtid int4 NOT NULL,
  regionid int4 NOT NULL,
  divisionid int4 NOT NULL,
  locationnum varchar(8),
  name varchar(50),
  clientlocnum varchar(50),
  address varchar(100),
  address2 varchar(100),
  city varchar(50),
  state varchar(2) NOT NULL DEFAULT 'zz'::character varying,
  zip varchar(10),
  countryid int4,
  phone varchar(15),
  fax varchar(15),
  payname varchar(40),
  contact char(36),
  active bool NOT NULL DEFAULT true,
  coiprogram text,
  coilimit text,
  coiuser varchar(255),
  coidatetime varchar(32),
  ec_note_field varchar(1050),
  locationtypeid int4,
  open_time timestamp,
  close_time timestamp,
  insurance_loc_id varchar(50),
  lpregionid int4,
  sic int4,
  CONSTRAINT pk_tbllocation PRIMARY KEY (clientnum, locationid),
  CONSTRAINT ix_tbllocation_1 UNIQUE (clientnum, locationnum, name),
  CONSTRAINT ix_tbllocation_unique_number UNIQUE (clientnum, divisionid,
regionid, districtid, locationnum)
);

CREATE TABLE tbljobtitle
(
  clientnum varchar(16) NOT NULL,
  id int4 NOT NULL,
  value varchar(50),
  code varchar(16),
  isdeleted bool DEFAULT false,
  presentationid int4 NOT NULL DEFAULT 1,
  CONSTRAINT pk_tbljobtitle PRIMARY KEY (clientnum, id, presentationid)
);

CREATE OR REPLACE VIEW viwassoclist AS 
 SELECT a.clientnum, a.associateid, a.associatenum, a.lastname, a.firstname,
jt.value AS jobtitle, l.name AS "location", l.locationid AS mainlocationid,
l.divisionid, l.regionid, l.districtid, (a.lastname::text || ', '::text) ||
a.firstname::text AS assocname, a.isactive, a.isdeleted
   FROM tblassociate a
   LEFT JOIN tbljobtitle jt ON a.jobtitleid = jt.id AND jt.clientnum::text =
a.clientnum::text AND 1 = jt.presentationid
   JOIN tbllocation l ON a.locationid = l.locationid AND l.clientnum::text =
a.clientnum::text;

 

-Original Message-
From: Michael Fuhr [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 01, 2005 12:07 PM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] MSSQL versus Postgres timing

On Tue, Feb 01, 2005 at 11:54:11AM -0500, Joel Fradkin wrote:
> 
> A table with 645,000 records for associates has view (basically select *
> from tblassociates where clientnum = 'test')
> 
> This is taking 13 seconds in postgres and 3 seconds in MSSQL.

Please post the EXPLAIN ANALYZE output for the slow query, once
with enable_seqscan on and once with it off.  For example:

SET enable_seqscan TO on;  -- if not already on
EXPLAIN ANALYZE SELECT * FROM tblassociates WHERE clientnum = 'test';

SET enable_seqscan TO off;
EXPLAIN ANALYZE SELECT * FROM tblassociates WHERE clientnum = 'test';

> Be glad to provide the view and tables etc.

Please do -- it might help us spot something that could be improved.
What version of PostgreSQL are you using?

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [SQL] MSSQL versus Postgres timing

2005-02-01 Thread Joel Fradkin
I have added indexes on clientnum (and on clientnum plus a unique identifier,
like jobtitleid for the jobtitle table) to see if it would help -- sorry about
the schema not matching. I gave you the definition as outlined in the pgAdmin
table window (I can add the indexes if it will help).

It is still running slower even when I force an indexed scan.

I will look at the other ideas you mentioned, as we have added indexes to
another view with the same result (slower than MSSQL).

I did not put in the ::text; pgAdmin did that. The original text I ran
to create the view was:

CREATE OR REPLACE VIEW  viwassoclist as 
select a.clientnum, a.associateid, a.associatenum, a.lastname,
a.firstname, jt.value as jobtitle, l.name as location, l.locationid as
mainlocationid, 
  l.divisionid, l.regionid, l.districtid, a.lastname ||
', ' || a.firstname as assocname, a.isactive, a.isdeleted
from tblassociate a left outer join
  tbljobtitle jt on a.jobtitleid = jt.id and
jt.clientnum = a.clientnum   and 1= jt.presentationid inner join
  tbllocation l on a.locationid = l.locationid and
l.clientnum = a.clientnum

;
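For reference, the indexes described above (clientnum alone, and clientnum plus a unique identifier) would look along these lines; the index names here are illustrative, not necessarily the ones actually used:

```sql
-- Hypothetical index definitions matching the view's filter and join columns:
CREATE INDEX ix_tblassociate_clientnum ON tblassociate (clientnum);
CREATE INDEX ix_tbljobtitle_client_id  ON tbljobtitle  (clientnum, id);
CREATE INDEX ix_tbllocation_client_loc ON tbllocation  (clientnum, locationid);
```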




Joel Fradkin
 
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305
 
[EMAIL PROTECTED]
www.wazagua.com
Powered by Wazagua
Providing you with the latest Web-based technology & advanced tools.
© 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc
 This email message is for the use of the intended recipient(s) and may
contain confidential and privileged information.  Any unauthorized review,
use, disclosure or distribution is prohibited.  If you are not the intended
recipient, please contact the sender by reply email and delete and destroy
all copies of the original message, including attachments.
 

 

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 01, 2005 4:53 PM
To: Joel Fradkin
Cc: 'Michael Fuhr'; pgsql-sql@postgresql.org
Subject: Re: [SQL] MSSQL versus Postgres timing 

"Joel Fradkin" <[EMAIL PROTECTED]> writes:
> "  ->  Sort  (cost=38119.24..38333.26 rows=85611 width=52)
(actual time=20667.645..21031.627 rows=99139 loops=1)"
> "Sort Key: (a.clientnum)::text, a.jobtitleid"
> "->  Seq Scan on tblassociate a  (cost=0.00..31105.34
rows=85611 width=52) (actual time=14.768..16024.395 rows=99139 loops=1)"
> "  Filter: ((clientnum)::text = 'SAKS'::text)"

The sort steps seem slower than they ought to be.  I suspect you ought
to raise sort_mem ... try 10MB instead of 1MB.  Also, if you are running
in a locale other than C and don't have a good reason for doing so, it
would be worth trying C locale instead.

The results with enable_seqscan off also suggest that random_page_cost
may be too high for your environment.

BTW, the schema you posted does not match these plans --- there are
indexes referenced in the plans that do not appear in the schema.

regards, tom lane
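Tom's two suggestions can be tried per-session before editing postgresql.conf. A sketch (the values are starting points, not tuned numbers; sort_mem is the 7.4-era name, renamed work_mem in 8.0):

```sql
SET sort_mem = 10240;          -- setting is in KB: 10 MB, up from the 1024 default
SET random_page_cost = 2.0;    -- default is 4.0; lower favors index scans on cached data
EXPLAIN ANALYZE SELECT * FROM viwassoclist WHERE clientnum = 'SAKS';
```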


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] pg_restore problem

2005-02-02 Thread Joel Fradkin








I used pgAdmin to save, and mine would not restore, saying something about the
encoding.

I will have to be able to save and restore reliably as well.

Also, I never heard anything further on the query running slow (I put up table
defs and EXPLAIN ANALYZE output with and without seqscan on).

I am running into this on several of my views (I guess I am not too bright,
because I still don't get why it chooses a seq scan on indexed tables).

I can force it to use an index and did see a little improvement, but MSSQL
was 3 secs and Postgres was like 9.

Seeing as how I got the one view to return faster (it was a very complex view)
on Postgres, my guess is I still have stuff to do. I did try changing the cost
to a lower number in the config and redid my analyze, but it was still trying
to do a seq scan.

 



Joel Fradkin
-Original Message-
From:
[EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Bradley Miller
Sent: Wednesday, February 02, 2005
3:17 PM
To: Postgres List
Subject: [SQL] pg_restore problem

 

I'm attempting to restore a dump from one server to
another (one is a Mac and one is a Linux base, if that makes any difference). I
keep running into issues like this:

pg_restore: [archiver (db)] could not execute query: ERROR: function public.random_page_link_id_gen()
does not exist

This is what I'm using to restore the files with:

pg_restore -O -x -s -N -d nuvio mac_postgres_2_2_2005_13_24 

Any suggestions on how to get around this problem? It's a huge pain so far just
to sync my two servers up.

Bradley Miller
NUVIO CORPORATION
Phone: 816-444-4422 ext. 6757
Fax: 913-498-1810
http://www.nuvio.com
[EMAIL PROTECTED]








[SQL] problem with backup and restore (probably stupid newb thing)

2005-02-03 Thread Joel Fradkin
pg_restore: creating VIEW viwcreateaudit
pg_restore: creating VIEW viwcustinccube
pg_restore: creating VIEW viwcustinclist
pg_restore: creating VIEW viwdotseal
pg_restore: creating VIEW viwdotsealcube
pg_restore: creating VIEW viweicube
pg_restore: creating VIEW viweilist
pg_restore: creating VIEW viwempinccube
pg_restore: creating VIEW viwempinccubeuserdefinquest
pg_restore: creating VIEW viwempinclist
pg_restore: creating VIEW viwevidencelist
pg_restore: creating VIEW viwexportentitylist
pg_restore: creating VIEW viwexportlist
pg_restore: creating VIEW viwfacmgmt
pg_restore: creating VIEW viwfacmgmtservicecompany
pg_restore: creating VIEW viwgicube
pg_restore: creating VIEW viwgilist
pg_restore: creating VIEW viwgroupauditcube
pg_restore: creating VIEW viwgroupauditlist
pg_restore: creating VIEW viwillnesscount
pg_restore: creating VIEW viwincidentcube
pg_restore: creating VIEW viwincidentlist
pg_restore: creating VIEW viwlocationasis
pg_restore: creating VIEW viwlocationwdivregdis
pg_restore: creating VIEW viwlocxref
pg_restore: WARNING:  column "deletecrossreference" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
pg_restore: creating VIEW viwmerchcube
pg_restore: creating VIEW viwmovementlog
pg_restore: creating VIEW viwpriors
pg_restore: creating VIEW viwproblemcodes
pg_restore: creating VIEW viwrtwcap
pg_restore: creating VIEW viwsafmeet
pg_restore: creating VIEW viwsdotsealcube
pg_restore: creating VIEW viwsflp_casevalue_vs_paid
pg_restore: creating VIEW viwshrink
pg_restore: creating VIEW viwsnapcount
pg_restore: creating VIEW viwsnapmerch
pg_restore: creating VIEW viwsnapshot
pg_restore: creating VIEW viwtendercube
pg_restore: creating VIEW viwtnwincident
pg_restore: creating VIEW viwusma_count
pg_restore: restoring data for table "tbl_i2an_default_values"
pg_restore: restoring data for table "tblaction"
pg_restore: ERROR:  invalid byte sequence for encoding "UNICODE": 0xe9
CONTEXT:  COPY tblaction, line 1799, column value: "Chargé"
pg_restore: [archiver (db)] error returned by PQendcopy
pg_restore: *** aborted because of error
 
Process returned exit code 1.
 
Joel Fradkin
 
 


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [SQL] problem with backup and restore (probably stupid newb thing)

2005-02-03 Thread Joel Fradkin
I am not running version 8 (I did try this on the Linux box as well, but in
the version info I documented, the server is Linux and the client is XP).

I ran it from pgAdmin in both cases; maybe I need to just run from the command
line on the Linux box (this is fine as long as I can back up the file and
restore it if need be). I did the dump and restore from the pgAdmin III
program, so maybe the database is not UTF-8. I am new to this, so I do not
know how to determine and set the types so they match up.
I did a create database and am running a .NET app to read the data from
MSSQL and add it to Postgres.

If anyone knows a quick way to determine this, and what the syntax mods are
for the dump and restore, I would be very happy; if not, I can play around.
Thanks so much for all the help. Maybe I should load version 8 (I was not sure
there were RPMs for Fedora 3 that worked; I tried to load it and was
obviously not doing it correctly).

Joel Fradkin

 

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 03, 2005 1:17 PM
To: Richard Huxton
Cc: Joel Fradkin; pgsql-sql@postgresql.org
Subject: Re: [SQL] problem with backup and restore (probably stupid newb
thing)

Richard Huxton  writes:
> Joel Fradkin wrote:
>> ServerVersion: 07.03.0200 PostgreSQL 7.4.6 on i386-redhat-linux-gnu,
>> compiled by GCC i386-redhat-linux-gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2

> OK - let's start here. The syntax here is fine, I just tested it on an 
> 8.0 database. Are you sure the DB you are restoring to is version 8?

Looks to me like he's trying to use 8.0 pg_dump/pg_restore to reload
into a 7.4 server.  This definitely won't work without specifying
--disable-dollar-quoting to pg_dump; and if you care about restoring
object ownership correctly, also --use-set-session-authorization.
I don't recall if there are any other gotchas.

The unicode issue I'm not sure about.  Perhaps the original database was
SQL_ASCII encoding and so was allowed to contain byte sequences that
aren't legal unicode?

regards, tom lane
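If the dump really does hold Latin-1 bytes, one hedged workaround is to convert it before restoring. This only applies to a plain-text dump (pg_dump without -Fc; a custom-format archive would first need pg_restore to emit text). Sketch with iconv, using a tiny stand-in file and hypothetical names:

```shell
# Stand-in for a plain-text dump containing a Latin-1 byte (0xe9 = octal \351):
printf 'Charg\351\n' > dump_latin1.sql

# Convert to UTF-8 so a UNICODE-encoded database will accept it:
iconv -f LATIN1 -t UTF-8 dump_latin1.sql > dump_utf8.sql
```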


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[SQL] Assoclist is still slow and I am not seeing it

2005-02-05 Thread Joel Fradkin








Hi,

I asked about this before and am still fighting for success; any ideas? I
tried adding an index on jobtitle.

EXPLAIN ANALYZE returns:

 

QUERY PLAN
Merge Join  (cost=53053.05..53652.66 rows=4888 width=113) (actual time=9788.066..11042.584 rows=85694 loops=1)
  Merge Cond: ("outer".locationid = "inner".locationid)
  ->  Sort  (cost=604.65..605.45 rows=319 width=49) (actual time=5.367..6.729 rows=402 loops=1)
        Sort Key: l.locationid
        ->  Index Scan using ix_location on tbllocation l  (cost=0.00..591.38 rows=319 width=49) (actual time=0.193..3.548 rows=402 loops=1)
              Index Cond: ('SAKS'::text = (clientnum)::text)
  ->  Sort  (cost=52448.40..52710.82 rows=104970 width=75) (actual time=9782.634..10100.572 rows=85695 loops=1)
        Sort Key: a.locationid
        ->  Merge Right Join  (cost=39859.63..41130.10 rows=104970 width=75) (actual time=6811.114..8284.253 rows=99139 loops=1)
              Merge Cond: ((("outer".clientnum)::text = "inner"."?column10?") AND ("outer".id = "inner".jobtitleid))
              ->  Index Scan using ix_tbljobtitle_id on tbljobtitle jt  (cost=0.00..226.47 rows=6343 width=37) (actual time=0.089..33.082 rows=5662 loops=1)
                    Filter: (1 = presentationid)
              ->  Sort  (cost=39859.63..40122.06 rows=104970 width=53) (actual time=6763.992..7160.587 rows=99139 loops=1)
                    Sort Key: (a.clientnum)::text, a.jobtitleid
                    ->  Seq Scan on tblassociate a  (cost=0.00..31105.34 rows=104970 width=53) (actual time=0.478..1831.000 rows=99139 loops=1)
                          Filter: ((clientnum)::text = 'SAKS'::text)
Total runtime: 11319.403 ms

 

Joel Fradkin

[SQL] postgres8 install on fedora core 3 (or redhat4 beta AS)

2005-02-05 Thread Joel Fradkin








Any help?

I got the 8 RPMs for Fedora, and it keeps asking for the 7.4 RPMs.

I download the specific 7.4 stuff and it says it is already installed?

 

Joel Fradkin



[SQL] specifying different location for data and wal on fedora core 3 /backup question again

2005-02-05 Thread Joel Fradkin








Being new to Linux I am not 100% sure on a few items.

 

I could find:

% setenv PGDATA2 /home/postgres/data
% initlocation $PGDATA2
Creating Postgres database system directory /home/postgres/data
Creating Postgres database system directory /home/postgres/data/base

for defining data in another spot, but was not sure how to put the
environment variable into the Postgres superuser's .profile or .cshrc.
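For the .profile question: a minimal sketch, using the same path as the transcript above (Bourne-style syntax for ~/.profile; the setenv form already shown is the ~/.cshrc equivalent):

```shell
# Lines to put in the postgres superuser's ~/.profile (sh/bash):
PGDATA2=/home/postgres/data
export PGDATA2

# csh-style equivalent for ~/.cshrc would be:
#   setenv PGDATA2 /home/postgres/data

echo "PGDATA2 is set to $PGDATA2"
```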

 

I have two file systems (on separate RAIDs, per a suggestion from a list
member); they are /pgdata and /wal respectively.

Can anyone point me to a walkthrough of how to do this (assuming it is
documented somewhere)?

 

I still have had no luck with the backup. I did notice that when the database
is created using createdb (logged in as postgres), it had:

CREATE DATABASE wazagua
  WITH OWNER = postgres
       ENCODING = 'SQL_ASCII';

The backup seemed to blow up on:

pg_restore: ERROR: invalid byte sequence for encoding "UNICODE": 0xe9

Do I have to specify ENCODING = 'SQL_ASCII' somehow? If so, can anyone give
me the backup and restore syntax? It gives no errors backing up, just
restoring.

 

 

Joel Fradkin

[SQL] Red Hat 3 AS: when up to date, is it running the 2.6 kernel?

2005-02-07 Thread Joel Fradkin
Anyone running Red Hat 3 AS?
Is it using the new kernel when it is up to date?

Joel Fradkin



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[SQL]

2005-02-07 Thread Joel Fradkin
In reading the manual, it seems the new database is based on template0.

I created my first database (wazagua) from the command line using createdb
wazagua.
It results in 
CREATE DATABASE wazagua
  WITH OWNER = postgres
   ENCODING = 'SQL_ASCII';

When I created a test database to load my backup into, I did it inside
pgAdmin III, and I see:
CREATE DATABASE test
  WITH OWNER = postgres
   ENCODING = 'UNICODE';

So I think I see why when I try to restore I have issues.
My question is which is the correct choice?
We do have some French accented characters, so is 'SQL_ASCII' or
'UNICODE' the correct choice?
I was able to move all my data over to the 'SQL_ASCII' database, so I may be
again answering my own question.
Values like d`Appréhension show up ok in the table.

Am I guessing correctly to just make the database I am restoring to
'SQL_ASCII'?

I will try this.
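A quick way to check what encoding an existing database uses, and to create the restore target to match, would be something like the following (database names as above; a sketch, not a definitive recipe):

```sql
-- Show each database's encoding:
SELECT datname, pg_encoding_to_char(encoding) FROM pg_database;

-- Create the restore target with a matching encoding:
CREATE DATABASE test WITH OWNER = postgres ENCODING = 'SQL_ASCII';
```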

Joel Fradkin



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[SQL] Did a good restore

2005-02-08 Thread Joel Fradkin








I just wanted to thank those who helped me. I finally hit it on the head: it
was the SQL_ASCII versus UNICODE encoding.

I am not sure why I got my original database in SQL_ASCII, but when I did my
create with it and restored to it, it was fine.

I am going to try my create with UNICODE (the default, it seems, unless from
the command line) and run my app to copy my data over, to see if I can then
backup and restore OK with UNICODE. I have the feeling I will be better off
using it, as we do anticipate foreign-language users.

 

 

Joel Fradkin

[SQL] 8 rpms for red hat

2005-02-08 Thread Joel Fradkin
I was able to download the ES3 version (we are running AS3).
I could get the server to install and run without installing 7.
I had a few of the RPMs complain (the lib one, for example).
I was not sure if this was a needed RPM, as I could start the server (did not
try to load data, but will do that this evening).
Is there a better way than just grabbing all the RPMs over and clicking on
them?
up2date does a great job on the RPMs from Red Hat (resolving dependencies
etc.), but I am not doing as well manually clicking on them.

Again, thanks to all those who helped me resolve this. I think it was just
because we installed 7 with the OS the first time; 8 ran fine, but some RPMs
failed to load.

Joel Fradkin



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[SQL] postgres 8 data directory other than default?

2005-02-10 Thread Joel Fradkin








I am trying to use a different drive for the data.

I have edited postgresql.conf to point to the new location, but I get the
error: /pgdata/PG_VERSION is missing.

Do I have to do an initdb?

When I try an initdb it says the directory is not empty (it has a lost+found
folder in it).

This is Fedora Core 3 Linux.

I was able to do an initdb if I create a folder called data in /pgdata, but
I was not sure if it would then use that folder for all the files (I just
want my data to be there).

I also want to move my WAL files to the /wal drive once I get it to start
using the /pgdata drive for data.

I got some info on this:

0) back up your data
1) stop the postmaster and verify.
2) cd $PGDATA; mv pg_xlog original_pg_xlog; ln -s /wal pg_xlog; cp original_pg_xlog/* pg_xlog
3) start the postmaster and confirm correct startup in the log files.
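The steps quoted above can be rehearsed in a scratch directory before touching the real cluster. Everything below is a simulation with throwaway paths standing in for $PGDATA and /wal; on a real cluster, stop the postmaster first:

```shell
# Simulated cluster layout (stand-ins for the real /pgdata/data and /wal):
PGDATA=$(mktemp -d)
WAL=$(mktemp -d)
mkdir "$PGDATA/pg_xlog"
echo segment1 > "$PGDATA/pg_xlog/000000010000000000000001"

cd "$PGDATA"
mv pg_xlog original_pg_xlog   # step 2 from the list above
ln -s "$WAL" pg_xlog          # WAL directory now lives on the other filesystem
cp original_pg_xlog/* pg_xlog/

ls -l pg_xlog                 # pg_xlog is a symlink into $WAL
```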

 

Will this work? I am guessing this info would be in the /var/lib folder, not
the /pgdata folder?

Thanks in advance, and sorry for asking so many dumb newb questions.

At least this time the install of 8 went super smooth: I just went to the
postgres site and got the RPMs. (Since I did not install 7 with the OS, it
asked for the Fedora CDs a few times, but all RPMs went OK.)

On the 8.1 files, which are in tar files: do I need to get those as well to
get to 8.1? If so, what is the procedure, other than copying the files to my
system (I have the .tar.bz)?

 

 

Joel Fradkin




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] postgres 8 data directory other than default?

2005-02-11 Thread Joel Fradkin
Thank you for all the help.

I was able to get the system up (8.0 RPMs, not a build of 8.0.1).

I installed to the normal /var/lib/pgsql/data directory.
I did an initdb on the pgdata drive: initdb /pgdata/data
This created a data folder on that drive set.
I used the instructions below for linking my WAL to the /wal drive.

I edited the postgresql.conf in /var/lib/pgsql/data to use
data_directory='/pgdata/data'

I am hoping this means the log files and such are on /var/lib/pgsql/data and
that the actual data is on /pgdata/data and the WAL is on /wal.

I could confirm the /pgdata and /wal file systems were growing in size as I
added data to the database.

Everyone on this list has been so super helpful; I could never have figured
it out without all the guidance. I just wanted to thank everyone who jumped
in with help. Stuff is so hard when you're ignorant. I know my way around
the MSSQL world, and with the help here I feel I am getting started with
Postgres. Silly stuff like environment variables takes so long to figure out
when you're new (the right commands and where to put them), which is why I
really appreciated the step-by-step help.

I was thinking of documenting the whole process from install to finished
working project. Do you all think this would be a good idea? Should I just
post it here?

Joel Fradkin
 
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305
 
[EMAIL PROTECTED]
www.wazagua.com
 

 

-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 11, 2005 4:22 AM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] postgres 8 data directory other then default?

Joel Fradkin wrote:
> I am trying to use a different drive for the data.
> 
> I have edited postgres.conf to point to the new location, but I get the
> error /pgdata/PG_VERSION is missing.
> 
> Do I have to do a initdb?
> 
> When I try an initdb it says it is not empty (it has a lost+found folder
> in it).

Create a sub-directory. Separate filesystems will have a lost+found 
folder at the top-level, so you'll need to create a directory called 
pgsql (or whatever).

> Fedora Core 3 linux.
> 
> I was able to do an init db if I create a folder called data in the
> pgdata,
> but I was not sure if it then would use that folder for all the files (I
> just want my data to be there).

Ah - there you go then. All files live there unless you use 
tablespaces/symbolic links.

> I also want to move my wal files to the /wal drives once I get it to start
> using pgdata drive for data.
> 
> I got some info on this
> 
> 0) backup your data
> 1) stop postmaster and verify.
> 2) cd $PGDATA; mv pg_xlog original_pg_xlog; ln -s /wal pg_xlog; cp 
> original_pg_xlog/* pg_xlog
> 3) start postmaster and confirm correct startup in log files.
> 
> Will this work? I am guessing this info would be in the var/lib folder not
> the /pgdata folder?

Well, what you might want to do is:
1. Setup your .../pgsql/data folder on the WAL drive.
2. initdb that location
3. Set up a tablespace for your data on the other drive(s)

You can then have a default location for your database, and even move a 
single table over to its own drive(s) if that's what you need. Details 
in the manuals (start at chapter 18, then check the details in the SQL 
reference).
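The tablespace step Richard suggests might look like this in 8.0 syntax (the directory path and the bigdata/wazagua/tbllog names are illustrative, not anything from the thread):

```sql
-- Create a tablespace on the separate data drive (8.0+; the directory
-- must already exist, be empty, and be owned by the postgres user):
CREATE TABLESPACE bigdata LOCATION '/pgdata/pgsql';

-- Put a whole database there by default...
CREATE DATABASE wazagua TABLESPACE bigdata;

-- ...or move just one table to its own drive(s):
CREATE TABLE tbllog (id int, msg text) TABLESPACE bigdata;
```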

I'd test it with a sample database first, make sure you're practiced 
with it. Oh - if you do filesystem backups when the database is stopped, 
make sure you remember to back up the alternate tablespace.

> Thanks in advance and sorry for asking so many dumb newb questions.
> 
> Least this time the install of 8 went super smooth, just went to the
> postgres site and got the rpm's (Since I did not install 7 with the OS it
> asked for the fedora cd's a few times but all rpm's went ok).

Excellent.

> On the 8.1 files which are in tar files do I need to get those as well to
> get to 8.1 if so what is the procedure other than copying the files to my
> system (I have the .tar.bz)

It'll be 8.0.1 rather than 8.1, and they'll be source files. You unpack 
them, run configure and make, and out pops a new version of PG. I'd 
recommend sticking with the RPMs for the moment.

--
   Richard Huxton
   Archonet Ltd


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[SQL] trouble inserting stuff like é

2005-02-18 Thread Joel Fradkin
I wrote a program to read my database (from MSSQL) and write it to Postgres.
It appeared to work great the first time I ran it, but the database was
SQL_ASCII (it defaulted to this when I created it from the command line on
the linux box using createdb).

When I tried to restore it I used PGAdmin on my XP machine, which created a
Unicode database.
I was unable to restore the SQL_ASCII to the Unicode, so I re-ran my import
on a Unicode database.
Now I get all kinds of errors from the .net program (which writes out the
actual SQL on each error).
When I cut and paste the SQL into PGAdmin it runs ok, so it only gives an
error using .net ODBC.

Any one have any clues for me to follow?

Here is an example of a SQL line that did not run in .net.

insert into
tblSuspectedActivity(ClientNum,ID,Value,IsDeleted,PresentationID)
values('FREN',4,'Paiement à account',False,2)

Joel Fradkin
 



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[SQL] RE: [SQL] trouble inserting stuff like é

2005-02-18 Thread Joel Fradkin
I don't think it has anything to do with the database, as I can add fine
from pgadmin; it's an ODBC question.

How do I tell the connection to use Unicode?

It worked ok using SQL_ASCII also with the driver, but I thought if we get
a Chinese client down the road I couldn't store those chars unless it's
Unicode?
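One route sometimes suggested for the psqlODBC driver is to push the SET through the connection string's ConnSettings option; treat the exact option name and syntax here as an assumption to verify against your driver version's documentation, and the server/database/user values as placeholders:

```
Driver={PostgreSQL};Server=myserver;Port=5432;Database=mydb;Uid=myuser;Pwd=mypass;ConnSettings=SET client_encoding TO 'UNICODE'
```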

Joel Fradkin
 
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305
 
[EMAIL PROTECTED]
www.wazagua.com
 

 

-Original Message-
From: Aarni Ruuhimäki [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 18, 2005 10:25 AM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] trouble inserting stuff like é

Hi,

I use LATIN1 encoding and it works fine with accented characters. So try 
creating your db with the -E LATIN1 switch. Or even initdb -E LATIN1 if you
want your dbs to default to that.

Best regards,

Aarni


On Friday 18 February 2005 16:59, you wrote:
> I wrote a program to read my database (from MSSQL) and write it to
> Postgres. It appeared to work great the first time I ran it, but the
> database was SQL_ASCII (it defaulted to this when I created it from the
> command line on the linux box using createdb).
>
> When I tried to restore it I used PGAdmin on my XP machine which created a
> Unicode database.
> I was unable to restore the SQL_ASCII to the Unicode, so I re-ran my import
> on a Unicode database.
> Now I get all kinds of errors from the .net program (which writes out
> the actual SQL on each error).
> When I cut and paste the SQL into PGAdmin it runs ok, so it only gives an
> error using .net ODBC.
>
> Any one have any clues for me to follow?
>
> Here is an example of a SQL line that did not run in .net.
>
> insert into
> tblSuspectedActivity(ClientNum,ID,Value,IsDeleted,PresentationID)
> values('FREN',4,'Paiement à account',False,2)
>
> Joel Fradkin
>  
>
>
>
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
>   subscribe-nomail command to [EMAIL PROTECTED] so that your
>   message can get through to the mailing list cleanly


--
This is a bugfree broadcast to you
from **Kmail**
on **Fedora Core 2** linux system
--


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [SQL] Comments on subquery performance

2005-02-18 Thread Joel Fradkin
Yea, we examined it here as a group, as we are facing the same kind of stuff,
and found exactly the same thing.
It does what MSSQL calls a Cartesian join, and ran no faster other than
removing the outer join logic.

Using a regular join statement and only inner joins was the same speed, and I
think a little less confusing when you have several joins as our system does.

We may have to look at re-engineering our system, as the original design has
dozens of tables with a field set like ID, Value to hold stuff like
apprehension type, job title etc. So we hold a main record with dozens of
IDs (and joins when reporting).

Joel Fradkin
 



---(end of broadcast)---
TIP 8: explain analyze is your friend


[SQL] RE: [SQL] trouble inserting stuff like é

2005-02-18 Thread Joel Fradkin
Looking deeper into the matter (which I admit I am a bit ignorant on), I
think you hit the nail on the head.
Coming from MSSQL, which uses a Latin encoding, I may have to use LATIN1.
It works ok as SQL_ASCII, but lower does not work, so hopefully using
LATIN1 I can store the chars and also get lower to work etc.

Thanks so much for the help.

I have been very confused, as I could create a Unicode db and even save the
French values using pgadmin or a .net connection, but the odbc would not work.
I hope it works ok with LATIN1.


-Original Message-
From: Aarni Ruuhimäki [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 18, 2005 10:25 AM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] trouble inserting stuff like é

Hi,

I use LATIN1 encoding and it works fine with accented characters. So try 
creating your db with the -E LATIN1 switch. Or even initdb -E LATIN1 if you
want your dbs to default to that.

Best regards,

Aarni


On Friday 18 February 2005 16:59, you wrote:
> I wrote a program to read my database (from MSSQL) and write it to
> Postgres. It appeared to work great the first time I ran it, but the
> database was SQL_ASCII (it defaulted to this when I created it from the
> command line on the linux box using createdb).
>
> When I tried to restore it I used PGAdmin on my XP machine which created a
> Unicode database.
> I was unable to restore the SQL_ASCII to the Unicode, so I re-ran my import
> on a Unicode database.
> Now I get all kinds of errors from the .net program (which writes out
> the actual SQL on each error).
> When I cut and paste the SQL into PGAdmin it runs ok, so it only gives an
> error using .net ODBC.
>
> Any one have any clues for me to follow?
>
> Here is an example of a SQL line that did not run in .net.
>
> insert into
> tblSuspectedActivity(ClientNum,ID,Value,IsDeleted,PresentationID)
> values('FREN',4,'Paiement à account',False,2)
>
> Joel Fradkin
>  
>
>
>
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
>   subscribe-nomail command to [EMAIL PROTECTED] so that your
>   message can get through to the mailing list cleanly


--
This is a bugfree broadcast to you
from **Kmail**
on **Fedora Core 2** linux system
--


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[SQL] RE: [SQL] RE: [SQL] trouble inserting stuff like é

2005-02-18 Thread Joel Fradkin
I did that and it did not work.
On the .net driver I had to do it on the connect string.

I also just tested LATIN1 and see that lower and upper do not work on the
French chars, while upper and lower do work on the French chars on a Unicode
database. So the problem is getting the odbc driver to work with Unicode.
Worst case I will just use SQL_ASCII; I am guessing the ODBC driver does
not have an encoding command like the .net one does.
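A quick psql sketch of this kind of case-folding test (the strings are examples; whether the accented characters actually fold depends on the database encoding and the lc_ctype locale, so no particular output is promised here):

```sql
-- Check what the server thinks it is using:
SHOW server_encoding;
SHOW lc_ctype;

-- In a LATIN1 or UNICODE database with a matching locale these should
-- fold the accented characters; in SQL_ASCII they are left unchanged.
SELECT lower('Paiement À'), upper('Paiement é');
```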

Joel Fradkin
 
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305
 
[EMAIL PROTECTED]
www.wazagua.com
 

 

-Original Message-
From: John DeSoi [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 18, 2005 3:42 PM
To: Joel Fradkin
Cc: [EMAIL PROTECTED]; pgsql-sql@postgresql.org
Subject: Re: [SQL] RE: [SQL] trouble inserting stuff like é


On Feb 18, 2005, at 11:15 AM, Joel Fradkin wrote:

> How do I tell the connection to use Unicode?

Try

SET client_encoding TO 'UNICODE';

http://www.postgresql.org/docs/8.0/interactive/sql-set.html

But it should default to the database encoding, so I'm not sure if that 
is the problem.


John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [SQL] problem inserting local characters ...

2005-02-22 Thread Joel Fradkin
I had a similar problem myself and found that if I used SQL_ASCII for the
database I could insert my data.

I understand, however, that it basically means the database does not know
anything about encoding, and therefore stuff like upper does not work on the
extended chars.

I ended up changing to SQL_ASCII so both my inserts and backup and restore
work, realizing I am giving up some functionality.

Joel Fradkin
 
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305
 
[EMAIL PROTECTED]
www.wazagua.com
 

 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Markus Schaber
Sent: Tuesday, February 22, 2005 8:41 AM
To: Garry
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] problem inserting local characters ...

Hi, Garry,

Garry schrieb:

> Trying to do this insert, I get an error on both the values
> "TelefonGeschäft" and "Firmenstraße":
> 
> FEHLER:  ungültige Byte-Sequenz für Kodierung »UNICODE«: 0xe165

This sounds as your database was created in unicode (utf-8).

> (the 0xe165 differs between the two; the fields in question are regular
> type "text" fields)
> 
> Looks like I'm having some trouble with unicode encoding ... The
> characters I have are regular 8bit ASCII chars ... How can I fix this?

No, 8bit ASCII does not exist. ASCII always is 7 bit. As your error
message is in German, I suspect your data is encoded in LATIN1 or LATIN9
(their only difference is the EUR symbol in the latter one).

Can you try to add the following command before your insert statements:

set client_encoding to latin1;

HTH,
Markus

-- 
markus schaber | dipl. informatiker
logi-track ag | rennweg 14-16 | ch 8001 zürich
phone +41-43-888 62 52 | fax +41-43-888 62 53
mailto:[EMAIL PROTECTED] | www.logi-track.com



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[SQL] Speeds using a transaction vs not

2005-02-22 Thread Joel Fradkin
I wrote a .net program to move my data to postgres (works great on
SQL_ASCII).

In fiddling around I tried it using the odbc driver and a transaction
originally, but converted it to using the .net connectivity but no
transaction.

What I found was it moved my database (4 gig in MSSQL) in 2 hours using the
.net, but 12 hours with the odbc and transaction.

Have any of you played around to see if using a transaction should be that
much slower, or is it the odbc versus .net?

I paid a consultant to look at what would work best, and his speed tests
indicated the odbc and the .net were pretty close, so I am assuming it is
because I am using a transaction on my odbc test.
I can run again without it, or with it on the .net driver, but thought I
would ask.
We only use transactions on important multiple-table updates in our system
now, so it should not be a huge thing, but I was curious.

Joel Fradkin
 



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] Speeds using a transaction vs not

2005-02-24 Thread Joel Fradkin
No, I did not do it in one transaction (although in .net I never started or
committed a transaction).

ODBC:
 myTrans = myConnection.BeginTransaction(IsolationLevel.ReadCommitted)
' Assign transaction object for a pending local transaction
 myCommand.Transaction = myTrans
' example of insert
' myCommand.CommandText = "Insert into Region (RegionID, RegionDescription)
' VALUES (101, 'Description')"
myCommand.CommandText = insertsqltext
myCommand.ExecuteNonQuery()
myTrans.Commit()

.net driver:
Dim cmd As New NpgsqlCommand(insertsqltext, cnn)
cmd.ExecuteNonQuery()
cmd.Dispose()

Joel Fradkin
 
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305
 
[EMAIL PROTECTED]
www.wazagua.com
 

 

-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 23, 2005 3:03 AM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] Speeds using a transaction vs not

Joel Fradkin wrote:
> I wrote a .net program to move my data to postgres (works great on
> SQL_ASCII).
> 
> In fiddling around I tried it using the odbc driver and a transaction
> originally, but converted it to using the .net connectivity but no
> transaction.
> 
> What I found was it moved my database (4 gig in MSSQL) in 2 hours using
> the .net, but 12 hours with the odbc and transaction.

You *are* using transactions, you don't have a choice. Did you do the 
transfer of all 4GB in ONE transaction with the ODBC? Please describe 
the process in more detail.

--
   Richard Huxton
   Archonet Ltd


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [SQL] Speeds using a transaction vs not

2005-02-24 Thread Joel Fradkin
Thanks.

I guess I could add that logic, but this is a one-time process that will run
the night we go live on postgres.
With .net it took like 2 hours to do the whole shebang, so I am happy.

But I was curious as we will be using odbc for our asp (aprx 90% of our
app).

I could test wrapping the .net in a transaction the way I do in odbc, but
you indicate it is doing that behind the scenes.

So far progress is going really great on our conversion.
I can't wait to see us up on the new platform.

I have not researched the details on doing autovacuum and backup.
I have tested manually backing up and restoring.

We are only like 10% done converting the code (I had hoped it would go
faster, but I have not been hands-on helping, so maybe it will go faster
when I get time to start helping).

I did convert a fairly large app and it seemed to run faster on the postgres
box then the MSSQL box, so my first project after actually getting the data
and views has gone very smooth.

Everyone has been so helpful on the lists, and that is the only reason I am
where I am so far, so thanks again to everyone who has bothered to answer my
noob questions.

Joel Fradkin
 
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305
 
[EMAIL PROTECTED]
www.wazagua.com
 

 

-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 24, 2005 9:21 AM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] Speeds using a transaction vs not

Joel Fradkin wrote:
> No I did not do it in on transaction (although in .net I never started or
> commited a transaction.

All inserts/updates/etc take place within a transaction with PostgreSQL. 
Some client libraries autocommit for you - you'll need to read the 
documentation.

> ODBC :
> myCommand.CommandText = insertsqltext
> myCommand.ExecuteNonQuery()
> myTrans.Commit()
> 
> .net driver:
> Dim cmd As New NpgsqlCommand(insertsqltext, cnn)
> cmd.ExecuteNonQuery()
> cmd.Dispose()

Both look to me like they are producing one transaction per insert (the 
slowest possible way to bulk-copy data). That's assuming each block of 
commands is within one loop.

Precisely what is happening will be easiest to see by turning statement 
logging on in your postgresql.conf and comparing two runs. The delay 
might be in overheads of setting up the transaction etc. with the ODBC 
driver.

In any case, if bulk-copying data you'll find a huge improvement 
grouping rows together in batches of 100 - 10,000.
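A sketch of the batching idea in plain SQL; the table and columns reuse the tblSuspectedActivity example from earlier in the thread, and since the multi-row VALUES syntax did not exist yet in 8.0, the grouping is done with one explicit transaction around many inserts:

```sql
-- One commit per batch instead of one commit per row.
BEGIN;
INSERT INTO tblSuspectedActivity (ClientNum, ID, Value, IsDeleted, PresentationID)
    VALUES ('FREN', 1, 'example row 1', False, 2);
INSERT INTO tblSuspectedActivity (ClientNum, ID, Value, IsDeleted, PresentationID)
    VALUES ('FREN', 2, 'example row 2', False, 2);
-- ... a few hundred to a few thousand more rows ...
COMMIT;

-- For a one-time migration, COPY is usually faster still:
-- COPY tblSuspectedActivity FROM '/path/to/export.csv' WITH CSV;
```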

--
   Richard Huxton
   Archonet Ltd


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [SQL] Postgres 8 - Database access, new install.

2005-02-24 Thread Joel Fradkin

I had a good install, but I did not install 7.4 when I installed redhat.

It asked for the cd's a couple times when I loaded the RPMS 8.0.1.

I also did not use any firewall or security (it is in a secure environment).

 

Joel Fradkin

___
Disclaimer: Great Ormond Street Hospital for Children NHS Trust

SECURITY WARNING RE: PATIENT OR OTHER CONFIDENTIAL DATA. Please note
that Internet E-mail is simply not a secure communication medium.
We strongly advise that you understand & observe this lack of security
when e-mailing us.

FREEDOM OF INFORMATION ACT 2000: The information contained in this e-mail
may be subject to public disclosure under this Act.  Unless the information
is legally exempt from disclosure, the confidentiality of this e-mail and
your reply cannot be guaranteed.

This email and any files transmitted with it are intended solely for
the use of the individual to whom they are addressed.  If you have
received this email in error please notify your email administrator.

Any views or opinions are solely those of the author of this email
and do not represent those of Great Ormond Street Hospital for
Children NHS Trust unless specifically stated.

VIRUSES: This email message has been checked for the presence of
computer viruses by Sophos antivirus software.  However, this does
not guarantee that this email is free of viruses, and the recipient
should perform their own check. 


Re: [SQL] How do you compare contents of two tables using 2 pk

2005-02-24 Thread Joel Fradkin
Couldn't you do a SELECT * FROM t1 WHERE uniquevar NOT IN (SELECT uniquevar
FROM t2)?
Or do an outer join and select rows where a value from t2 is null.
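Both suggestions can be sketched concretely (t1, t2, and the key column names are placeholders; the subject's two-part primary key means comparing on both columns):

```sql
-- Rows in t1 with no matching row in t2, matching on both primary key
-- columns (NOT EXISTS handles NULLs more safely than NOT IN):
SELECT t1.*
FROM   t1
WHERE  NOT EXISTS (
    SELECT 1 FROM t2
    WHERE  t2.key1 = t1.key1
    AND    t2.key2 = t1.key2
);

-- Equivalent outer-join form: unmatched rows have NULLs on the t2 side.
SELECT t1.*
FROM   t1
LEFT JOIN t2 ON t2.key1 = t1.key1 AND t2.key2 = t1.key2
WHERE  t2.key1 IS NULL;
```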

Joel Fradkin
 



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[SQL] diference in dates in minutes

2005-02-24 Thread Joel Fradkin
Sorry for being redundant (I asked this a while back).
I was looking for the difference in days before and used date math:
date1::date - date2::date returned the days, which was fine for my original
question.
I did get a reply mentioning the age function, and he mentioned I could get
minutes etc.
I looked up age in the docs and did not see it.
Anyone have an example of using it to get the minutes between two date
values, or another method?


Joel Fradkin
 



---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] diference in dates in minutes

2005-02-26 Thread Joel Fradkin
You probably want to convert the dates to timestamps, subtract them to
get an interval, extract the epoch to get time in seconds and then divide
by 60 to get time in minutes.

The converting date to timestamp part isn't trivial. You need to decide
on what you mean when you do this. If you really have timestamps in the
first place, then you can skip the conversion step.

They are dates, and I did find I could do date - date to give me an interval;
date_part('epoch', date - date) returns seconds, so / 60.
This appeared to work ok without converting to timestamps, but maybe I am
missing something if it is not correct, as the example I looked at was a
large difference. The app is analyzing Tlogs and the difference should never
be too large, so I will further analyze it with real data.
As always, I appreciate the help.
My real question: is this an interval then, and will it be deprecated soon?
If so, what is a better way?
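The epoch approach can be sketched like this (tlog, starttime, and endtime are hypothetical names; on timestamp columns the subtraction yields an interval):

```sql
-- Minutes between two timestamp columns, via the interval's epoch:
SELECT EXTRACT(EPOCH FROM endtime - starttime) / 60 AS minutes_between
FROM   tlog;

-- Same idea on literals:
SELECT EXTRACT(EPOCH FROM
       timestamp '2005-02-26 10:45:00' - timestamp '2005-02-26 10:15:00'
       ) / 60;  -- 30
```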

Joel

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] diference in dates in minutes

2005-02-28 Thread Joel Fradkin
Yea, I probably forgot to respond to all. I agree (especially for this topic).

In any case, I have dates not time (dates with times).
I did not use datevar::date - date2::date; I did datevar - datevar2 and it
appeared to work.
Since the dates I was comparing were over a year apart, the number in secs
was hard to verify. As soon as I get to debugging the actual app, where the
time diff will be a few minutes, I will let you know if it worked to do
date_part('epoch', date - date), which returns secs, so / 60.

I appreciate your help and concern; it will be very important to us to
ensure we can do the proper calculations.

Joel Fradkin
 
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305
 
[EMAIL PROTECTED]
www.wazagua.com
 

 

-Original Message-
From: Bruno Wolff III [mailto:[EMAIL PROTECTED] 
Sent: Sunday, February 27, 2005 12:54 AM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] diference in dates in minutes

Discussions along this line should stay on the list so that other people
can learn from and add comments to the discussion.

On Sat, Feb 26, 2005 at 16:57:15 -0500,
  Joel Fradkin <[EMAIL PROTECTED]> wrote:
> You sure?
> I thought date1::date - date2::date returns an integer of days' diff, but
> date - date returns an interval (at least I can do a to_char on it and see
> days, hours etc. that were correct).

Then your "date" column is most likely a timestamp, not a date. That is
what you want anyway if you are trying to get a time difference in minutes.
That wouldn't make much sense for dates.

> Why are they deprecating the ability to look at an interval as a string
> anyhow? Is there an approved method of looking at an interval as a string,
> replacing it?

I think because the current version does some odd things and no one has put
together a spec to replace it. You can get formatted output using EXTRACT
and suitable further manipulation.

> 
> Joel Fradkin
>  
> Wazagua, Inc.
> 2520 Trailmate Dr
> Sarasota, Florida 34243
> Tel.  941-753-7111 ext 305
>  
> [EMAIL PROTECTED]
> www.wazagua.com
>  
> 
>  
> 
> -Original Message-
> From: Bruno Wolff III [mailto:[EMAIL PROTECTED] 
> Sent: Saturday, February 26, 2005 4:16 PM
> To: Joel Fradkin
> Cc: pgsql-sql@postgresql.org
> Subject: Re: [SQL] diference in dates in minutes
> 
> On Sat, Feb 26, 2005 at 15:14:02 -0500,
>   Joel Fradkin <[EMAIL PROTECTED]> wrote:
> > You probably want to convert the dates to timestamps, subtract them to
> > get an interval, extract the epoch to get time in seconds and then
> > divide by 60 to get time in minutes.
> > 
> > The converting date to timestamp part isn't trivial. You need to decide
> > on what you mean when you do this. If you really have timestamps in the
> > first place, then you can skip the conversion step.
> > 
> > They are dates and I did find I could do date - date to give me an
> > interval
> > date_part('epoch',date-date) returns in secs so /60
> 
> date - date won't give you an interval, it will give you an integer of
some
> sort.
> 
> > This appeared to work ok without converting to time stamps, but maybe
> > I am missing it if it is not correct as the example I looked at was a
> > large difference. The app is analyzing Tlogs and the difference should
> > never be too large, so I will further analyze it with real data.
> > As always I appreciate the help.
> > My real question is this an interval then and will it be deprecated
> > soon?
> 
> The Interval type won't be deprecated. Using to_char to convert intervals
> to strings is being deprecated. This won't cause a problem for extract
> or similar functions.
> 


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] diference in dates in minutes

2005-02-28 Thread Joel Fradkin
Sorry, you are correct again; it is timestamp, not date.
So maybe that is why it appeared to work ok.
I will do as you suggest and play around with it before I accept it as a
perfect solution, but it appeared to do what I was looking for (figure the
difference in minutes).


Joel Fradkin
 
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305
 
[EMAIL PROTECTED]
www.wazagua.com
 

 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Bruno Wolff III
Sent: Monday, February 28, 2005 10:25 AM
To: Joel Fradkin
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] diference in dates in minutes

On Mon, Feb 28, 2005 at 09:09:09 -0500,
  Joel Fradkin <[EMAIL PROTECTED]> wrote:
> Yeah, I probably forgot to reply to all. I agree (especially for this topic).
> 
> In any case, I have dates, not time (dates with times).

It really helps if you use precise language when discussing problems.
date, time with time zone, time without time zone, timestamp with time zone,
and timestamp without time zone are all different types.

> I did not use datevar::date - date2::date, I did datevar - datevar2 and it
> appeared to work.

That can not give you a result that is an interval if datevar and datevar2
are actually dates. They must be some other type, probably a timestamp
of some sort.

> Since the dates I was comparing were over a year apart, the number in secs
> was hard to verify. As soon as I get to debugging the actual app, where the
> time diff will be a few minutes, I will let you know if it worked to do
> date_part('epoch', date - date), which returns secs, so /60.

If the date variables are of type timestamp with time zone you should
be OK. You probably want to test comparing dates in different time zones
(if you have different time offsets from GMT at different times of the year
at your locale, e.g. daylight savings vs standard time) to make sure you get
the expected result.



Re: [SQL] psql encoding problem

2005-03-02 Thread Joel Fradkin
I had the same issue using ODBC, but with .NET you can use encoding =
unicode, so it depends on what you are using to make the connection.
Since I am using ODBC with my ASP, I had to switch my database from Unicode
to SQL_ASCII.
In effect that tells the database you don't know about the encoding, and it
makes some of the routines, like upper(), not work properly on extended
chars.
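For reference, the 0xe9 error in the message quoted below comes from sending Latin-1 bytes to a UTF-8 ("UNICODE") database; a small Python sketch of the mismatch:

```python
text = "Soufflé"

latin1_bytes = text.encode("latin-1")
utf8_bytes = text.encode("utf-8")

# In Latin-1 the accented é is the single byte 0xE9 -- exactly the byte the
# server complains about -- while valid UTF-8 encodes it as 0xC3 0xA9.
print(hex(latin1_bytes[-1]))  # 0xe9
print(utf8_bytes[-2:])        # b'\xc3\xa9'

# Decoding the Latin-1 bytes as UTF-8 fails just as the server does.
try:
    latin1_bytes.decode("utf-8")
except UnicodeDecodeError as e:
    print("invalid byte:", hex(latin1_bytes[e.start]))  # invalid byte: 0xe9
```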

Joel Fradkin
 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of T E Schmitz
Sent: Monday, February 28, 2005 3:48 PM
To: pgsql-sql@postgresql.org
Subject: [SQL] psql encoding problem

Hello,

I am trying to insert the following record:

INSERT INTO item (name,retail_price) VALUES  ('Cheese Soufflé',7.95,);

(I presume you see the accented character in *Soufflé*)

psql comes back with "invalid byte sequence for encoding "UNICODE": 0xe9"
If I do this via DbVisualizer, the record is inserted fine.

Is there any way around this problem?

-- 


Regards/Gruß,

Tarlika Elisabeth Schmitz



Re: [SQL] definative way to place secs from epoc into timestamp

2005-03-04 Thread Joel Fradkin
Just so I don't make a newbie mistake: I should use timestamptz, not
timestamp, where the exact moment is important?

My conversion, which is not live yet, is using timestamp, as I did not
clearly understand this (but it should be very easy, I hope, to modify my
app that creates and moves the data to just use timestamptz instead of
timestamp).

Joel Fradkin
 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Andrew - Supernews
Sent: Friday, March 04, 2005 2:15 AM
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] definative way to place secs from epoc into timestamp

On 2005-03-04, Bret Hughes <[EMAIL PROTECTED]> wrote:
>> Unix epoch times correspond to timestamp _with_ time zone.
>> 
>> (Why are you using timestamp without time zone anyway? For recording the
>> time at which an event occurred that usage is simply wrong - in fact I
>> can't see any situation in which a Unix epoch time can correctly be
>> converted to a timestamp without time zone.)
>
> Valid question.  Because there is no reason to keep up with time zones

It's a common mistake to think that just because you don't need to keep
track of time zones that somehow using timestamp without time zone is
correct. It is _not_. "timestamp with time zone" and "timestamp without
time zone" have _very_ different semantics.

One way to look at it is that "timestamp with time zone" designates a
specific instant in absolute time (past or future). It is therefore the
correct type to use for recording when something happened. In contrast,
"timestamp without time zone" designates a point on the calendar, which
has a different meaning according to where you are, and when. So the
latter type crops up in some cases in calendar applications, and also in
input/output conversions, but it's more often than not the _wrong_ type
to use for storage, since the meaning changes with the timezone (and data
_does_ get moved across timezones, whether due to physical relocation or
other factors).

Unix epoch times have the same semantics as "timestamp with time zone".

> and the fact that  I want the same value from the data base that I put
> into it.

"same" in which sense? The same absolute point in time? Or the same point
on a calendar? Obviously if the timezone doesn't change, then the two are
equivalent; but which one is your application actually looking for? (If
your app is using Unix epoch times, then it's looking only at the absolute
time and not the calendar time...)

Here's an example of how it breaks (using your own conversion functions):

test=> set timezone to 'UTC';
SET
test=> insert into ttst values (int2ts(1109916954));
INSERT 887766166 1
test=> select ts,ts2int(ts) from ttst;
 ts  |   ts2int   
-+
 2005-03-04 06:15:54 | 1109916954
(1 row)

(that is the correct UTC time corresponding to 1109916954)

test=> set timezone to 'America/Denver';
SET
test=> select ts,ts2int(ts) from ttst;
 ts  |   ts2int   
-+
 2005-03-04 06:15:54 | 1109942154
(1 row)

test=> set timezone to 'America/New_York';
SET
test=> select ts,ts2int(ts) from ttst;
 ts  |   ts2int   
-+
 2005-03-04 06:15:54 | 1109934954
(1 row)

Notice the value stored in the DB didn't change, but it suddenly means
something different...

In contrast, if you do the same thing with "timestamp with time zone",
then the Unix time that you get back will _always_ be the same, as you
would expect, regardless of the time zone. Using functions identical to
yours except using "with time zone":

test=> insert into tztst values (int2tsz(1109916954));
INSERT 889130554 1
test=> select ts,ts2int(ts) from tztst;
   ts   |   ts2int   
+
 2005-03-04 06:15:54+00 | 1109916954
(1 row)

test=> set timezone to 'America/New_York';
SET
test=> select ts,ts2int(ts) from tztst;
   ts   |   ts2int   
+
 2005-03-04 01:15:54-05 | 1109916954
(1 row)

test=> set timezone to 'America/Los_Angeles';
SET
test=>
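The same round-trip behaviour can be sketched with Python's timezone-aware datetimes, reusing the epoch value 1109916954 from the session above:

```python
from datetime import datetime, timedelta, timezone

epoch = 1109916954  # the example value from the session above

# fromtimestamp with an explicit tz behaves like timestamp WITH time zone:
# the absolute instant is fixed; only its rendering depends on the zone.
ts_utc = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(ts_utc)                    # 2005-03-04 06:15:54+00:00
print(int(ts_utc.timestamp()))   # 1109916954, always, in any display zone

# A naive datetime plays the role of timestamp WITHOUT time zone: the same
# wall-clock reading re-anchored in another zone maps to a different epoch.
naive = ts_utc.replace(tzinfo=None)    # "2005-03-04 06:15:54"
est = timezone(timedelta(hours=-5))    # New York's offset in March 2005
print(int(naive.replace(tzinfo=est).timestamp()))  # 1109934954, as above
```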

[SQL] delphi access question?

2005-03-30 Thread Joel Fradkin
One of my applications is in Delphi 5.
I just went to change it over to Postgres (from MSSQL).
I am using odbc and something a bit odd is happening.
I can run a sql statement ok, even in sql builder I see all the fields.
But the returned result set appears to be missing some of the fields.
So my table is
CREATE TABLE tbltranslations
(
  transnumber int4 NOT NULL,
  clientnum char(4) NOT NULL,
  lastran timestamp,
  lastupdated timestamp,
  firstrowhasheading char(1),
  fixed_delimited char(1),
  tblname varchar(50),
  delimeter char(1),
  textqualifier char(1),
  active bool,
  direction char(1),
  client_filename varchar(250),
  ftp_account int4,
  fixedlenghthascomma char(1),
  ftp_path varchar(250),
  ftp_filename varchar(50),
  fieldname_forid_on_insert varchar(50)
)
but only fields
  transnumber 
  lastran
  lastupdated
  active
  and ftp_account
show up as fields I can add to the result.
Any ideas?


Joel Fradkin



Re: [SQL] delphi access question?

2005-03-30 Thread Joel Fradkin
Yeah, the odd thing is that if I use a table-type object it sees the fields,
so it is only with query objects.

The same odbc works ok with my asp pages.

I will see if I can figure out the zeos stuff.

Something is not translating CHAR values correctly -- all the fields that
you "see" are not char values.

Have no idea off hand *why* this would be ... character encoding differences
maybe ?

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   Joel Fradkin [mailto:[EMAIL PROTECTED]
Sent:   Wed 3/30/2005 12:15 PM
To: pgsql-sql@postgresql.org
Cc: [EMAIL PROTECTED]
Subject:[SQL] delphi access question?
One of my applications is in Delphi 5.
I just went to change it over to Postgres (from MSSQL).
I am using odbc and something a bit odd is happening.
I can run a sql statement ok, even in sql builder I see all the fields.
But the returned result set appears to be missing some of the fields.
So my table is
CREATE TABLE tbltranslations
(
  transnumber int4 NOT NULL,
  clientnum char(4) NOT NULL,
  lastran timestamp,
  lastupdated timestamp,
  firstrowhasheading char(1),
  fixed_delimited char(1),
  tblname varchar(50),
  delimeter char(1),
  textqualifier char(1),
  active bool,
  direction char(1),
  client_filename varchar(250),
  ftp_account int4,
  fixedlenghthascomma char(1),
  ftp_path varchar(250),
  ftp_filename varchar(50),
  fieldname_forid_on_insert varchar(50)
)
but only fields
  transnumber 
  lastran
  lastupdated
  active
  and ftp_account
show up as fields I can add to the result.
Any ideas?


Joel Fradkin



[SQL] getting count for a specific querry

2005-04-08 Thread Joel Fradkin








Per a thread a while back, the discussion was along the lines of serving
data up to the web quickly.

Our app currently pulls a bunch of data into several query pages.

I have tried, and not gotten, the queries to return as fast as they do now,
which is a huge disappointment, as the hardware is twice as powerful as our
current production box. I can get pretty close on some things, but others
are just plain slower.

 

My idea is to use LIMIT and OFFSET to return just the first 50 records; if
they hit next, I can adjust the offset.

My understanding was that this gets slower as you move further into the
data, but we have several options to modify the search, and I do not
believe our clients will page very far into a dataset.

 

One problem I think I will have, though, is that they currently get a count
of the records matching their request, and I would like to keep that as a
display field.

 

So given a table of associates, my default query will be something like:

Select * from tblassoc where clientnum = 'WAZ'
and isdeleted is false

The user could add conditions like: and firstname like 'Joel'

 

Currently it returns all records with a count and a display of the records
you are viewing, like "1-50 of 470"; the next page is 51-100, etc.

Is there a fast way to get the count? Will this concept fly? Also, I am
getting heat that my search is now case sensitive. What is the best way to
get a case-insensitive search? I could use ~*, or perhaps do an
UPPER(firstname) in the select, etc.?

 

Thanks for any ideas here. I have tried playing with various settings and
have not seen my times change much; I will pursue this on the performance
mailing list.
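A rough sketch of the paging idea described above: a LIMIT/OFFSET page plus a separate count query, with lower() for case-insensitive matching. SQLite is used here only because it ships with Python (this much of the syntax matches PostgreSQL); the table and column names come from the message, and the 470 sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblassoc (clientnum TEXT, isdeleted INTEGER, firstname TEXT)")
conn.executemany(
    "INSERT INTO tblassoc VALUES (?, ?, ?)",
    [("WAZ", 0, f"Name{i}") for i in range(470)] + [("XYZ", 0, "Other")],
)

where = "clientnum = ? AND isdeleted = 0"
params = ("WAZ",)

# One query for the visible page, one for the total the UI displays.
total = conn.execute(f"SELECT count(*) FROM tblassoc WHERE {where}", params).fetchone()[0]
page = conn.execute(f"SELECT * FROM tblassoc WHERE {where} LIMIT 50 OFFSET 0", params).fetchall()
print(f"showing 1-{len(page)} of {total}")  # showing 1-50 of 470

# Case-insensitive matching by normalizing both sides with lower()
# (PostgreSQL also offers ILIKE and the ~* operator for this).
hits = conn.execute(
    "SELECT count(*) FROM tblassoc WHERE lower(firstname) = lower(?)", ("name7",)
).fetchone()[0]
print(hits)  # 1
```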

 

 

 

Joel Fradkin

Re: [SQL] getting count for a specific querry

2005-04-08 Thread Joel Fradkin
Thanks all.
I might have to add a button to do the count on command so they don't get
the hit.
I would want it to return the count of the condition, not the currently
displayed number of rows.

Are there any other database engines that provide better performance?
(We just spent 2 months moving to Postgres and it is not live yet, but if I
am going to get results back slower than from my 2-proc box running MSSQL
with 2 gig and 2 processors, I can't see any reason to move to it.)
The Postgres box is a 4-proc Dell with 8 gigs of memory.
I thought I could analyze our queries and our config to optimize.


Joel Fradkin
 





Re: [SQL] getting count for a specific querry

2005-04-08 Thread Joel Fradkin
Believe me, I just spent two months converting our app; I do not wish to
give up on that work. We do a great deal more than count. Specifically, many
of our queries run much slower on Postgres. As mentioned, I purchased a
4-proc box with 8 gigs of memory for this upgrade (Dell may have been a poor
choice, based on comments I have received). Even when I could see a query
like select * from tblassoc where clientnum = 'WAZ' using indexed joins on
location and jobtitle, it was still taking 22 seconds to run, compared to 9
seconds on MSSQL on a 2-proc, 2-gig box. I got one of my queries to run
faster using a page cost of .2, but then the assoc query was running 50
seconds, so I adjusted to a cost of 2 (I tried 1.2, 2, 3, and 4 and did not
see huge changes in the assoc query, except that it did not like .2).

I have placed a call to commandprompt.com and am going to pay for some
support to see if they have anything meaningful to add.

It could be something with my hardware, my hardware config, my postgres
config. I am just not sure. I know I have worked diligently to try to learn
all I can and I used to think I was kinda smart.

I set up the data on 4 10k scsi drives in a powervault and my wal on 2 15k
drives. I am using links to those from the install directory. It starts and
stops ok this way, but maybe it should be different.

I can tell you I am very happy to have this forum, as I could not have
gotten to the point I am at without the many useful comments from folks on
the list. I greatly appreciate everyone who has helped. But the truth is, if
I can't get it to work better than I have, I may have to ditch the effort
and bite the 70K bullet. It's compounded by using 3 developers' time for two
months to yield an answer that my boss may just fire me for. I figured,
since my first test showed I could get data faster on the Postgres box, that
I could with enough study get all our data to go faster, but I am afraid I
have not been very successful.

My failure is not a reflection on Postgres; as you mentioned, it is
definitely great at some things. I have 90-some views, not to mention as
many stored procedures, that have been converted. I wrote an app to move the
data, and it works great. But if it is too slow, I just will not be able to
use it for production.

Joel


Judging postgresql on one single data point (count(*) performance) is
quite unfair.  Unless your system only operates on static data and is
used to mostly do things like counting, in which case, why are you using
a database?

PostgreSQL is a great fit for certain loads, and a poor fit for others. 
Are you going to have lots of people updating the database WHILE the
select count(*) queries are running?  Are you going to be doing other,
more interesting things than simply counting?  If so, you really should
build a test case that emulates what you're really going to be doing
with the system.

I've found that the poor performance of aggregates in PostgreSQL is
generally more than made up for by the outstanding behaviour it exhibits
when under heavy parallel load.  

Note that the basic design of PostgreSQL's MVCC system is such that
without using some kind of trigger to maintain pre-calculated aggregate
information, it will NEVER be as fast as most other databases at doing
aggregates across large chunks of your data.
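The trigger-maintained aggregate mentioned above can be sketched like this (SQLite via Python for a self-contained illustration; PostgreSQL would use a PL/pgSQL trigger function instead, and the table names are borrowed from this thread while the summary table is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblcase (clientnum TEXT, caseid INTEGER);
CREATE TABLE case_counts (clientnum TEXT PRIMARY KEY, n INTEGER);

-- Keep a running per-client row count so "how many cases?" becomes a
-- single-row lookup instead of a full scan.
CREATE TRIGGER case_ins AFTER INSERT ON tblcase
BEGIN
    INSERT OR IGNORE INTO case_counts VALUES (NEW.clientnum, 0);
    UPDATE case_counts SET n = n + 1 WHERE clientnum = NEW.clientnum;
END;

CREATE TRIGGER case_del AFTER DELETE ON tblcase
BEGIN
    UPDATE case_counts SET n = n - 1 WHERE clientnum = OLD.clientnum;
END;
""")

for i in range(100):
    conn.execute("INSERT INTO tblcase VALUES ('WAZ', ?)", (i,))
conn.execute("DELETE FROM tblcase WHERE caseid < 10")

n = conn.execute("SELECT n FROM case_counts WHERE clientnum = 'WAZ'").fetchone()[0]
print(n)  # 90
```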




Re: [SQL] getting count for a specific querry

2005-04-08 Thread Joel Fradkin
As always thanks Tom,

I will definitely look at what I can do.
Since it is a count of records matching a condition, I may not have a way
around it.
I don't think my clients would like me to approximate, as it is a count of
their records. What I plan on doing, assuming I can get all my other
problems fixed (as mentioned, I am going to try to get paid help to see if I
goofed it up somewhere), is make the count a button, so they don't wait
every time but can choose to wait if need be. Maybe I can store the last
count, with a count per day for the generic search it defaults to, and just
have them do a count on demand if they have a specific query. Our screens
have several criteria fields in each application.

Joel Fradkin
 


-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 08, 2005 2:28 PM
To: Bob Henkel
Cc: Scott Marlowe; Joel Fradkin; Andrew Sullivan; pgsql-sql@postgresql.org
Subject: Re: [SQL] getting count for a specific querry 

Bob Henkel <[EMAIL PROTECTED]> writes:
> From a simple/high-level perspective, why is this? That is, why can't
> PostgreSQL do aggregates as well across large chunks of data? I'm
> assuming it is extremely complicated. Otherwise the folks around here
> would have churned out a fix in a month or less and made this issue a
> past story.

You can find very detailed discussions of this in the archives, but
the basic reason is that we have a very general/extensible view of
aggregates (which is how come we can support custom aggregates).
An aggregate is a function that you feed all the input rows to, one
at a time, and then it produces the answer.  Nice, general, extensible,
and not at all optimizable :-(

Now in general that is the only way to do it, and so Scott's implication
that we always suck compared to other databases is really an
overstatement.  Ask another database to do a standard deviation
calculation, for instance, and it'll be just as slow.  However there are
special cases that other DBs can optimize that we don't even try to.
The big ones are:

* COUNT(*) across a whole table --- most non-MVCC databases keep tabs of
the physical number of the rows in the table, and so they can answer
this very quickly.  Postgres doesn't keep such a count, and under MVCC
rules it wouldn't necessarily be the right answer if we had it.
(BTW, count of rows satisfying a particular condition is a different
ballgame entirely; in most cases that can't be optimized at all, AFAIK.)
If you are willing to accept approximate answers there are various
tricks you can use --- see the archives --- but we don't get to fudge
on COUNT(*) itself because it's in the SQL standard.

* MIN or MAX of an indexed column --- most DBs can use an index scan to
find such a row relatively quickly, although whether this trick works or
not depends a whole lot on whether you have WHERE or GROUP BY and just
what those conditions look like.

You can fake the min/max answer in Postgres by doing the transformation
to an indexable query by hand; for instance, instead of MAX(col) do
SELECT col FROM tab ORDER BY col DESC LIMIT 1;

There are periodic discussions in the hackers list about teaching the
planner to do that automatically, and it will probably happen someday;
but it's a complicated task and not exceedingly high on the priority list.

regards, tom lane
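Tom's hand-written MAX transformation, sketched against a throwaway SQLite table via Python (the rewrite has since been automated in later PostgreSQL releases; the table and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (col INTEGER)")
conn.executemany("INSERT INTO tab VALUES (?)", [(v,) for v in (3, 41, 7, 19)])
conn.execute("CREATE INDEX tab_col ON tab (col)")

# The hand-rewritten form of MAX(col): walk the index backwards one step.
row = conn.execute("SELECT col FROM tab ORDER BY col DESC LIMIT 1").fetchone()
print(row[0])  # 41

# Same answer as the aggregate form.
assert row[0] == conn.execute("SELECT max(col) FROM tab").fetchone()[0]
```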




Re: [SQL] getting count for a specific querry

2005-04-08 Thread Joel Fradkin








I have asked specific questions and paid
attention to the various threads on configuration.

I will take my config files and post them on the performance thread; that is
a good suggestion (personally I have more faith in this forum than in a paid
consultant, but at this point I am willing to try both).

 

Thanks again.

 

The count thing I can get around using
stored results and on demand counting, but some of my statistical reporting is
just a must have.

I enclosed one of my views, I realize to
get help I should also include tables and indexes etc, and maybe I will do that.

It is just there are so many of them. This
one in particular did not run at all when I first got my data loaded.

I ended up adding a few indexes and not
sure what else and got it to run faster on postgres.

Now it is running horribly, so I am back to the drawing board: I change one
thing and something else breaks.

I am just frustrated, maybe Monday I will
have better strength to figure it all out.

 



Joel Fradkin

CREATE OR REPLACE VIEW  viwcasecube
as
/*customer 1*/
select c.clientnum,c.casenum,c.casereferencenum, coalesce
((select sum(quantity * amount) 
   as merchandiseamount
   from tblmerchandise m
   where  m.caseid = c.caseid and m.clientnum=c.clientnum), 0) 
|| coalesce(c.totaladmitted, 0) as casevalue, 
coalesce(c.totaladmitted, 0) as admitted, 
coalesce(c.totalrecovery, 0) as recovered, 
coalesce(c.ageatcase, 0) as ageatcase,
1 as numberofcase, coalesce(ct.value,'na') as casetype, 
s.value as status, c.opendate,  c.closedate, 
a.value as action, u2.completename as closebyuser, 
cs.value as casesource, m.value as method, m2.value as method2, 
c.reportingagentfirstinitial, c.reportingagentlastname, 
case when c.monthsemployedatcase is null then 'na'
else cast(c.monthsemployedatcase as varchar(3))
end  as monthsemployedatcase,
u1.completename as createdby, st.value as subjecttype,
ot.value as offensetype, 
 /*cust*/  custpt.value as patrontype, 
'na'  as jobtitle,
0 as testscore,
coalesce(cust.firstname,'na') as firstname,
coalesce(cust.lastname,'na') as lastname,
coalesce(cust.address,'na') as address,
coalesce(cust.city,'na') as city,
coalesce(cust.state,'na') as state,
coalesce(cust.zip,'na') as zip,

coalesce(crtt.value,'na') as restitutiontype,
/*
type of restitution tracking
*/

coalesce(tblsex.value,'na') as gender,
coalesce(eth.value,'na') as ethnicity,
custmbt.value as militarybranch,
custmst.value as militarystatus,
coalesce(secagentnum,'not recorded') as secagentnum,

l.locationnum, l.name as store,

coalesce(l.address,'na') as locationaddress,
coalesce(l.city,'na') as locationcity,
coalesce(l.state,'na') as locationstate,
coalesce(l.zip,'na') as locationzip,

d .districtnum, 
d .districtname as district, r.regionnum, 
r.regionname as region, dv.divisionnum, 
dv.divisionname as division, 
case when c.apprehdate is null then c.opendate
else c.apprehdate
end  as apprehdate,
 to_char( coalesce(c.apprehdate,c.opendate),'') as year,

  to_char( coalesce(c.apprehdate, c.opendate),'q') as 
quarter,

  to_char( coalesce(c.apprehdate, c.opendate),'MM') as 
month,

  to_char( coalesce(c.apprehdate, c.opendate),'D') as 
weekday,

  to_char( coalesce(c.apprehdate, c.opendate),'WW') as week,

  to_char( coalesce(c.apprehdate, c.opendate),'HH24:MI') as 
time,

coalesce(c.sourcedocnum,'none') as sourcedocnum,
   case coalesce(c.sourcemodule,'n') 
 when 'n' then 'none'
 when 'i' then 'incident'
 when 'g' then 'general investigation'
 when 'e' then 'employee investigation'
   else 'none'
   end as sourcemodule,
case coalesce(tblcase1.clientnum, 'no') || coalesce(cdacase.clicasenumber, 
'no')
when 'nono' then 'no' else 'yes' end as civilcase,
coalesce(lpr.lpregionnum,'na')as lpregionnum,coalesce(lpr.managername,'na') 
as lpmanager
from tblcase c left outer join
tblaction a on c.actionid = a.id and c.clientnum = a.clientnum   and 1= 
a.presentationid left outer join 
tblmethod m on c.methodid = m.id and c.clientnum = m.clientnum and 1= 
m.presentationid  left outer join
tblmethod m2 on c.methodid2 = m2.id  and c.clientnum = m2.clientnum and 1= 
m2.presentationid left outer join
tblcasesource cs on  c.casesourceid = cs.id  and c.clientnum = cs.clientnum 
 and 1= cs.presentationid
inner join
tbll

Re: [SQL] getting count for a specific querry

2005-04-08 Thread Joel Fradkin
I will also look at doing it the way you describe, they do have wide
liberty. Thanks so much for the ideas. Sorry I did not do a perusal of the
archives first (I normally try that, but think I am brain dead today).

Joel Fradkin
 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Mischa Sandberg
Sent: Friday, April 08, 2005 2:40 PM
To: Scott Marlowe
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] getting count for a specific querry

Quoting Scott Marlowe <[EMAIL PROTECTED]>:

> On Fri, 2005-04-08 at 12:08, Joel Fradkin wrote:
> > I might have to add a button to do the count on command so they don't
get
> > the hit.
> > I would want it to return the count of the condition, not the currently
> > displayed number of rows.
> 
> Judging postgresql on one single data point (count(*) performance) is
> quite unfair.  Unless your system only operates on static data and is
> used to mostly do things like counting, in which case, why are you using
> a database?

For the general discussion of slowness of count(*),
and given no entry on the subject in 
   http://www.postgresql.org/docs/faqs.FAQ.html
... I guess everyone has to be pointed at:
 http://archives.postgresql.org/pgsql-hackers/2005-01/msg00247.php

However, the gist of this person's problem is that an adhoc query,
NOT just a 'select count(*) from table', can take remarkably long.
Again, the problem is that PG can't just scan an index.
--
One workaround for this is to use EXPLAIN.
THIS APPLIES TO CASES WHERE THE USER HAS WIDE LIBERTY IN QUERIES.
It's pointless overhead, otherwise.

default_statistics_target is cranked up to 200 on all such tables,
and pg_autovacuum is running. (If there were anything to improve,
it would be refining the thresholds on this).

If the "(cost...rows=" string returns a number higher than the 
QUERY row limit, the user is derailed ("That's not specific enough to answer
immediately; do you want an emailed report?").

Otherwise, it runs EXPLAIN ANALYZE, which is still faster than the query
itself.
If the "(actual...rows=...)" is higher than the RESULT row limit (PAGE
limit).

It then runs the query, with the PAGE rows offset and limit --- and happily,
practically everything that query needs is now in shared_buffers.
The count from the EXPLAIN analyze is displayed in the web page.
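The row counts Mischa reads out of EXPLAIN output can be parsed with a small regex; a hedged sketch, where the plan line below is invented example text shaped like PostgreSQL's output, not from a real run:

```python
import re

# A plan line shaped like PostgreSQL's EXPLAIN ANALYZE output
# (invented example text, not from a real run).
plan = ("Seq Scan on tblassoc  (cost=0.00..1432.10 rows=4817 width=120) "
        "(actual time=0.04..211.30 rows=470 loops=1)")

def estimated_rows(plan_line: str) -> int:
    """Planner estimate: the rows= inside the (cost=...) group."""
    return int(re.search(r"cost=[^)]*\brows=(\d+)", plan_line).group(1))

def actual_rows(plan_line: str) -> int:
    """EXPLAIN ANALYZE's measured count: rows= inside the (actual ...) group."""
    return int(re.search(r"actual[^)]*\brows=(\d+)", plan_line).group(1))

print(estimated_rows(plan))  # 4817 -> compare against the QUERY row limit
print(actual_rows(plan))     # 470  -> the count shown on the web page
```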

-- 
"Dreams come true, not free." -- S.Sondheim, ITW




Re: [SQL] getting count for a specific querry

2005-04-08 Thread Joel Fradkin
I turned off hyperthreading (I saw on the list that it did not help on
Linux).

I am using a pretty lightweight Windows box: an Optiplex with an IDE drive,
750 meg, and a 2.4 GHz CPU.

My desktop has 2 gig, so it might not be a bad idea to try it locally (I
have it installed), but methinks it is not totally a hardware issue for us.

Joel Fradkin
 

Do you run your 2650s with hyperthreading on?  I found that slowed mine
down under load, but we never had more than a couple dozen users hitting
the db at once, so we may well have had a different load profile than
what you're seeing.



[SQL] trying to do an update a bit confused.

2005-04-19 Thread Joel Fradkin








update tblcase set merchandisetotal =
(
COALESCE(( SELECT sum(m.quantity::numeric * m.amount) AS merchandiseamount
   FROM tblmerchandise m
  WHERE m.caseid = tblcase.caseid
    AND m.clientnum::text = tblcase.clientnum::text), 0.0)
)
 

I tried running the above, and it wants to do a sum over all the records in
tblcase instead of just the one being updated. What am I doing wrong?
 

Joel Fradkin
Re: [SQL] trying to do an update a bit confused.

2005-04-19 Thread Joel Fradkin









I am not updating 1 record.

I have: WHERE m.caseid = tblcase.caseid AND m.clientnum::text =
tblcase.clientnum::text

This should do the aggregate on the record that is being updated (at least
as I understood it).

It should update every record in tblcase with either 0, if there are no
merchandise records, or the sum of the merch records for the case being
updated.
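A correlated-subquery UPDATE of this shape can be sketched in a throwaway SQLite database via Python (the tables mirror the names in this thread; the sample rows are invented). Note the statement is valid without a WHERE clause: it simply updates every row, each with its own per-row sum.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblcase (clientnum TEXT, caseid INTEGER, merchandisetotal NUMERIC);
CREATE TABLE tblmerchandise (clientnum TEXT, caseid INTEGER,
                             quantity INTEGER, amount NUMERIC);
INSERT INTO tblcase VALUES ('WAZ', 1, NULL), ('WAZ', 2, NULL);
INSERT INTO tblmerchandise VALUES ('WAZ', 1, 2, 9.95), ('WAZ', 1, 1, 5.00);
""")

# No WHERE on the UPDATE itself, so every tblcase row is touched -- but the
# correlated subquery still computes a per-row sum: case 1 gets
# 2*9.95 + 1*5.00 = 24.9, and case 2, with no merchandise rows, gets 0.0.
conn.execute("""
UPDATE tblcase SET merchandisetotal = COALESCE(
    (SELECT sum(m.quantity * m.amount)
       FROM tblmerchandise m
      WHERE m.caseid = tblcase.caseid AND m.clientnum = tblcase.clientnum),
    0.0)
""")

rows = conn.execute(
    "SELECT caseid, merchandisetotal FROM tblcase ORDER BY caseid").fetchall()
print(rows)
```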

 

Joel

 

I also never heard back on the merge-join issue (why do I need to turn
merge joins off on one machine, while it runs faster on the other?).

I am guessing it is memory- and config-related, but I am about to give up on
Postgres, as I am just not getting all my views to run fast enough.

You guys could say my views are bad SQL design, etc., but they run fine in
MSSQL. I don't mind visiting each one to make them better, but I am just not
able to increase the speed on all of them. The last one, about assoc,
finally did run in 3 secs with merge joins off, which is pretty fast, but
now I have others that seem pretty simple to me and yet run very slow.

 

 



You're most probably missing a WHERE clause after the parenthesis; see:





-Original
Message-
From: Joel Fradkin
[mailto:[EMAIL PROTECTED]
Sent: Dienstag, 19. April 2005
16:06
To: pgsql-sql@postgresql.org
Subject: [SQL] trying to do an
update a bit confused.

update tblcase set merchandisetotal =
(
COALESCE(( SELECT sum(m.quantity::numeric * m.amount) AS merchandiseamount
   FROM tblmerchandise m
  WHERE m.caseid = tblcase.caseid
    AND m.clientnum::text = tblcase.clientnum::text), 0.0)
)
WHERE ;

 

 

I tried running the above, and it wants to do a sum over all the records in
tblcase instead of just the one being updated. What am I doing wrong?

 

Joel Fradkin

[SQL] odd error

2005-05-02 Thread Joel Fradkin








I am getting an odd error.

The same data loaded on Windows does not give it, and even going from a
client over the VLAN does not give it, so I am guessing it is based on
timing or something.

Microsoft Cursor Engine error '80004005'

Data provider or other service returned an E_FAIL status.

/app/searchlist.asp, line 1113

 

 

Joel Fradkin



 










Re: [SQL] Function or Field?

2005-05-02 Thread Joel Fradkin
You could also make a table with just that data in it, so you don't have the
field in all the records and you don't have to check all the records to see
what is next.

I am assuming this is some kind of flag value used in a batch; if you
just need the last id, I use max(id)+1.

Joel Fradkin
 
 

 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of [EMAIL PROTECTED]
Sent: Monday, May 02, 2005 3:17 PM
To: pgsql-sql@postgresql.org
Subject: [SQL] Function or Field?

Hi.
What is the better way to store the last record for a transaction?
I.e.:
 The data for the last product vendding (sale).
 What is better:
 a) Create a field in "product" table and create a Trigger (before insert or
update into vendding table) to alter this field.
 b) Create a view or function that check the all venddings (in vendding
table)
for the specified product and return the last vendding information?

 a)
 CREATE TABLE products(
  id serial primary key,
  description varchar(50),
  last_vendding date -- Is it correct to use this field?
 );
 CREATE TABLE vendding(
  id serial primary key,
  date_ date,
  product integer references products (id)
 );
 CREATE TRIGGER tg_change_products_last_vendding_field
 BEFORE INSERT OR UPDATE ON vendding
 FOR EACH ROW EXECUTE PROCEDURE change_products_last_vendding();

 b)
  CREATE TABLE products (
   id serial primary key,
   description varchar(50)
  );
  CREATE TABLE vendding(
   id serial primary key,
   date_ date,
   product integer references products (id)
  );
  CREATE VIEW last_product_change AS
   SELECT * FROM vendding ORDER BY date_ DESC LIMIT 1;
  -- Okay, this view will return the last record overall and not the last
  -- record for a product... but it's an example.

I am asking because I used CLIPPER (dBase) for my old programs, and in
dBase a view/check function that is evaluated on every select is not
workable. I had to create a field in all the referencing tables, but in
dBase those fields always broke and I had to recheck them.

Thank you.
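For what it's worth, option (a) can be sketched end to end. The example below uses SQLite via Python purely for illustration — in Postgres the trigger body would live in a PL/pgSQL function — and the table/column names follow the post; the data is invented.

```python
import sqlite3

# Option (a): a trigger keeps products.last_vendding in sync with the
# latest sale row. SQLite stands in for Postgres here; sketch only.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE products (
  id INTEGER PRIMARY KEY,
  description TEXT,
  last_vendding TEXT           -- denormalized copy of the latest sale date
);
CREATE TABLE vendding (
  id INTEGER PRIMARY KEY,
  date_ TEXT,
  product INTEGER REFERENCES products(id)
);
CREATE TRIGGER tg_last_vendding AFTER INSERT ON vendding
BEGIN
  UPDATE products SET last_vendding = NEW.date_ WHERE id = NEW.product;
END;
""")
con.execute("INSERT INTO products (id, description) VALUES (1, 'widget')")
con.execute("INSERT INTO vendding (date_, product) VALUES ('2005-05-01', 1)")
con.execute("INSERT INTO vendding (date_, product) VALUES ('2005-05-02', 1)")
print(con.execute("SELECT last_vendding FROM products WHERE id = 1").fetchone()[0])
```

The trade-off is the classic one: the trigger pays a small cost on every insert but makes the read free, while option (b) keeps writes cheap and pays on every select.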

---(end of broadcast)---
TIP 8: explain analyze is your friend




[SQL] encoding

2005-05-06 Thread Joel Fradkin








I am not clear on what makes this work.

 

I am going live Sunday and have thus far been using SQL_ASCII.

I still have the feeling I should be using something else,
but when I used Unicode my conversion from MSSQL blew up with an encoding error on
any char that wasn't plain ASCII (i.e. French or Spanish characters, etc.).

The conversion program ran fine when I chose SQL_ASCII.

The data is on a Windows 2k MSSQL database.

The conversion program is a .NET app that reads from the MSSQL
Win2k database and writes to Postgres 8.0.2 on Red Hat AS4.

I tried to get this to work a while back and found that if I set
the encoding explicitly on the connect in the .NET connector it did not give me
an error, but I still have many ASP pages and I could not get the encoding set
using ODBC (which is what the ASP pages use).

I believe I tried setting it with a SQL statement, but that
did not seem to help either.

I can move forward using SQL_ASCII, but would prefer to
have the encoding set properly; I am just not sure what I am missing.

 

Any ideas would be much appreciated.

 

Joel Fradkin



 




 

 








[SQL] getting duplicate number is there a

2005-05-16 Thread Joel Fradkin








I was using SET TRANSACTION
ISOLATION LEVEL SERIALIZABLE in MSSQL.

 

Is there something similar
in postgres to ensure it's not in the middle of being updated?

 

sql = "SELECT COUNT(*) FROM tblcase WHERE LocationID = " & intLocationID & _
      " and substr(casenum,length(casenum)-1,2) = '" & right(year(date),2) & "' AND clientnum = '" & _
      chrClientNum & "'"

 

I will add a select just before doing the insert to see if
this helps; it's not happening a lot, but 5 or 6 times a day is still an issue
for me.

I use the count as a segment of my case number, so each time
a new case is entered the count goes up for that location for that year.

 

Joel Fradkin



 







 

 








Re: [SQL] getting duplicate number is there a

2005-05-17 Thread Joel Fradkin
I actually had the same thought (a counter table; I might be able to add
fields to the location table, but we have several applications, and case is just
an example). I agree that is probably the safest way, and it also fixes
another issue I have been having when a user wants to transfer a case to
another location.

I appreciate the ideas. I could probably safely lock the numbering table,
whereas I would be afraid of locking the case table.

Joel Fradkin
 

I'd be tempted to have a case_numbers table with (year,location,max_num) 
and lock/read/insert to that. Makes everything explicit, and means you 
don't have to mess around with counts/substrings.
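The counter-table suggestion might look like the following sketch. Python's sqlite3 stands in for Postgres — there you would take a row lock (e.g. SELECT ... FOR UPDATE) inside the transaction — and the table/column names are the ones Richard proposed; everything else is invented.

```python
import sqlite3

# Sketch of a case_numbers counter table: allocate the next number for a
# (year, location) pair inside one transaction, so two sessions can never
# hand out the same number. SQLite used for illustration only.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE case_numbers (
    year INTEGER, location INTEGER, max_num INTEGER,
    PRIMARY KEY (year, location))""")
con.execute("INSERT INTO case_numbers VALUES (2005, 100, 0)")

def next_case_num(con, year, location):
    # Increment and read happen atomically within the transaction.
    with con:  # commits on success, rolls back on error
        con.execute(
            "UPDATE case_numbers SET max_num = max_num + 1 "
            "WHERE year = ? AND location = ?", (year, location))
        return con.execute(
            "SELECT max_num FROM case_numbers WHERE year = ? AND location = ?",
            (year, location)).fetchone()[0]

print([next_case_num(con, 2005, 100) for _ in range(3)])  # → [1, 2, 3]
```

Unlike COUNT(*)-based numbering, the allocated numbers survive case deletions and transfers, which also addresses the transfer issue mentioned above.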

--

   Richard Huxton
   Archonet Ltd




[SQL] encoding question

2005-06-22 Thread Joel Fradkin








Hi,

 

I am trying to convert from SQL_ASCII to UNICODE.

 

I have a program that will read from a table in one database
and write to a table in a different database.

I am hoping this is all I need to do (one database is SQL_ASCII and
the other is UNICODE).

I get a byte sequence error when writing.

I tried using encoding=UNICODE and did not get the error,
but the data looked different stored in the field, so I am guessing it
mangled it.

I am using NpgsqlConnection.

Anyone know the proper way to convert using a .NET app?
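One hedged guess at what is going on: SQL_ASCII happily stored Windows-1252 bytes, which are not valid UTF-8, hence the byte sequence error on write. A sketch of the usual workaround — shown in Python rather than .NET, and the assumed legacy code page (cp1252) is a guess — is to decode with a fallback before writing to the UNICODE database.

```python
# Convert raw bytes pulled from the SQL_ASCII side into proper text:
# keep anything that is already valid UTF-8, and assume a Windows code
# page (cp1252 here, an assumption) for anything that is not.

def to_utf8(raw: bytes, legacy="cp1252") -> str:
    try:
        return raw.decode("utf-8")       # already valid UTF-8: keep as-is
    except UnicodeDecodeError:
        return raw.decode(legacy)        # legacy bytes (French/Spanish accents)

print(to_utf8("café".encode("utf-8")))   # → café
print(to_utf8("café".encode("cp1252")))  # → café
```

The same decode-with-fallback step could live in the .NET conversion app before the insert, so the UNICODE database only ever sees clean text.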

 

Joel Fradkin



 






 




 

 








[SQL] dates and selection

2005-08-18 Thread Joel Fradkin








After my conversion to Unicode and implementing new drivers
(thank God, still up, no downtime) I do have an
issue with some of my SQL selects concerning dates.

 

I know the long answer, but am asking if there is a short
one.

 

I have in some of my SQL:

(to_char(e.incidentdate, 'Mon DD '::text) || ' '::text) || e.incidenttime::text as incidentdate

 

I used to be able to sort and select by incident date and it
was working ok (I think).

 

Now I found I had to do something like this just to have a
timestamp (the problem is I do not want the format of the timestamp; my clients
want to see the month as a string):

((to_char(e.incidentdate, 'Mon DD '::text) || ' '::text) || e.incidenttime::text)::timestamp as datetoselectby

 

Is there any way to reference the text type variable as a
date selection? (was this ever working or was I hallucinating).
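For what it's worth, the usual split is to keep a real timestamp for sorting and filtering (in Postgres, adding a date and a time yields a timestamp) and apply the 'Mon DD' formatting only for display. A small Python sketch of that idea, with invented sample rows:

```python
from datetime import date, time, datetime

# Sort/filter on the real datetime value; format month-as-string only at
# display time. Sample incident rows are invented for illustration.
incidents = [
    (date(2005, 8, 1), time(14, 30)),
    (date(2005, 1, 14), time(9, 0)),
]
rows = sorted(datetime.combine(d, t) for d, t in incidents)
display = [ts.strftime("%b %d %Y %H:%M") for ts in rows]
print(display)  # → ['Jan 14 2005 09:00', 'Aug 01 2005 14:30']
```

Sorting on the to_char() text would order 'Apr' before 'Jan'; sorting on the real value and formatting afterward avoids that.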

 

Many thanks for all the help.

 

Joel Fradkin



 



 








[SQL] nevermind answered my own question by looking at my question what a DOH!

2005-08-18 Thread Joel Fradkin








select * from viwEmpIncCube where clientnum = 'MSI' and Incidentdate::timestamp between '01/01/2005' and '08/18/2005 23:59'

works fine.

 

Joel Fradkin



 






 




 

 








Re: [SQL] SQLException - SET AUTOCOMMIT TO OFF is no longer supported

2005-08-24 Thread Joel Fradkin








I am getting ready to start using JBoss
with Postgres (newer to JBoss than Postgres).

Is there a mailing list for Java Postgres
support?

Can you post the connection elements for
JBoss, or is that something documented in the JDBC driver?

 



Joel Fradkin



 






 




 



-Original Message-
From:
[EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Dinesh Pandey
Sent: Wednesday, August 24, 2005
7:22 AM
To: 'PostgreSQL'
Subject: Re: [SQL] SQLException -
SET AUTOCOMMIT TO OFF is no longer supported

 

After using the correct version of the JDBC driver I am able to start the JBoss server.



 

Thanks
Dinesh Pandey













 

I have created a datasource but am
getting this error on JBoss startup.

 

Using: PostgreSQL 8.0

JBoss: JBOSS-3.2.6

 

16:09:37,093 WARN  [TransactionImpl] XAException: tx=TransactionImpl:XidImpl [FormatId=257, GlobalId=dinesh//1, BranchQual=] errorCode=XA_UNKNOWN(0)
org.jboss.resource.connectionmanager.JBossLocalXAException: Error trying to start local tx: ; - nested throwable: (org.jboss.resource.JBossResourceException: SQLException; - nested throwable: (java.sql.SQLException: ERROR:  SET AUTOCOMMIT TO OFF is no longer supported))
    at org.jboss.resource.connectionmanager.TxConnectionManager$LocalXAResource.start(TxConnectionManager.java:654)
    at org.jboss.tm.TransactionImpl.startResource(TransactionImpl.java:1196)
    at org.jboss.tm.TransactionImpl.enlistResource(TransactionImpl.java:649)

 

Thanks
Dinesh Pandey

 








[SQL] High level discussion for design of using ACL to retrieve Data

2005-11-10 Thread Joel Fradkin








We currently use a system with 4 tables (division, region,
district, location).

The data for the most part has a field named location id.

The users get a level and a location id (if they are district
level, it would represent a district id, etc.).

 

I have been asked to make this more flexible, for example if
the user needs data for two locations but they are not in the same district.

 

One thing I have had to add was the ability to have a user
have access to a location and then also have access to a related location (main
and auto center have different location number, but if the user has access to
the main location he can also see auto center data).

 

I did this with an xref table and a union, but it seems pretty
slow.

So adding even more flexibility, like multiple districts, locations,
etc. (I was thinking of trying some kind of grouping that would encompass
our current plan), has been a problem I have thought about a lot, but I have not
figured out a way that will give fast access.

I could do groups of access rights and do unions with
distinct to get the data, but I fear that would be really slow.
 

Any one have ideas on this subject?

 

Thanks in advance.
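One possible shape, sketched below with entirely invented names and data: flatten every grant a user has (direct locations, whole districts, and the xref-related locations) into a single effective location set up front, so the data query becomes one IN-list or join instead of stacked UNIONs with distinct.

```python
# Hedged sketch of the "groups of access rights" idea. All names and data
# here are illustrative, not the production schema.
locations = {10: "D1", 11: "D1", 20: "D2"}           # locationid -> districtid
auto_xref = {10: 15}                                  # main location -> auto center
grants = [("joel", "district", "D1"), ("joel", "location", 20)]

def effective_locations(user):
    locs = set()
    for u, kind, val in grants:
        if u != user:
            continue
        if kind == "location":
            locs.add(val)
        elif kind == "district":
            locs |= {lid for lid, d in locations.items() if d == val}
    # fold in the xref-related auto-center locations once, up front
    locs |= {auto_xref[l] for l in locs if l in auto_xref}
    return locs

print(sorted(effective_locations("joel")))  # → [10, 11, 15, 20]
```

In the database this flattened set could live in a per-user grant table maintained by the application (or a trigger), so viwcaselist is filtered with a single `locationid IN (...)` rather than a UNION per access path.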

 

Joel Fradkin



 




 

 








Re: [SQL] High level discussion for design of using ACL to retrieve Data

2005-11-11 Thread Joel Fradkin








I had an off-list inquiry for more details,
so I thought I would post my response.

This is a select using the union for the auto xref.

 

SELECT distinct * FROM (
  select * from viwcaselist where clientnum = 'SEA' AND DivisionID = 100 And isdeleted=false
  union all
  select * from viwcaselist where clientnum = 'SEA' And isdeleted=false
    and MainLocationID IN (SELECT AutoLocationID FROM tblSearsAutoXref
      WHERE (LocationID in (select MainLocationID from viwLocationWDivRegDis
        where clientnum = 'SEA' AND DivisionID = 100)))
) DERIVEDTBL Order BY OpenDate Desc, CaseNum limit 51

 

Normally it is more like:

 

select * from viwcaselist where clientnum = 'WAZ' AND DivisionID = 100 And isdeleted=false Order BY OpenDate Desc, CaseNum limit 51

 

We have location, district, region, and division tables.

AND DivisionID = 100 is the key for getting the records (this changes
based on the user level; it can be district, region, division, or location, or all
records for company level).

What I would like to see is the ability to have more types of access to
the data than one level, but as you can see from the xref I implemented, it runs
super slow compared to a straight select. I made a view viwLocationWDivRegDis
that ties the four tables (div, reg, dist, loc) together with joins.

 

Our location table :

CREATE TABLE tbllocation
(
  clientnum varchar(16) NOT NULL,
  locationid int4 NOT NULL,
  districtid int4 NOT NULL,
  regionid int4 NOT NULL,
  divisionid int4 NOT NULL,
  locationnum varchar(8),
  name varchar(50),
  clientlocnum varchar(50),
  address varchar(100),
  address2 varchar(100),
  city varchar(50),
  state varchar(2) NOT NULL DEFAULT 'zz'::character varying,
  zip varchar(10),
  countryid int4,
  phone varchar(15),
  fax varchar(15),
  payname varchar(40),
  contact char(36),
  active bool NOT NULL DEFAULT true,
  coiprogram text,
  coilimit text,
  coiuser varchar(255),
  coidatetime varchar(32),
  ec_note_field varchar(1050),
  locationtypeid int4,
  open_time timestamp,
  close_time timestamp,
  insurance_loc_id varchar(50),
  lpregionid int4,
  sic int4,
  exportentity varchar(16),
  CONSTRAINT pk_tbllocation PRIMARY KEY (clientnum, locationid),
  CONSTRAINT ix_tbllocation_1 UNIQUE (clientnum, locationnum, name),
  CONSTRAINT ix_tbllocation_unique_number UNIQUE (clientnum, divisionid, regionid, districtid, locationnum)
)
WITH OIDS;

 

 

CREATE OR REPLACE VIEW viwlocationwdivregdis AS
 SELECT tbllocation.clientnum, tbllocation.locationid, tbllocation.locationnum,
        tbllocation.name, tbldivision.divisionnum, tbldivision.divisionname,
        tblregion.regionnum, tblregion.regionname, tbldistrict.districtnum,
        tbldistrict.districtname, tbllocation.locationid AS mainlocationid,
        tbllocation.divisionid, tbllocation.regionid, tbllocation.districtid,
        tbllocation.phone, tbllocation.contact, tbllocation.active,
        tbldistrict.active AS distactive, tblregion.active AS regactive,
        tbldivision.active AS divactive
   FROM tbllocation
   JOIN tbldivision ON tbllocation.clientnum::text = tbldivision.clientnum::text
        AND tbllocation.divisionid = tbldivision.divisionid
   JOIN tblregion ON tbllocation.clientnum::text = tblregion.clientnum::text
        AND tbllocation.regionid = tblregion.regionid
        AND tbllocation.divisionid = tblregion.divisionid
   JOIN tbldistrict ON tbllocation.clientnum::text = tbldistrict.clientnum::text
        AND tbllocation.regionid = tbldistrict.regionid
        AND tbllocation.divisionid = tbldistrict.divisionid
        AND tbllocation.districtid = tbldistrict.districtid;

 

Any ideas would be great!

 

 

 



Joel Fradkin



 



 




[SQL] recursive query

2007-09-05 Thread Joel Fradkin
Hi,

I am trying to do a hierarchy design. We currently have it hard-coded to 4
levels, and the location table has an id field for each level, where the data
has a location id.

This allows easy access by level (to restrict who sees what), and it is easy to
aggregate for reporting by division, region, district, and location.

I am trying to make it more flexible: some companies don't use 4 levels, some
want more, and some have different sets of users and would like their own
hierarchy.

My idea is to have a virtual hierarchy (all share the location record, which
is the lowest level, plus a separate file linked through a table
that has the locationid and the hierarchy id, to enable multiple hierarchies).

 I did a bit of research and found some code to do recursive SQL (but it was
IBM's, and I am not sure I even got it right for Postgres).

 

create or replace temp view test (clientnum, id, parentid, descr, level) as (
  select h.clientnum, h.id, h.parentid, h.descr, 0 as level
    from tblhyerarchy h where parentid = 0
  union all
  select h1.clientnum, h1.id, h1.parentid, h1.descr, h.level + 1 as level
    from tblhyerarchy h1
    inner join test h on h1.parentid = h.id
    where h.level < 5
);
select * from test

 

but get a

ERROR: infinite recursion detected in rules for relation "test" (SQL state: 42P17)

 

I am basing on a table created with

CREATE TABLE tblhyerarchy
(
  clientnum character varying(16) NOT NULL,
  id integer NOT NULL,
  parentid integer NOT NULL,
  descr character varying(250),
  CONSTRAINT pk_tblhyerarchy PRIMARY KEY (clientnum, id)
)
WITH OIDS;

 

 

Any help is much appreciated. I am also a little confused about how to do a
crosstab on the output.

In the end I will want to get at a data record that has the location id and
then link to the hierarchy, with each level of the hierarchy present for
aggregation etc.
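The infinite-recursion error is expected: a plain CREATE VIEW rule cannot recurse, and recursive CTEs (WITH RECURSIVE) were only added in PostgreSQL 8.4. Until then, one option is to walk the adjacency table client-side. A sketch in Python, with invented sample rows shaped like tblhyerarchy:

```python
from collections import deque

# Breadth-first walk over (id, parentid) adjacency rows, computing the
# level of each node from the parentid = 0 roots. Sample data only.
rows = [
    (1, 0, "Division A"),
    (2, 1, "Region A1"),
    (3, 2, "District A1a"),
    (4, 2, "District A1b"),
]
children = {}
for node_id, parent, _descr in rows:
    children.setdefault(parent, []).append(node_id)

levels = {}
queue = deque((root, 0) for root in children.get(0, []))
while queue:
    node, level = queue.popleft()
    levels[node] = level
    queue.extend((child, level + 1) for child in children.get(node, []))

print(levels)  # → {1: 0, 2: 1, 3: 2, 4: 2}
```

The per-node levels computed this way are also exactly what a crosstab needs: one column per level, one row per leaf location, built by following each node's parent chain.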

 

Joel Fradkin

 
