Re: [SQL] "CASE" is not a variable

2006-06-29 Thread Patrick Jacquot

Keith Worthington wrote:


Hi All,

The following is a section of code inside an SQL function.  When I attempt to
run it I get the error message '"CASE" is not a variable'.  If I split this into
two queries (one for each variable) it works fine.  Obviously I have a
workaround, but I would like to understand what I am doing wrong.  TIA

SELECT tbl_item_bom.so_subline INTO v_so_subline,
       CASE WHEN tbl_mesh.mesh_type = 'square' THEN
            ( CASE WHEN tbl_mesh.unit_of_measure = 'in' THEN          tbl_mesh.mesh_size
                   WHEN tbl_mesh.unit_of_measure = 'ft' THEN 12.0   * tbl_mesh.mesh_size
                   WHEN tbl_mesh.unit_of_measure = 'mm' THEN 25.4   * tbl_mesh.mesh_size
                   WHEN tbl_mesh.unit_of_measure = 'cm' THEN 2.54   * tbl_mesh.mesh_size
                   WHEN tbl_mesh.unit_of_measure = 'm'  THEN 0.0254 * tbl_mesh.mesh_size
                   ELSE 0
              END )
            WHEN tbl_mesh.mesh_type = 'diamond' THEN
            ( CASE WHEN tbl_mesh.unit_of_measure = 'in' THEN          tbl_mesh.mesh_size / 2.0
                   WHEN tbl_mesh.unit_of_measure = 'ft' THEN 12.0   * tbl_mesh.mesh_size / 2.0
                   WHEN tbl_mesh.unit_of_measure = 'mm' THEN 25.4   * tbl_mesh.mesh_size / 2.0
                   WHEN tbl_mesh.unit_of_measure = 'cm' THEN 2.54   * tbl_mesh.mesh_size / 2.0
                   WHEN tbl_mesh.unit_of_measure = 'm'  THEN 0.0254 * tbl_mesh.mesh_size / 2.0
                   ELSE 0
              END )
            ELSE 0
       END INTO v_mesh_size
  FROM sales_order.tbl_item_bom
  LEFT JOIN peachtree.tbl_mesh
    ON tbl_item_bom.item_id = tbl_mesh.item_id
 WHERE tbl_item_bom.so_number = rcrd_line.so_number
   AND tbl_item_bom.so_line = rcrd_line.so_line
   AND tbl_item_bom.component_type = 'net';

Kind Regards,
Keith

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match

 


shouldn't your CASE construct be in the select list, i.e.
SELECT tbl_item_bom.so_subline, CASE ... END INTO ... FROM ... WHERE ...
?
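Something like this, assuming PL/pgSQL and the original column list (a sketch
only; the CASE body is elided):

SELECT tbl_item_bom.so_subline,
       CASE WHEN tbl_mesh.mesh_type = 'square'  THEN ...
            WHEN tbl_mesh.mesh_type = 'diamond' THEN ...
            ELSE 0
       END
  INTO v_so_subline, v_mesh_size
  FROM sales_order.tbl_item_bom
  LEFT JOIN peachtree.tbl_mesh
    ON tbl_item_bom.item_id = tbl_mesh.item_id
 WHERE ...;

A single INTO names all the target variables; a second INTO in the middle of
the select list is what makes the parser complain that "CASE" is not a variable.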
HTH
--
Patrick



[SQL] Data Entry and Query forms

2006-06-29 Thread Anthony Kinyage
Hi,

I am new to PostgreSQL; I have just installed it and created a database. I currently use MS-SQL 2000 and am planning to migrate to open source.

PostgreSQL is installed on a Linux Red Hat server. We have 50 client PCs running WinXP.

Before continuing our plans, I need to know what I can do with PostgreSQL in order to have data entry and query forms on the client side (how can I design data entry and query forms?).

If you can advise me on open source software to add on to PostgreSQL to solve my problem, that would be even better.

This will help the management accept PostgreSQL and plan the migration from MS-SQL 2000.

Thanx

Anthony Kinyage
		 

Re: [SQL] Data Entry and Query forms

2006-06-29 Thread Markus Schaber
Hi, Anthony,

Anthony Kinyage wrote:

> Before continuing our Plans, I need to know how can I do with PostgreSQL
> in order to have Data Entry and Query Forms on clients side (How can I
> design Data Entry and Query Forms).

PostgreSQL itself is a database server, not a front-end form designer.

However, using the PostgreSQL ODBC driver, you should be able to
continue using your current front-ends (like Access).

If you don't like this, maybe you can look at GNU Enterprise,
OpenOffice.org database module, Delphi/Kylix or others. (I admit I've
used none of those yet, we use PostgreSQL as backend for "real"
applications.)

HTH,
Markus

-- 
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf. | Software Development GIS

Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org



Re: [SQL] SELECT Aggregate

2006-06-29 Thread Aaron Bono
I am not familiar enough with how postgres optimizes the queries, but won't this end up with:

total number of queries run on DB = 1 query + 1 query/row in first query

What would be more efficient on a large database - a query like Richard submitted (subquery in the select) or one like I submitted (join the two tables and then do a group by)?  My guess is it depends on the % of records returned out of the sales_orders table: the smaller the %, the better Richard's query would perform; the higher the %, the better the join would run.

The databases I am working with aren't big enough yet to warrant spending a lot of time researching this, but if someone with more experience knows what is best I would love to hear about it.

Thanks,
Aaron Bono
On 6/29/06, Richard Broersma Jr <[EMAIL PROTECTED]> wrote:
> SELECT  trans_no,
>         customer,
>         date_placed,
>         date_complete,
>         date_printed,
>         ord_type,
>         ord_status,
>         (SELECT SUM(sell_price)
>            FROM soh_product
>           WHERE sales_orders.trans_no = soh_product.soh_num) AS transact_sum,
>         customer_reference,
>         salesman,
>         parent_order,
>         child_order,
>         order_number
> FROM    sales_orders
> WHERE   (trans_no LIKE '8%' AND order_number LIKE '8%')
>  OR     (trans_no LIKE '9%' AND order_number LIKE '8%')
>  OR     (trans_no LIKE '8%' AND order_number LIKE '9%')
>  OR     (trans_no LIKE '9%' AND order_number LIKE '9%')
>  AND    (warehouse = 'M')
>  AND    (date_placed > (current_date + ('12 months ago'::interval)))
> ORDER BY trans_no DESC
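
For comparison, the join-plus-GROUP-BY shape being discussed would look roughly
like this (a sketch against the column names quoted above, not a tested query):

SELECT   sales_orders.trans_no,
         sales_orders.customer,
         SUM(soh_product.sell_price) AS transact_sum
FROM     sales_orders
LEFT JOIN soh_product
       ON sales_orders.trans_no = soh_product.soh_num
GROUP BY sales_orders.trans_no, sales_orders.customer
ORDER BY sales_orders.trans_no DESC;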


Re: [SQL] can any one solve this problem

2006-06-29 Thread Aaron Bono
I suggest you give a first stab at it and show us what you are doing.  That would help us see your table relationships better (please use inner/outer joins to make it clearer) and give us a better idea of what you are trying to do.

Also, providing data examples like some of the other posts really helps us help you reach a good solution.

-Aaron

On 6/29/06, Penchalaiah P.
<[EMAIL PROTECTED]> wrote:

emp_table (
    Cdacno varchar(7) (primary key),
    Personal_No varchar(10) (foreign key),
    Name varchar(40));

personal_table (
    Personal_No varchar(10) (primary key),
    Cdacno varchar(7),
    Date_Of_Birth date);

unit_master (
    Unit_id varchar(10) (primary key),
    Unit_Name varchar(25),
    Unit_Location varchar(25));

Unit_Details_table (
    Unit_id varchar (foreign key),
    CDA_No varchar(7) (foreign key));

rank_table (
    Rank_ID numeric(2) (primary key),
    Rank_Code numeric(2),
    Rank_Name varchar(25));

Rank_Date_table (
    Rank_Date__ID numeric NOT NULL,
    CDA_No varchar(7) (foreign key),
    Rank_ID numeric(2));


My query is: if I give cdacno, I have to get Personal_No from personal_table,
and with this I have to display rank_name from rank_table, name from
emp_table, and unit_name from unit_master.

Likewise, if I give Personal_No, I have to get cdacno from emp_table, and
with this display rank_name from rank_table, name from emp_table, and
unit_name from unit_master.

Here unit_name depends on unit_details_table, and rank_name depends on
rank_date_table.

First it has to check, when we give cdacno, whether Personal_No is null or
not; likewise, if I give Personal_No, it has to check whether cdacno is null
or not.

Let me know the solution. I have already written one function to get
Personal_No when I give cdacno.
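
One possible starting point for the cdacno direction is a single multi-join
query (a sketch only, based on the schema quoted above; 'ABC1234' is a made-up
example key, and the null checks still need handling):

SELECT e.Name,
       p.Personal_No,
       r.Rank_Name,
       u.Unit_Name
  FROM emp_table AS e
  JOIN personal_table AS p ON p.Cdacno = e.Cdacno
  LEFT JOIN Unit_Details_table AS ud ON ud.CDA_No = e.Cdacno
  LEFT JOIN unit_master AS u ON u.Unit_id = ud.Unit_id
  LEFT JOIN Rank_Date_table AS rd ON rd.CDA_No = e.Cdacno
  LEFT JOIN rank_table AS r ON r.Rank_ID = rd.Rank_ID
 WHERE e.Cdacno = 'ABC1234';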


Re: [SQL] Data Entry and Query forms

2006-06-29 Thread Aaron Bono
I agree, using ODBC is probably a good first step, especially for M$ Access.

For developers I recommend the EMS Manager tools.  They are a commercial product, but I have been very pleased with them and we use the tools every day.  They are not exactly like SQL Manager but they serve the same purpose.  See http://www.sqlmanager.net/ .

Are there any web based management tools for PostgreSQL (like MySQL's phpMyAdmin, except for PostgreSQL)?  I thought I saw a post sometime back about one but don't remember the name.

-Aaron

On 6/29/06, Markus Schaber <[EMAIL PROTECTED]> wrote:
> Hi, Anthony,
>
> Anthony Kinyage wrote:
> > Before continuing our Plans, I need to know how can I do with PostgreSQL
> > in order to have Data Entry and Query Forms on clients side (How can I
> > design Data Entry and Query Forms).
>
> PostgreSQL itself is a database server, not a front-end form designer.
>
> However, using the PostgreSQL ODBC driver, you should be able to
> continue using your current front-ends (like Access).
>
> If you don't like this, maybe you can look at GNU Enterprise, the
> OpenOffice.org database module, Delphi/Kylix or others. (I admit I've
> used none of those yet; we use PostgreSQL as backend for "real"
> applications.)
>
> HTH,
> Markus



Re: [SQL] Data Entry and Query forms

2006-06-29 Thread Joe

Aaron Bono wrote:
Are there any web based management tools for PostgreSQL (like Mysql PHP 
Admin except for PostgreSQL)?  I thought I saw a post sometime back 
about one but don't remember the name.


Yes, that's phpPgAdmin (http://phppgadmin.com).

Joe



Re: [SQL] Data Entry and Query forms

2006-06-29 Thread operationsengineer1
> Hi 
>   I am new in PostgreSQL, I have just installed and
> created a database, I use MS-SQL 2000 and planning
> to  Migrate to Open Source.
>
>   PostgreSQL  is installed  on Linux Redhat Server.
> We have 50 Clients PC using WinXP.
>
>   Before continuing our Plans, I need to know how
> can I do with PostgreSQL in order to have Data Entry
> and Query Forms on clients side (How can I design
> Data Entry and Query Forms).
>
>   If you can advice me the open source Software to
> add on PostgreSQL to solve my problem will be
> better.
>
>   This will help the Management to accept PostgreSQL
> and plan Migration from 
>   MS-SQL 2000.
>
>   Thanx
>
>   Anthony Kinyage

Anthony, welcome to the world of open source!

you can continue to use windows programs (like
msaccess) as your front end by connecting with odbc.

however, i chose to use php (open source, too), html
and css.  it has worked out well.

i use two php classes that have pretty dramatically
increased my productivity.

1. Manuel Lemos' forms class (phpclasses.org) - this
is a powerful class that enables lots of useful forms
features.
2. adodb db abstraction layer

php is relatively easy to pick up if someone has a
desire to do so.  i could even send you sample scripts
to help you get started with adodb, the forms
class and the code layout structure.

i'm seriously looking into migrating my skills over to
ruby on rails.  you can review some screencasts here:

http://www.rubyonrails.org/screencasts

i'll be using postgresql as my db for any rails
projects, too (many people use mysql).  once one
becomes proficient in ruby (language) and rails
(framework), development becomes more efficient for
many apps.  many of the concepts (including object
oriented programming) are more complex and require
more up front planning - which may be more difficult
for someone new to OO programming.

so, i think php and ruby on rails would both likely
work as front end programming languages / frameworks. 
of course, you'd need to work in html and css to
display a layout.  cross browser css is
difficult, but if you're interested, i can let you
know about a resource that will help you a lot.

good luck.




[SQL] Using In Clause For a Large Text Matching Query

2006-06-29 Thread Jason Farmer

Hello all, my first post to the pgsql mailing list!

There may be a completely better way to do this; if so please help point 
me in the right direction!


What I'm trying to do is run a query to partially match 300+ text fields 
to a PG table of about 200,000 rows. The idea is to pull out a portion 
of text from the original text fields and do a partial text match on 
anything in my PG table containing that substring.


I liked the idea of using a where IN(group) to do my comparisons, as in

select col1 from table1 where col1 in ('text1','text2')

however, this requires an exact string match. Is there any way to do a 
substring match inside of my IN group? Or can anyone think of a better 
way to do something like this?


Here's an example of how I'd like this to work:

Portion of 300 Original Text fields:
"brown cat"
"green dog"

2 rows of 200k+ Database table:
"brown kitty"
"green doggy"

We can assume a certain portion of the text is included in the DB table, 
so I want to be able to do a substring match on "brown" and "green" and 
in this case return both "brown kitty" and "green doggy". However, the 
problem is that running the query once for each of my 300 rows to scan 
the 200,000 rows in my DB is entirely too slow. So I was hoping to use 
the IN clause to create an IN group of about 300 items and scan the DB once.


I hope this makes sense, but if anything sounds confusing please let me 
know, and I will be sure to clarify! Thanks for any help or direction 
anyone can provide!!


- Jason Farmer



[SQL] Documentation Generator for pl/pgsql

2006-06-29 Thread Daniel Caune

Hi,

 

Are you aware of any documentation generator for PL/pgSQL?
I'm used to writing function documentation in a javadoc style.  I'm
aware of a tool, plsqldoc, that generates documentation for PL/SQL code,
whose syntax is close to PL/pgSQL's.  Does anyone use this tool for PL/pgSQL?

 

Thanks,

 

--

Daniel CAUNE

Ubisoft Online Technology

(514) 490 2040 ext. 3613

 








Re: [SQL] Using In Clause For a Large Text Matching Query

2006-06-29 Thread Richard Broersma Jr
> We can assume a certain portion of the text is included in the DB table, 
> so I want to be able to do a substring match on "brown" and "green" and 
> in this case return both "brown kitty", and "green doggy". However the 
> problem is, if I run the query on each of my 300 rows to scan 200,000 
> rows in my DB is entirely too slow. So I was hoping to use the IN clause 
> to create an IN group of about 300 items to scan the DB once.

You can probably do it.  However, you will have to pick a substring from your
text field to compare against.  In this case you seem to be choosing the
first word, i.e. "brown" and "green".

so maybe:

select t1.col1
from
 table1 as t1,
(
   select distinct split_part( tblFileDump.Col1, ' ', 1 ) as samplecol
   from tblFileDump
) as fd1

where t1.col1 like '%' || fd1.samplecol || '%'
;

This is just an idea.  I've never used split_part or developed a pseudo-join
this way, but it may work provided you can dump your text fields into a temp
table.

Notice:
http://www.postgresql.org/docs/8.1/interactive/functions-string.html
for the syntax for split_part().

Regards,

Richard Broersma Jr.



Re: [SQL] Documentation Generator for pl/pgsql

2006-06-29 Thread Jonah H. Harris

On 6/29/06, Daniel Caune <[EMAIL PROTECTED]> wrote:

Are you aware of any documentation generator for PL/PgSQL?


I have one somewhere... will have to find it though.  I've used the
PL/SQL one before, but I don't think it worked for PL/pgSQL for some
reason.


--
Jonah H. Harris, Software Architect | phone: 732.331.1300
EnterpriseDB Corporation            | fax: 732.331.1301
33 Wood Ave S, 2nd Floor            | [EMAIL PROTECTED]
Iselin, New Jersey 08830            | http://www.enterprisedb.com/



Re: [SQL] Using In Clause For a Large Text Matching Query

2006-06-29 Thread Richard Broersma Jr
> Well, there is also:
> http://www.postgresql.org/docs/8.1/interactive/functions-comparisons.html#AEN13377
>
> /expression/ /operator/ ANY (/array expression/).  So, if you have a way
> to preprocess the input text fields that you want matched,
> you could build a regex for each and feed them in an array to an '~ ANY'
> expression like so (or use ~* for case-insensitive matching):
>
> SELECT col1
> FROM table
> WHERE col1 ~ ANY (ARRAY['regex1', 'regex2', ...]);

Good point, but don't forget to include the list in your response. :-)

Regards,

Richard Broersma Jr.



Re: [SQL] SELECT Aggregate

2006-06-29 Thread Phillip Smith

I've tried Aaron's suggestion of the GROUP BY and I don't know much about it,
but it ran for around 17 hours and was still going (it had a dedicated dual
Xeon 3.0GHz box under RHEL4 running it!)

I'll give Richard's suggestion a try and see if that comes up any better.
Like I said yesterday, this might just be too much for Postgres and I'll need
to summarize it in the export from our live system and add a new column
before I import it to the sales_orders table.

Cheers,

-p

 








***Confidentiality and Privilege Notice***

The material contained in this message is privileged and confidential to
the addressee.  If you are not the addressee indicated in this message or
responsible for delivery of the message to such person, you may not copy
or deliver this message to anyone, and you should destroy it and kindly
notify the sender by reply email.

Information in this message that does not relate to the official business
of Weatherbeeta must be treated as neither given nor endorsed by Weatherbeeta.
Weatherbeeta, its employees, contractors or associates shall not be liable
for direct, indirect or consequential loss arising from transmission of this
message or any attachments





Re: [SQL] SELECT Aggregate

2006-06-29 Thread Richard Broersma Jr
> I've tried Aaron's suggestion of the GROUP BY and I don't know much about
> it, but it ran for around 17 hours and still going (it had a dedicated Dual
> Xeon 3.0GHz box under RHEL4 running it!)

Maybe this query that you are trying to run is a good candidate for a
"materialized view".
http://archives.postgresql.org/pgsql-performance/2004-02/msg00279.php

Also, before you run your query you might want to see what the explain plan
is.  Perhaps it is using a sequential scan in a place where an index could
improve query performance.
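
(PostgreSQL of this era has no built-in CREATE MATERIALIZED VIEW, so the
approach in the link comes down to maintaining a summary table by hand.
A rough sketch, with a made-up table name:)

CREATE TABLE order_totals AS
  SELECT soh_num, SUM(sell_price) AS transact_sum
    FROM soh_product
   GROUP BY soh_num;

-- To refresh, rebuild the contents inside one transaction:
BEGIN;
TRUNCATE order_totals;
INSERT INTO order_totals
  SELECT soh_num, SUM(sell_price)
    FROM soh_product
   GROUP BY soh_num;
COMMIT;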





Re: [SQL] SELECT Aggregate

2006-06-29 Thread Phillip Smith

Well whatdyaknow?? Being a Postgres newbie I hadn't even played with indexes
yet. They're awesome!!

Using Richard's suggestion of the sub-select in the COLUMN list, combined
with adding some indexes, I can now return this in under 5 seconds!

I've included the new SELECT query, as well as the definitions of the
indexes, below for anyone who's interested.

Thanks guys!

 

QUERY:

SELECT  trans_no,
        customer,
        date_placed,
        date_complete,
        date_printed,
        (SELECT SUM(sell_price)
           FROM soh_product
          WHERE sales_orders.trans_no = soh_product.soh_num
        ) AS wholesale,
        ord_type,
        ord_status,
        customer_reference,
        salesman,
        parent_order,
        child_order,
        order_number
FROM    sales_orders
WHERE   (trans_no LIKE '8%' AND order_number LIKE '8%')
 OR     (trans_no LIKE '9%' AND order_number LIKE '8%')
 OR     (trans_no LIKE '8%' AND order_number LIKE '9%')
 OR     (trans_no LIKE '9%' AND order_number LIKE '9%')
 AND    warehouse = 'M'
 AND    date_placed > (current_date + ('12 months ago'::interval))
ORDER BY trans_no DESC

 

INDEXES:

CREATE INDEX sales_orders_customer
  ON sales_orders
  USING btree
  (customer);

CREATE INDEX sales_orders_orderno
  ON sales_orders
  USING btree
  (order_number);

CREATE INDEX soh_product_prodcode
  ON soh_product
  USING btree
  (prod_code);

CREATE INDEX soh_product_transno
  ON soh_product
  USING btree
  (soh_num);
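
To confirm the planner actually uses these indexes, the explain plan Richard
mentioned is the tool; something along these lines (a sketch -- the plan
output will vary, but an "Index Scan using soh_product_transno" node is what
to look for):

EXPLAIN ANALYZE
SELECT (SELECT SUM(sell_price)
          FROM soh_product
         WHERE soh_product.soh_num = sales_orders.trans_no) AS wholesale
  FROM sales_orders
 WHERE trans_no LIKE '8%';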

 

 










Re: [SQL] SELECT Aggregate

2006-06-29 Thread Richard Broersma Jr
> Well whatdyaknow?? Being a Postgres newbie I hadn't even played with indexes
> yet. They're awesome!!
> Using Richard's suggestion of the Sub-Select in the COLUMN list, combined
> with adding some indexes, I can now return this in under 5 seconds!

Also, another way to improve performance is to analyze the affected tables.
Analyze ensures that the query planner has accurate statistics, which it
uses to pick the fastest possible plan.

If you ever plan on updating or deleting records, you will also need to
vacuum the table.  An additional measure of maintenance would be to re-index
the database.

All of this is listed in the PostgreSQL manual.  If you really want to
ensure the best possible speeds, these are important steps to take.
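
In terms of commands, the steps above amount to roughly the following (using
the table names from this thread):

ANALYZE sales_orders;           -- refresh planner statistics
VACUUM ANALYZE soh_product;     -- reclaim dead rows and analyze in one pass
REINDEX TABLE soh_product;      -- rebuild indexes if they have bloated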

Regards,

Richard Broersma Jr.
