Re: [SQL] plpgsql loop question

2010-02-10 Thread Justin Graf
On 2/10/2010 11:29 AM, Andrea Visinoni wrote:
> hi,
> i have a table called "zones": idzone, zone_name
> and several tables called zonename_records (same structure), where 
> zonename is one of the zone_name in the "zones" table.
> What i want to do is a function that union all of this tables 
> dinamically based on "zones" table, this is what i've done so far:
>
> CREATE OR REPLACE FUNCTION get_all_records()
>   RETURNS SETOF record AS
> $BODY$DECLARE
> zones record;
> recs record;
> BEGIN
> for zones in select lower(zone_name) as n from zones loop
> for recs in select * from quote_ident(zones.n || '_records') loop
> return next recs;
> end loop;
> end loop;
> RETURN;
> END;
> $BODY$
>   LANGUAGE 'plpgsql' VOLATILE
>   COST 100;
>
> but i get this error!
>
> ERROR:  wrong record type supplied in RETURN NEXT
> CONTEXT:  PL/pgSQL function "get_all_records" line 9 at RETURN NEXT
>
> Andrea
>

PG will not automatically build the output columns from the generic record
type; the function has to describe what its output is going to look like.

The function can inherit the layout from a table, declare an explicit column
list, or return a named composite type, but as written PG does not know what
the data looks like, so it cannot build the result set.  Also, a dynamic table
name has to go through EXECUTE; selecting FROM quote_ident(...) just selects
the function's text result, not the table it names.
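A minimal sketch of one way to write it, assuming all of the zonename_records
tables share the same two columns (the column names below are made up): declare
the output with RETURNS TABLE and run the dynamic table name through EXECUTE.
RETURN QUERY EXECUTE needs 8.4 or later; on 8.3 you would loop with
FOR ... IN EXECUTE and RETURN NEXT instead.

CREATE OR REPLACE FUNCTION get_all_records()
  RETURNS TABLE (idrec integer, rec_name text) AS  -- hypothetical columns
$BODY$
DECLARE
    zones record;
BEGIN
    FOR zones IN SELECT lower(zone_name) AS n FROM zones LOOP
        RETURN QUERY EXECUTE 'SELECT idrec, rec_name FROM '
                             || quote_ident(zones.n || '_records');
    END LOOP;
    RETURN;
END;
$BODY$
  LANGUAGE plpgsql;

-- called like a table:
-- SELECT * FROM get_all_records();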




Re: [SQL] 'image' table with relationships to different objects

2010-02-10 Thread Justin Graf
On 2/9/2010 6:59 AM, Richard Huxton wrote:
> On 09/02/10 07:49, Louis-David Mitterrand wrote:
>> Hello,
>>
>> In my database I have different object types (person, location, event,
>> etc.) all of which can have several images attached.
>>
>> What is the best way to manage a single 'image' table with relationships
>> to (potentially) many different object types while keeping referrential
>> integrity (foreign keys)?
>
> The "clean" way to do this would be with a number of joining tables:
>
> images(img_id, file_name, title ...)
> persons   (psn_id, first_name, last_name, ...)
> locations (loc_id, loc_name, lat, lon, ...)
> events(evt_id, evt_name, starts_on, ends_on, ...)
>
> person_images   (psn_id, img_id)
> location_images (loc_id, img_id)
> event_images(evt_id, img_id)


Another way that reduces the number of tables, simplifies the selects,
and allows linking to an unlimited number of object types is something like this:
Create table images (img_id serial , img_data bytea );

Create table image_related( img_rel_id serial, img_rel_img_id int, 
img_link_key int, img_link_from char(10) );

Create table persons   (psn_id serial, first_name text, last_name text) ;
create table locations (loc_id serial, loc_name text) ;
create table events(evt_id serial, evt_name text, starts_on 
timestamp, ends_on timestamp);

Insert into images values (default, null), (default, null), (default, null);

Insert into persons values ( default, 'me me', 'yes itsme');
Insert into locations values (default,  'I home');
Insert into events values (default, 'friends party', now(),  now() );
insert into image_related values (default, 1, 1, 'persons'), (default 
,2, 1, 'events'), (default ,3, 1, 'locations'), (default , 2, 1, 'persons');


Select img_data, first_name  from persons, images, image_related
 where img_id = img_rel_img_id
 and img_link_key = psn_id
 and img_link_from  = 'persons'


Then create a rule (or trigger) on image_related, before insert and update, to
make sure the parent records exist, for integrity checks.
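A minimal sketch of that check, written here as a trigger rather than a rule
(my choice; the table and column names are the ones created above):

Create or replace function image_related_check() returns trigger as
$$
BEGIN
    IF NEW.img_link_from = 'persons' THEN
        PERFORM 1 FROM persons   WHERE psn_id = NEW.img_link_key;
    ELSIF NEW.img_link_from = 'locations' THEN
        PERFORM 1 FROM locations WHERE loc_id = NEW.img_link_key;
    ELSIF NEW.img_link_from = 'events' THEN
        PERFORM 1 FROM events    WHERE evt_id = NEW.img_link_key;
    END IF;

    IF NOT FOUND THEN
        RAISE EXCEPTION 'no % row with id %', NEW.img_link_from, NEW.img_link_key;
    END IF;
    RETURN NEW;
END;
$$ language plpgsql;

Create trigger image_related_check
    before insert or update on image_related
    for each row execute procedure image_related_check();

Adding another object type is just another ELSIF branch.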







Re: [SQL] MSSQL to PostgreSQL - Issue trying to generate unique ID using actual date

2010-03-18 Thread Justin Graf
On 3/17/2010 9:52 AM, Ignacio Balcarce wrote:
>
> Hi all,
>
> I am facing a problem trying to convert from MSSQL procedure to 
> PostgreSQL function.
>
> CREATE PROCEDURE dbo.THUBAN_SP_GENERATEID
>
>  @NEWID VARCHAR(20) OUTPUT
>
> AS
>
> SET @NEWID = (
>
> SELECT REPLACE(SUBSTRING(CONVERT(CHAR(10),GETDATE(),20 ),1,10),'-','')
>
> + CAST(REPLICATE(0,8-LEN (ISNULL(CAST(SUBSTRING(MAX(SEQ_ID),9,8) AS
>
> INTEGER),0) + 1)) AS VARCHAR)
>
> + CAST(ISNULL(CAST(SUBSTRING(MAX(SEQ_ID),9,8) AS INTEGER),0) + 1 AS
>
> VARCHAR)
>
> FROM THUBAN_SEQ
>
> WHERE SUBSTRING(SEQ_ID,1,8)=
>
> REPLACE(SUBSTRING(CONVERT(CHAR(10),GETDATE(),20 ),1,10),'-','')
>
> )
>
> INSERT INTO THUBAN_SEQ VALUES (@NEWID)
>
> SELECT @NEWID AS ITEM_ID;
>
> GO
>


I'm surprised this works in MSSQL.  In PostgreSQL you would use a sequence:

CREATE SEQUENCE THUBAN_SEQ
   INCREMENT 1
   MINVALUE 1
   MAXVALUE 9223372036854775807
   START 1
   CACHE 1;


Now for the function to generate the ID with the date leading

CREATE OR REPLACE FUNCTION thuban_sp_generateid()
RETURNS varchar
AS $$
BEGIN
    -- get the next value from thuban_seq and put the date on the front
    return to_char(current_timestamp, 'YYYYMMDD') || nextval('thuban_seq')::varchar;
END;
$$ LANGUAGE plpgsql;
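Calling it is just a select (the value shown is only illustrative):

Select thuban_sp_generateid();
-- e.g. 201003181   (YYYYMMDD followed by the next sequence value)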

If this is not what you're after, you need to give more information about what
you want to accomplish.







Re: [SQL] MSSQL to PostgreSQL - Issue trying to generate unique ID using actual date

2010-03-18 Thread Justin Graf
On 3/18/2010 12:53 PM, Ignacio Balcarce wrote:
>
> Justin,
>
> Thanks in advance for your email. I forgot to tell you that every day the IDs
> must start from 0. So… the sequence IDs would look like: YYYYMMDD 0001,
> YYYYMMDD 0002, etc.
>
> Is there any way to make this sequence start from 0 every day?
>
> Thanks & Regards,
>
> Ignacio
>
> -
>
>
>
-- we need a table to keep track of the sequence number and when to reset the count

create table sequ_id ( id_number int, sequ_name char(25), date_lastrun date );

--insert a record ;
insert into sequ_id  values (1, 'thuban_seq', current_date);


  Now for the function to generate the ID with the date leading

  CREATE OR REPLACE FUNCTION THUBAN_SP_GENERATEID()
  RETURNS VARCHAR

  AS $$

  BEGIN
-- reset the counter when the date rolls over, so the value we get makes sense
Update sequ_id set id_number = 1, date_lastrun = current_date
 where sequ_name = 'thuban_seq' and date_lastrun <> current_date;

  --now we get the next value from the thuban_seq and add the date to 
the  front.

  return to_char(current_date, 'YYYYMMDD') || ' ' ||
      (Select lpad(id_number::text, 7, '0') from sequ_id
        where sequ_name = 'thuban_seq');

Update sequ_id set id_number = (id_number + 1) where sequ_name= 
'thuban_seq';


  END;
$$ LANGUAGE plpgsql;

This will do what you want.

Now, I have NOT tested this, but it should get you closer; use it instead of
the god-awful code from before.






Re: [SQL] MSSQL to PostgreSQL - Issue trying to generate unique ID using actual date

2010-03-18 Thread Justin Graf
OOPS did not mean to click send

On 3/18/2010 12:53 PM, Ignacio Balcarce wrote:
>
> Justin,
>
> Thanks in advance for your email. I forgot to tell you that every day the IDs
> must start from 0. So… the sequence IDs would look like: YYYYMMDD 0001,
> YYYYMMDD 0002, etc.
>
> Is there any way to make this sequence start from 0 every day?
>
> Thanks & Regards,
>
> Ignacio
>
> -
>
>
>
-- we need a table to keep track of the sequence number and when to reset the count

create table sequ_id ( id_number int, sequ_name char(25), date_lastrun 
date);

--insert a record ;
insert into sequ_id  values (1, 'thuban_seq', current_date);


--- Now for the function to generate the ID with the date leading

  CREATE OR REPLACE FUNCTION thuban_sp_generateid()
  RETURNS varchar
  AS $$
  DECLARE
      creturn varchar;
  BEGIN
      -- reset the counter when the date rolls over, so the value we get makes sense
      Update sequ_id set id_number = 1, date_lastrun = current_date
       where sequ_name = 'thuban_seq' and date_lastrun <> current_date;

      -- now build the ID: go to the table, get the current value,
      -- pad it with zeros, and put the date on the front
      creturn := to_char(current_date, 'YYYYMMDD') || ' ' ||
                 (Select lpad(id_number::text, 7, '0') from sequ_id
                   where sequ_name = 'thuban_seq');

      -- update the sequence table
      Update sequ_id set id_number = id_number + 1
       where sequ_name = 'thuban_seq';

      -- return the value
      return creturn;
  END;
  $$ LANGUAGE plpgsql;

This will do what you want.

Now, I have NOT tested this, but it should get you closer; use it instead of
the god-awful code from before.
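One possible refinement (my suggestion, not part of the original reply): the
reset, the read, and the increment can be collapsed into a single
UPDATE ... RETURNING inside the function, so two concurrent calls cannot be
handed the same number:

    Update sequ_id
       set id_number   = case when date_lastrun <> current_date
                              then 1 else id_number + 1 end,
           date_lastrun = current_date
     where sequ_name = 'thuban_seq'
     returning to_char(current_date, 'YYYYMMDD') || ' ' ||
               lpad(id_number::text, 7, '0')
          into creturn;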






Re: [SQL] Inserting Multiple Random Rows

2010-04-28 Thread Justin Graf
That won't work because the ORDER BY random() subquery is evaluated only once;
once it gets a number, it stops.

You need to generate a series of random numbers, then select each record one at
a time out of the cities table.  You will have to write a plpgsql function to
do this, as any join will cause the result to be ordered.

Here is an example of a plpgsql procedure that inserts records randomly into a
table from another table.  This is an excerpt from the function;
there is more logic that limits the random result-set size and rules
not to repeat a given number.

-- First create a cursor over the source records
 OPEN _questions SCROLL for (Select * from questions
 where quest_expire > now()::date
 and trim( both '' from quest_type)  = 
_classexams.exam_quest_type
 and trim( both '' from quest_level) = 
_classexams.exam_level
 order by quest_id );

-- need to limit the number range created by random() so as not to exceed the
-- record count returned by the cursor
select count(quest_id) into _rcount from educate.questions
 where quest_expire > now()::date
 and trim( both '' from quest_type)  = 
_classexams.exam_quest_type
 and trim( both '' from quest_level) = 
_classexams.exam_level ;

-- Generate a random list of numbers
for _randlist IN (Select num from (
 select round(random()*1000)::int as num from 
generate_series(1,10)) rand
 where num <= _rcount and num > 0 ) LOOP

FETCH ABSOLUTE _randlist.num from _questions into _quest ;

-- Next, insert each fetched row into the destination table

end loop;
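Applied to the original question, a minimal, untested sketch might look like the
following (table and column names are taken from the query below; the function
name and variables are mine).  Because each loop iteration runs its own SELECT,
random() is re-evaluated for every user:

CREATE OR REPLACE FUNCTION assign_random_cities() RETURNS void AS
$$
DECLARE
    _user record;
    _city record;
BEGIN
    FOR _user IN SELECT userid FROM users
                 WHERE userid NOT IN (SELECT userid FROM useraddrs) LOOP
        -- pick one random city for this user
        SELECT cityname, stateabbr INTO _city
          FROM cities
         ORDER BY random()
         LIMIT 1;

        INSERT INTO useraddrs (userid, addrdesc, city, stprov)
        VALUES (_user.userid, 'Home', _city.cityname, _city.stateabbr);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- SELECT assign_random_cities();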

On 4/28/2010 12:22 PM, Gary Chambers wrote:
> All,
>
> I have a table of user addresses that I'm trying to randomly populate
> with data from a cities table.  Using the following query:
>
> INSERT INTO useraddrs(userid, addrdesc, city, stprov)
> SELECT u.userid, 'Home', c.cityname, c.stateabbr
> FROM users u, cities c
> WHERE u.userid NOT IN (SELECT userid FROM useraddrs)
>  AND cid=(SELECT cid FROM cities ORDER BY RANDOM() LIMIT 1);
>
> I am able to achieve most of what I am trying to accomplish, but once
> the random number is selected, it doesn't change.  What am I missing?
> Thank you in advance.
>
> -- Gary Chambers
>
> /* Nothing fancy and nothing Microsoft! */
>
>




Re: [SQL] Inserting Multiple Random Rows

2010-04-28 Thread Justin Graf
On 4/28/2010 1:48 PM, Gary Chambers wrote:
> pen?
>
> The clouds parting, choirs of angels singing, and fireworks
> celebrating the veil of my obtuseness being lifted, and my grasp and
> command of SQL to be complete and infinite.  None of which appears
> will ever happen...
>
> -- Gary Chambers
>
> /* Nothing fancy and nothing Microsoft! */
>
>
You know what _might_ work:

Insert into useraddrs(userid, addrdesc, city, stprov)
select u.userid, 'Home', c.cityname, c.stateabbr FROM users u, cities c
WHERE u.userid NOT IN (SELECT userid FROM useraddrs)
and cid IN (select round(random()*1000)::int as num from
generate_series(1,10));

As you don't care if a record repeats




Re: [SPAM]-D] [SQL] How to find broken UTF-8 characters ?

2010-04-29 Thread Justin Graf
On 4/28/2010 10:34 PM, Andreas wrote:
> Hi,
>
> while writing the reply below I found it sounds like beeing OT but 
> it's actually not.
> I just need a way to check if a collumn contains values that CAN NOT 
> be converted from Utf8 to Latin1.
> I tried:
> Select convert_to (my_column::text, 'LATIN1') from my_table;
>
> It raises an error that says translated:
> ERROR:  character 0xe28093 in encoding »UTF8« has no equivalent in 
> »LATIN1«
>
> I'd like to select all those records.
> When I know which record has faulty content I can correct it.
>
> If this is really OT on the SQL list then please tell me where to ask.

That's easy enough: you need to write an UPDATE statement using a regular
expression to replace all characters that have no Latin1 equivalent.
http://www.postgresql.org/docs/8.4/interactive/functions-string.html 


The command is regexp_replace('MyBadString',
'SearchForAllNonLatin1Chars', 'ReplaceWithBlankString')

I'm pretty sure this is the regular expression to find all characters
outside the Latin1 range: [^\x00-\xFF]

To test it, try:
Select regexp_replace( MyColumn, '[^\x00-\xFF]', ' ') from screweduptable

If the regular expression does not work, I'm a dry well when it comes to
regular expressions.  Did I say I hate regular expressions?  They are damn
near impossible to write.
Once you get the expression right and working,
the UPDATE is straightforward:
Update mytable set mybadcolumn = regexp_replace( mybadcolumn, 
'[^\x00-\xFF]', ' ')
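To select the records that cannot be converted (the original question), the
same character class can be turned around into a match instead of a replace,
reusing the placeholder names above:

Select * from mytable
 where mybadcolumn ~ '[^\x00-\xFF]';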

>> Select convert('MyUtf8', 'UTF8', 'LATIN1')
>> or
>> Select convert_to('MyUtf8', 'LATIN1')
>
> I found them before but didn't understand their output.
> e.g.
> Select convert('1aäßx', 'utf8', 'LATIN1') ;
> Result = "1a\344\337x"
> so it translated  ä = 344  and  ß = 337. The other 3 are just as they 
> were before.
> How can this be valid in a single byte charset like Latin1?
> Especially as ä, ß are E4 and DF.
> Why do they come out as escaped codes when they are in Latin1 aswell 
> as 1, a and x?

Someone with more knowledge of how convert() works is going to have to
explain why they have been escaped; PgAdmin may have escaped them.  But
those characters are valid Latin1 characters:

http://en.wikipedia.org/wiki/%C3%84
http://en.wikipedia.org/wiki/%C3%9F
ß = Latin sharp s (eszett)

It seems Access and Excel are putting extra bits of data into the
field.  In the past I had to change inserts/updates from Access so it
would send data in a specific character encoding.  I had problems where
Access was using a Windows encoding, the ODBC driver converted it to Latin1,
and the MSSQL database stored it as UTF8.  It was no fun cleaning it up.

>
>> What ever pg client library used to move Excel data to PG my have 
>> incorrectly converted some of the data or moved formatting 
>> information into the database.  I have seen Access and Excel do 
>> mightily odd things when connecting to DB's  I don't know about 
>> current versions but 2000 and 2003 Excels did really stupid things 
>> when trying to write to DB's including MSSQL.
>
> Cute ... we use Access 2000 and 2003   :(

I've been screwed more by Excel and Access flakiness, and by Access
programmers thinking they are DBAs.







Re: [SQL] Column Specific Update Trigger Routine

2010-05-06 Thread Justin Graf
On 5/6/2010 4:12 PM, Plugge, Joe R. wrote:
>
> I am trying to create a update trigger on a table that basically will 
> only fire when a specific column is updated.  I am using version 8.4.3.
>
> My plan of attack was to always fire on any row update, and pass in 
> the OLD and NEW column that I want to check.
>
> CREATE TRIGGER check_lockout
>
> AFTER UPDATE ON acct_table
>
> FOR EACH ROW
>
> EXECUTE PROCEDURE 
> acct_unlock(OLD.userid,OLD.ownerid,OLD.password,NEW.password);
>
> This fails with :
>
> [postg...@linux1505 ~]$ cat check_lockout_trig.sql | psql testdb
>
> ERROR:  syntax error at or near "OLD"
>
> LINE 4: EXECUTE PROCEDURE 
> acct_unlock(OLD.userid,OLD.ownerid,OLD.password,NEW.password);
>
> What am I doing wrong? Or is there a better way to go about this?
>
You don't call the trigger procedure with OLD and NEW as parameters;

NEW and OLD are automatically available inside the trigger function acct_unlock().

CREATE TRIGGER check_lockout

 AFTER UPDATE ON acct_table

 FOR EACH ROW

 EXECUTE PROCEDURE acct_unlock();


Next the trigger function would look something like this

create or replace function acct_unlock()
returns trigger as
$$
BEGIN
    if OLD.password <> NEW.password then
        -- do something, for example:
        raise notice 'password changed for user %', OLD.userid;
    end if;
    return NEW;
END;
$$ language plpgsql;


Keep in mind that acct_unlock() must be declared RETURNS trigger.

Then return either the NEW or the OLD record:
OLD if you are not changing the row, or NEW if the updated values are to be
stored in the table.






Re: [SQL] Celko take on EAV

2010-05-07 Thread Justin Graf
On 5/7/2010 12:33 PM, Richard Broersma wrote:
> 
>
> I'm rereading my Joe Celko's SQL Programming Style and I noticed an
> interesting comment regarding the EAV model (of course he discourages
> its use):
>
> "There are better tools for collecting free-from data."
>
> What tools was he referring to?
>

http://en.wikipedia.org/wiki/Entity-attribute-value_model
has notes on other models.



Re: [SQL] best paging strategies for large datasets?

2010-05-12 Thread Justin Graf
On 5/12/2010 1:41 AM, Louis-David Mitterrand wrote:
> Hi,
>
> I have a large dataset (page 1 at http://www.cruisefish.net/stat.md) and
> am in the process of developping a pager to let users leaf through it
> (30K rows).
>

That's not that big of a record set.

> Ideally I'd like to know when requesting any 'page' of data where I am
> within the dataset: how many pages are available each way, etc.
>
> Of course that can be done by doing a count(*) query before requesting a
> limit/offset subset. But the main query is already quite slow, so I'd
> like to minimize them.
>

What do you mean by quite slow??

On a 30K record table, count() and query speed should not be a problem.

> But I am intrigued by window functions, especially the row_number() and
> ntile(int) ones.
>
> Adding "row_number() over (order by)" to my query will
> return the total number of rows in the first row, letting my deduce the
> number of pages remaining, etc. row_number() apparently adds very little
> cost to the main query.
>

That will get a sequential number,  but you still don't know how many 
records are in the table,  limit and offset block that value.
I don't see how this helps?

  Limit and Offset with Total Record count tell us where we are in the 
record set and which page we are on.

RecordCount/Limit = Number of pages
CurrentPage = (offset%RecordCount)/Limit

To complicate things further, what if the site allows the user to change the
number of records displayed per page?  The pager logic needs to figure
out how many records need to be returned per page, and what the next and
previous iterations are.  Without the total record count I don't see how
that is even possible.

I have written pagers in ASP and PHP




Re: [SQL] best paging strategies for large datasets?

2010-05-12 Thread Justin Graf
oops typos
On 5/12/2010 1:41 AM, Louis-David Mitterrand wrote:
> Hi,
>
> I have a large dataset (page 1 at http://www.cruisefish.net/stat.md) and
> am in the process of developping a pager to let users leaf through it
> (30K rows).
>

That's not that big of a record set.

> Ideally I'd like to know when requesting any 'page' of data where I am
> within the dataset: how many pages are available each way, etc.
>
> Of course that can be done by doing a count(*) query before requesting a
> limit/offset subset. But the main query is already quite slow, so I'd
> like to minimize them.
>

What do you mean by quite slow??

On a 30K record table count() and query speed should not be a problem..

> But I am intrigued by window functions, especially the row_number() and
> ntile(int) ones.
>
> Adding "row_number() over (order by)" to my query will
> return the total number of rows in the first row, letting my deduce the
> number of pages remaining, etc. row_number() apparently adds very little
> cost to the main query.
>

That will get a sequential number,  but you still don't know how many 
records are in the table,  limit and offset block that value.
I don't see how this helps?

  Limit and Offset with Total Record count tell us where we are in the 
record set and which page we are on.

RecordCount/Limit = Number of pages
CurrentPage = (offset%RecordCount)/Limit

to complicate things further what if the site allows user to change the 
number of records displayed per page.  The pager logic needs to figure 
out how many records need to be return per page, and what the next and 
previous iterations are. Without the total record count  I don't see how 
that is even possible.

I have written pagers in ASP and PHP
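For what it's worth, the total can also be returned in the same round trip as
the page itself, using the count(*) window aggregate (8.4 and later).  A sketch
with a hypothetical table and sort column:

select *,
       count(*) over () as total_rows   -- same total repeated on every row of the page
  from stats
 order by stat_date
 limit 25 offset 300;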





Re: [SQL] best paging strategies for large datasets?

2010-05-13 Thread Justin Graf
On 5/13/2010 4:41 AM, silly sad wrote:
>
>>> First u count(*) the rows and select a requested page
>>> returning to a client the count result bundled "with a page of rows"
>>>
>>> (1) client renders the acquired rows
>>> (2)__memorize__ what part of the data he just got
>>> (3) and stores the count result to calculate "the pager div"
>>>
>>> all the subsequent clicks on "the pager div" should not immediately
>>> generate requests and decides if the request is needed.
>>
>> Yes, rendering the results throught ajax is a good idea, but one has to
>> be careful not to expose one's LIMIT and OFFSET to the client, but only
>> the "page" number. Or else the client could query the whole data set. A
>> lot of "professional" web site have that hole.
>>
>
> this is not a hole, it is only a matter of aesthetic
>
Silly Sad is right: this is not a hole but a matter of aesthetics.  To
keep the code simple and limit the amount of state that has to be
tracked in the client's session on the server, I pass the limit and offset
to the client, normally in a URL/link.  This also avoids the problem where,
if the user's session expires, the information is lost and the user has
to restart.  Very annoying.

So the urls look something like this   
www.mywebsit.com/index.php?module=getthedata&limit=10&offset=30&orderby=5

On the server, cast these three values to integer to block SQL injection.

I really don't care if the user sends a command to get all 10,000
records.  If you block that, all that's been accomplished is slowing down
data harvesting and eating up even more resources, as the client/user
will send ever more GETs to harvest the data.  Nothing has been gained.

To keep the casual onlooker from screwing with the URL, encode it in
base64.  It keeps honest people honest, but a hacker will pull it apart
in a second.



Re: [SQL] best paging strategies for large datasets?

2010-05-13 Thread Justin Graf
On 5/13/2010 3:43 AM, Louis-David Mitterrand wrote:
**snip***
>
>> What do you mean by quite slow??
>>  
> Like several seconds. I have to cache the results.
>

Well then I suggest posting the queries to the performance list, or here, and
letting us take a look at them.
Don't forget to include the EXPLAIN ANALYZE output and the number of records
in each table.
>> On a 30K record table count() and query speed should not be a problem..
>>  
> This query is a large multi-join of times series data, not a single
> table. And it's not (prematurely :) optimized.
>
> I'm planning a materialized view for it.
>

Here, read this:
http://www.pgcon.org/2008/schedule/events/69.en.html

The question that begs to be asked: how big are the tables the query is
accessing?

If it's not hundreds of thousands to millions of records, or on a stone-age
server, my guess is the query can be improved.


> **snip**
>> RecordCount/Limit = Number of pages
>> CurrentPage = (offset%RecordCount)/Limit
>>  
> These simple formulas will be handy.
>

Don't forget to use floor() on these in whatever language you're
programming in.
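For example, plugging in the 30K rows from this thread with a made-up page
size of 25 and offset of 300:

select floor(30000 / 25.0)          as number_of_pages,   -- 1200
       floor((300 % 30000) / 25.0)  as current_page;      -- 12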






Re: [SQL] user function and bind

2010-05-19 Thread Justin Graf
On 5/19/2010 9:56 AM, David Harel wrote:
> Hi,
> I need an example how to write user function with columns binding and 
> how to use it on PHP
> -- 
> Thanks.
>


I'm not sure I understand your question.  You want a function to
return a record type, correct?










Re: [SQL] how to construct sql

2010-06-02 Thread Justin Graf
On 6/2/2010 12:31 PM, Wes James wrote:
> On Wed, Jun 2, 2010 at 10:55 AM, Oliveiros
>   wrote:
>
>> Hi,
>> Have you already tried this out?
>>
>> select MAX(page_count_count) - MIN(page_count_count)  from page_count group
>> by page_count_pdate.
>>
>>
>> Best,
>> Oliveiros
>>  
> Oliveiros,
>
> Thx that mostly works.  I just tried it and on the days there is only
> 1 entry it is 0 since max is the same as min so max - min is 0.  Is
> there a way to take in to account the 1 entry days?
>
> Again thx - I appreciate your help :)
>
> -wes
>
>
Put in a case

select
    case when MAX(page_count_count) - MIN(page_count_count) > 0 then
        MAX(page_count_count) - MIN(page_count_count)
    else
        MAX(page_count_count)
    end
from page_count
group by page_count_pdate;






Re: [SQL] how to construct sql

2010-06-02 Thread Justin Graf
On 6/2/2010 2:52 PM, Wes James wrote:
>
>   **snip***
> Thx it is closer (with an end in the case):
>
> select
> case when MAX(page_count_count) - MIN(page_count_count)>  0 then
> MAX(page_count_count) - MIN(page_count_count)
> else
> MAX(page_count_count)
> end as day_max
> from page_count
> group by page_count_pdate order by page_count_pdate;
>
> the else puts out the total count on that day.  I would need
> max(page_count_count) - max(page_count_count_of_previous_day)
>
> thx,
>
> -wes
>

A windowing query makes sense in this case, though I'm not very good at them.
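Untested, but something along these lines might do it: lag() over the per-day
maximums (my sketch, using the columns from the query above):

select page_count_pdate,
       max(page_count_count)
         - coalesce(lag(max(page_count_count))
                      over (order by page_count_pdate), 0) as day_max
  from page_count
 group by page_count_pdate
 order by page_count_pdate;

The coalesce() covers the first day, which has no previous maximum.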





Re: [SQL]

2010-07-06 Thread Justin Graf
I wrote an article covering this on the wiki

http://wiki.postgresql.org/wiki/BinaryFilesInDB

I need to update it for 9.0, as bytea now allows hex-format strings.

http://developer.postgresql.org/pgdocs/postgres/datatype-binary.html
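For the record, a tiny illustration of the 9.0 hex format (my example; it
assumes standard_conforming_strings is on, which is the 9.0 default):

SET bytea_output = 'hex';             -- output format added in 9.0
SELECT '\x48454c4c4f'::bytea;         -- hex input: the five bytes of 'HELLO'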










Re: [SQL]

2010-07-07 Thread Justin Graf
On 7/7/2010 12:00 AM, silly sad wrote:
> On 07/06/10 21:52, Justin Graf wrote:
>
>> I wrote an article covering this on the wiki
>>
>> http://wiki.postgresql.org/wiki/BinaryFilesInDB
>>  
> there are some "red flags" in communication
> (particularly reading papers)
> one of them is "binary data" which ITSELF IS NONSENCE.
>

WHAT???

You do understand that if you don't like it you can spend time fixing it.




Re: [SQL] strangest thing happened

2010-07-07 Thread Justin Graf
Are you using PG's  sequence/auto increment???

If so.
Once PG fires off the nextval() for the sequence that number is 
considered used and gone even if the transaction that called nextval() 
is rolled back

Depending on how the app is written, nextval() might be called but the user
allowed to cancel the invoice creation before the insert into the table
completes, eating up invoice numbers.

To reset a sequence's number, call
Select setval('Sequence_Name', Value_To_Set_To);

Most people ignore this kind of annoyance when sequence numbers jump.
Now if it happens all the time, with every X hours eating up Z
sequence numbers, then one needs to dig into the logs and figure out what
is calling nextval().

Search the logs to see what is calling nextval('My_Sequence')

You may need to turn up logging to find it.


On 7/7/2010 2:59 PM, John wrote:
> I am the only developer, DBA etc.. for a small project.  Today (yesterday was
> everything was perfect) many of the sequence numbers fell behind what is the
> actual PK value.   For example the invoice PK sequence current value = 1056
> but the table PK was 1071.  Nobody (other than myself) knows how to
> edit/access the postgres server.  So
>
> 1. Does anyone know how this could have happened?? Other than human
> interaction.
>
> 2. Does anyone have a script to reset the sequences to match the tables?
>
> Thanks in advance,
>
> Johnf
>
>





Re: [SQL] strangest thing happened

2010-07-07 Thread Justin Graf
On 7/7/2010 3:42 PM, Ross J. Reedstrom wrote:
>
> Justin, you're missing that John reported that the sequences are
> _behind_ the table. This only happens for me if I've been doing
> bulk data loads. Then I use:
>
> select setval(sequence_name,max(serial_id_column)) from table_with_serial_id;
>
> You do need to trackdown how this might have happened, though. Any
> clever code doing it's own 'serial' incrementing?
>
> Ross
>


Yes, I did misread his statement, oops. =-O

The highest PK value in the table is 1071, but the next sequence value is
1056.  That's interesting and could be a big problem.

Quoting JohnF:

I'm thinking/guessing it had something to do with vacuum or the backup.
The backup is a windows product "exec" and I'm using a special plug-in
from exec for the Linux backup.  But I still can't see this actually
happening.

--

Backup Exec, hmmm.  Are you doing a file-level backup, meaning backing up the
PGDATA folder, or are you doing pg_dump?

I don't think it's a backup issue unless you have done a restore, which
would suggest there are more problems elsewhere.


Are there invoices that use up numbers 1056 to 1071 in that table???

Does the app allow resetting the sequence in an admin interface?  Many
apps have such features, and someone could have accidentally reset the
value.

I would be looking at the log files for the inserts into that table as a
means to track down the cause.  If there are no log files, or they
don't have enough detail, crank up the logging level and wait for it to
happen again.







Re: [SQL] strangest thing happened

2010-07-08 Thread Justin Graf
On 7/7/2010 5:41 PM, John wrote:
> On Wednesday 07 July 2010 03:14:40 pm Justin Graf wrote:
>
>> I would be looking at the log files for the Inserts into that table as a
>> means to track down what is the cause.  If there are no log files or
>> don't have enough detail, crank up the logging level and wait for it to
>> happen again???
>>  
>
> That is scary - let it happen again  I'm not keeping enough info in the
> log.  I actually turned off most of the info the log files are gathering
> because the system has been running for 6-7 months without an issue.  I just
> got a call around noon telling me something was going wrong.  That's when I
> discovered the sequences were the wrong values.  I'm sure there has to be
> some sort of real explanation - but I don't know what it is.
>
> Johnf
>
>

Sometimes we just don't have a choice but to let things break again to
figure out what the cause is.

I had an odd case where parent records would get deleted every once
in a while, for what appeared to be no rhyme or reason.
The app had around 1000 functions in PL/pgSQL, not counting triggers, so
figuring out the cause by just staring at the code was not practical.
The log was already set to record:
    log_statement = 'all'
    log_line_prefix = 'User %u, DB %d, Client %r, PID %p, Time %m, SID %c, LineCount %l, TID %x'
    log_destination = 'csvlog'

The logs rotated out every 30 days, so I had a good sample of the
commands sent to PG to figure out what went wrong.

As the statements were logged, all I had to do was search for
 Delete from cohead where cohead_id = 

This gave me the transaction ID and the session ID to start backtracking,
to see if a trigger or function issued the delete.  Once I had figured
out that it was a PL/pgSQL procedure, I needed to figure out where in the
application this seriously ill-thought-out DELETE-sales-order function was
called.  I threw a RAISE EXCEPTION into the procedure and waited for the
data entry people to come screaming.

Around 3 weeks later a data entry girl came and found me, asking me to
explain why her computer had ordered her to come find me ASAP and why
nothing else mattered but to stop and find me.

With that error, I now had the call stack from the app to figure out the
series of events that allowed a sales order to be deleted.

After all was said and done there was a total of 50 records deleted out
of 60,000 (not even 0.1%), not a big deal unless you are the customer whose
order was deleted.

My experience has taught me never to turn off logging, because we never know
when we may need it.







Re: [SQL] how to escape _ in select

2010-07-28 Thread Justin Graf
On 7/28/2010 12:35 PM, Wes James wrote:
> I'm trying to do this:
>
> select * from table where field::text ilike '%\_%';
>
> but it doesn't work.
>
> How do you escape the _ and $ chars?
>
> The docs say to use \, but that isn't working.
>
> ( http://www.postgresql.org/docs/8.3/static/functions-matching.html )
>
> The text between '%...%' can be longer, I'm just trying to figure out
> how to escape some things.  I've found that ' works with '' and \
> works with \\
>

Instead of escaping, how about looking at dollar ($$) quoting:

http://www.postgresql.org/docs/8.4/static/sql-syntax-lexical.html

4.1.2.4. Dollar-Quoted String Constants

While the standard syntax for specifying string constants is usually 
convenient, it can be difficult to understand when the desired string 
contains many single quotes or backslashes, since each of those must be 
doubled. To allow more readable queries in such situations, PostgreSQL 
provides another way, called "dollar quoting", to write string 
constants. A dollar-quoted string constant consists of a dollar sign 
($), an optional "tag" of zero or more characters, another dollar sign, 
an arbitrary sequence of characters that makes up the string content, a 
dollar sign, the same tag that began this dollar quote, and a dollar 
sign. For example, here are two different ways to specify the string 
"Dianne's horse" using dollar quoting:

$$Dianne's horse$$
$SomeTag$Dianne's horse$SomeTag$

Notice that inside the dollar-quoted string, single quotes can be used 
without needing to be escaped. Indeed, no characters inside a 
dollar-quoted string are ever escaped: the string content is always 
written literally. Backslashes are not special, and neither are dollar 
signs, unless they are part of a sequence matching the opening tag.

It is possible to nest dollar-quoted string constants by choosing 
different tags at each nesting level. This is most commonly used in 
writing function definitions. For example:

$function$
BEGIN
 RETURN ($1 ~ $q$[\t\r\n\v\\]$q$);
END;
$function$

Here, the sequence $q$[\t\r\n\v\\]$q$ represents a dollar-quoted literal 
string [\t\r\n\v\\], which will be recognized when the function body is 
executed by PostgreSQL. But since the sequence does not match the outer 
dollar quoting delimiter $function$, it is just some more characters 
within the constant so far as the outer string is concerned.

The tag, if any, of a dollar-quoted string follows the same rules as an 
unquoted identifier, except that it cannot contain a dollar sign. Tags 
are case sensitive, so $tag$String content$tag$ is correct, but 
$TAG$String content$tag$ is not.

A dollar-quoted string that follows a keyword or identifier must be 
separated from it by whitespace; otherwise the dollar quoting delimiter 
would be taken as part of the preceding identifier.

Dollar quoting is not part of the SQL standard, but it is often a more 
convenient way to write complicated string literals than the 
standard-compliant single quote syntax. It is particularly useful when 
representing string constants inside other constants, as is often needed 
in procedural function definitions. With single-quote syntax, each 
backslash in the above example would have to be written as four 
backslashes, which would be reduced to two backslashes in parsing the 
original string constant, and then to one when the inner string constant 
is re-parsed during function execution.






Re: [SQL] Domains, casts, and MS Access

2010-08-04 Thread Justin Graf
On 8/4/2010 1:56 PM, Richard Broersma wrote:
> On Wed, Aug 4, 2010 at 11:51 AM, Peter Koczan  wrote:
>
>
>> This is one of my first forays into ODBC, so I didn't know that was a
>> possibility. Is there any place where these are documented? Searching
>> for ODBC options yields info on connection options, but none on
>> behavior that I could find.
>>  
> I know that there are a couple of options that affect the
> representation of Booleans in the odbc driver.  I'm not sure it will
> do what you need though.
>
> However, here is the official documentation: (hopefully it helpful)
>
> http://psqlodbc.projects.postgresql.org/
>
>
>
Yes there is an option to change bools to char

Data Type Options: affects how some data types are mapped:
  - Text as LongVarChar: PostgreSQL TEXT type is mapped to
    SQLLongVarchar, otherwise SQLVarchar.
  - Unknowns as LongVarChar: unknown types (arrays, etc.) are
    mapped to SQLLongVarChar, otherwise SQLVarchar.
  - Bools as Char: bools are mapped to SQL_CHAR, otherwise to
    SQL_BIT.

My memory is fuzzy, but there are some additional settings in Access that
allow data type mapping...


