[SQL] Re: Casting

2001-01-06 Thread rob

Yes.  Casts are done like this:

SomeField::int4

or SomeField::text

The double colon '::' is the key.
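To the modulo question in the original message: the operator is % (listed among the mathematical operators in the docs), and there is also a mod(x, y) function. A minimal, untested sketch combining a cast with modulo, using an arbitrary digit string:

```sql
-- pull the 10th character out of a digit string, cast it, take it mod 11
SELECT substr('0306406152', 10, 1)::int4 % 11 AS check_calc;
```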
--rob


- Original Message -
From: "Thomas SMETS" <[EMAIL PROTECTED]>
To: "psql novice" <[EMAIL PROTECTED]>; "psql sql"
<[EMAIL PROTECTED]>
Sent: Thursday, January 04, 2001 7:31 PM
Subject: Casting


>
>
> Hi,
>
> In pgsql
>
> I'm removing characters from a string which should be numbers.
> I then want to make calculations on these numbers (calculate the ISBN
> number).
>
> Do I have to cast the chars into ints before I can do the calculations?
>
> Also, I looked in the user manual but could not find the modulo
> function. Where is it?
>
> tx,
>
> Thomas,
>
> --
> Thu Jan  4 20:19:03 CET 2001
>
> Thomas SMETS    e-mail : [EMAIL PROTECTED]
> Av. de la Brabançonne 133 / 3   Tel. : +32 (0)2 742. 05. 94.
> 1030 Bruxelles
> === Quote of the Day =
> Jealousy is all the fun you think they have.
> = End of Quote ===
>




[SQL] plpgsql grief

2001-02-12 Thread rob

Hi, I'm having some real headache problems here. Apologies for the
length, i just want to get it all out now :)

I figured moving some 'simple' db code from my application to its more
natural home in the db would work out. Bummer. Not only do I have to run
7.1 (beta 4) to be able to dynamically generate queries, I'm finding it
*extremely* difficult to get my simple functions to work (plus, for
the 'widest used open source db', I'm finding examples very hard to
come by).

Before I start: if anyone has any pointers to coding examples (and I mean
a little more than the standard postgres docs :) I'd be eternally
grateful. Failing that, can anyone help with these two simple (ahem)
codelets:

Example 1 :

create function testfunc (text) returns int4 as '
declare
  sql varchar;
  res int4;
begin
  sql=''SELECT INTO res2 id FROM ''||$1 ;
  execute sql ;
  return res;
end;
' language 'plpgsql' ;

a simple function to return the id field of a table (passed to the
function). ok, not a real world example; however, I do this:

#select testfunc('tablenam') ;
and i get
ERROR:  parser: parse error at or near "into"

ok, this is actually first things last. I'm not really bothered about
returning values into local variables and then returning them, it's just
a run through. If I can't get this right, what chance have I got at
sorting out the real work I want to do?
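For what it's worth: in 7.x plpgsql, EXECUTE does not process SELECT ... INTO at all, which is where the parse error comes from. If memory serves, the workaround is to loop over the result with FOR ... IN EXECUTE. An untested sketch of Example 1 along those lines:

```sql
create function testfunc (text) returns int4 as '
declare
  rec record;
begin
  -- EXECUTE cannot do SELECT INTO; iterate the dynamic query instead
  for rec in execute ''SELECT id FROM '' || quote_ident($1) loop
    return rec.id;
  end loop;
  return null;  -- table was empty
end;
' language 'plpgsql' ;
```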

Example 2 :

create function update_trans (text, integer, text, text, text, text,
text) returns boolean as '
declare
  tbl alias for $1 ;
begin
  execute ''insert into tbl (objid, objtbl, et, event, time, reason,
owner) values ($2, $3, $4, $5, now(), $6, $7)'';
  return 0;
end;
' language 'plpgsql' ;


# select update_trans('tablname','1' 
,'sometext','sometext','sometext','sometext','sometext') ;
ERROR:  Relation 'tbl' does not exist

dur. yeah, I know it doesn't exist, because I want to pass it in as
parameter 1. Tried substituting tbl with $1 and quote_ident($1) and
quote_ident(tbl) in the sql string, but that didn't work either. (BTW,
anyone know of any GUI interfaces that support 7.1? phpPgAdmin 2.1,
2.2.1 and 2.3 seem to balk on functions)
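For Example 2, the parameters $2..$7 are likewise invisible inside the EXECUTE'd string; table name and values alike have to be concatenated into the query text. An untested sketch (assumes quote_ident/quote_literal, which 7.1 provides; note the doubled single quotes inside the function body):

```sql
create function update_trans (text, integer, text, text, text, text, text)
returns boolean as '
begin
  -- build the whole statement as a string; $2 is an integer, so
  -- cast it (($2)::text) if the plain concatenation complains
  execute ''insert into '' || quote_ident($1)
       || '' (objid, objtbl, et, event, time, reason, owner) values (''
       || $2 || '', ''
       || quote_literal($3) || '', ''
       || quote_literal($4) || '', ''
       || quote_literal($5) || '', now(), ''
       || quote_literal($6) || '', ''
       || quote_literal($7) || '')'';
  return true;
end;
' language 'plpgsql' ;
```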

Example 2 is a prelude to a larger function (not much larger - but then
this is relative to how easy it is to code) to monitor the changes made
by a user: what they change from and to, and who/when/why. This is
already implemented in my app code - PHP - and checking out the features
available in postgres I figured I could do some kind of looping through
the OLD and NEW dataset-array things, comparing them against each other,
sorta like this:

for ($i = 0 ; $i < count($NEW) ; $i++) {
/* since $NEW and $OLD are essentially the same we can do this */
  if ($OLD[$i] != $NEW[$i])
record the change bla bla bla

}
I'm really hoping I can, as at this rate I've spent the better part of
three days trying to figure the simple things above out and the only
thing i'm about to reach is breaking point...

Sorry for the sarcasm, I'm about to pop.

Rob



[SQL] Re: plpgsql grief

2001-02-12 Thread rob


> 
> I feel your pain;^)
> 
> Here is the text of a post from Tuesday... I think it answers your question
> which is that you cannot do variable subsititution for table or field names
> inside procedures.  This is not a Postgres specific limitation, MS SQL
> Server has the same issue.
> 
> > >>>>> "DR" == David Richter
> >  writes:

Ya know, i already read this one. That's what got me on to 7.1 and using
EXECUTE :)

Argh! Apparently I've opened up a real can of worms with wanting cool
general functions, with a bit of dynamism and business logic.
However...

OK, dumped pl/pgsql, thinking pl/tcl is more my bag (didn't fancy
recompiling perl to get the shared lib, and didn't want to waste much
time struggling to somehow see if PHP could be used, since PHP is my
current 'main' lang). Pl/tcl supports dynamic queries - great! However
it introduced its own little wrinkles. Now for starters I've no
knowledge of tcl; however, doing my job means learning loads of
extraneous crap, so what's another lang... I reckon I can do it, just
need a little help. So here goes.

found out some things too - trigger functions must return opaque (ok,
not 100% on what opaque is, but I'm not worrying about that just yet),
also can't have parameters in the function declaration - odd, but this
led on to: how the hell would you pass the parameters to the func anyway
if it's attached to a trigger - like INSERT - and you merely type the SQL:

  insert into tablename values bla blabla ;

and invoke the trigger, which in turn invokes the function ... erm,
where's the transport for passing the parameters? (sorry, I know my
programmer's lingo isn't 100%).

here's my tcl'ed function, which I attached to my main table on INSERT.
It's supposed to log the action to another table passed to it - the idea
is I've generalised some logging tables to various 'main' tables, and
these functions are supposed to record various db and system level
events. The function represents a bit of copying and playing - dunno what
TPL is, however it works. I'll address further issues, error checking,
processing etc. when I've got it actually doing something:

create function update_trans () returns opaque as '
  spi_exec -array TPL "insert into $1 (objid, objtbl, et, event, time,
reason, owner) values (\'$2\', \'$3\', \'$4\', \'$5\', now(), \'$6\',
\'$7\')"
' language 'pltcl' ;

which works!! well, gets invoked; however, it doesn't know what 'text'
is. Not sure where that got picked up from, but it obviously didn't work
- however the insert did. Now I found out there's a parameter array
tgargs or something, but how does this get set? How does it then get
accessed in the function?
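If I recall the pltcl docs correctly, a trigger procedure declares no parameters; the arguments written in the CREATE TRIGGER statement arrive in the Tcl list $args, alongside $TG_op and the NEW/OLD arrays. A hedged, untested sketch (the function, table and trigger names here are made up):

```sql
create function log_insert () returns opaque as '
  # CREATE TRIGGER arguments arrive in the Tcl list $args, not as $1..$n
  set tbl [lindex $args 0]
  spi_exec "insert into $tbl (event, time) values (''$TG_op'', now())"
  return OK
' language 'pltcl' ;

create trigger log_me after insert on maintable
  for each row execute procedure log_insert('logtable');
```

Returning OK tells the trigger manager to let the operation proceed unchanged.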

OK, being the resourceful chap I am (relatively), how about this, a
slight diversion. Why not just make up some functions which do the main
insert, business logic and event logging stuff explicitly, and sack off
doing sql inserts/updates etc.

(can't return opaque here, but that's no bother, i think (hope))

New function

create function update_trans (text, int4, text, text, text, text, text)
returns boolean as '
  spi_exec -array TPL "insert into $1 (objid, objtbl, et, event, time,
reason, owner) values (\'$2\', \'$3\', \'$4\', \'$5\', now(), \'$6\',
\'$7\')"
  return 1
' language 'pltcl' ;

then call these from my code like

select update_trans (bla, 1, bla, bla blabl) ;

which works also. I get to pass all the parameters I want, and have full
control over execution. OK, this loses part of the reason for doing this
in the first place - tracking people who sidetrack the app code by
modifying the db directly - however, since no one should be doing that
anyway, no problem. (note 'should').

Again, apologies for the verbose message - I feel the 'fuller picture' is
more useful in the long run, rather than disconnected questions. Well, it
is to me when I'm on your side of the fence.

Thanks for the reply BTW. Oh, and why is this news group only accessible
during late afternoon, from 3pm'ish GMT? I'm accessing it from the UK. All
morning, for two days, I couldn't get on - server busy errors.

Oh, and before I forget - over several months of news group postings
there has been recognition of the need for examples for us newbies,
and some mention of people compiling various docs for just such a
purpose - did anyone get anywhere with any of these? They were several
months ago. I'm certainly gaining some real gotcha-type info on all of
this :)

Now I've had it. Burned out. So off to the pub and henceforth become as
drunk as a skunk !

Regards

Rob



Re: [SQL] Rank

2004-05-04 Thread Rob
Martin Knipper wrote:
Am 04.05.2004 16:11 schrieb george young:
On Sun, 2 May 2004 02:22:37 +0800
"Muhyiddin A.M Hayat" <[EMAIL PROTECTED]> threw this fish to the penguins:

I have the below table:

 id | site_name | point
----+-----------+-------
  1 | Site A    |    40
  2 | Site B    |    90
  3 | Site D    |    22
  4 | Site X    |    98

Would like to calc the rank for each site, to look like:

 id | site_name | point | rank
----+-----------+-------+------
  1 | Site A    |    40 |    3
  2 | Site B    |    90 |    2
  3 | Site D    |    22 |    4
  4 | Site X    |    98 |    1

Well, a simple minded solution would be:
select id,site_name,point,(select count(*) from mytable t2
   where t2.point >= t1.point) as rank from mytable t1;

 id | site_name | point | rank
----+-----------+-------+------
  4 | Site X    |    98 |    1
  2 | Site B    |    90 |    2
  1 | Site A    |    40 |    3
  3 | Site D    |    22 |    4
(4 rows)

If mytable is huge this may be prohibitively slow, but it's worth a try.
There's probably a self join that would be faster.  Hmm... in fact:
select t1.id,t1.site_name,t1.point,count(t2.point) as rank from mytable t1,mytable t2
   where t2.point >=t1.point group by t1.id,t1.site_name,t1.point;
 id | site_name | point | rank
----+-----------+-------+------
  3 | Site D    |    22 |    4
  2 | Site B    |    90 |    2
  4 | Site X    |    98 |    1
  1 | Site A    |    40 |    3
(4 rows)

-- George Young

Another possibilty is to use a sequence:
demo=# create temporary sequence ranking;
demo=# select *,nextval('ranking') as rank from yourTable order by
site_name asc;
Greetings,
Martin
wouldn't it have to be:
select *, nextval('ranking') as rank
  from yourTable
 order by point desc;
for the ranking to work?
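On later PostgreSQL releases (8.4 and up), the rank() window function expresses this directly, without the self join or sequence tricks; a sketch:

```sql
SELECT id, site_name, point,
       rank() OVER (ORDER BY point DESC) AS rank
  FROM mytable;
```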

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] Equivalant of SQL Server's Nchar and NVARCHAR

2004-04-29 Thread Rob
kumar wrote:
Dear friends,
Is there any equivalent datatype to SQL Server's NCHAR and NVARCHAR
available with Postgres 7.3.4? I want to store characters, special
characters and numbers.
Please shed some light.
 
Thanks
Kumar
I only use ascii but...
I believe postgresql varchar is the same as Oracle/Sybase/MS SQL
nvarchar, even though it doesn't explicitly say so here:

http://www.postgresql.org/docs/7.4/static/datatype-character.html
Here is a short example:
create table
nvctest (
utf8fld varchar(12)
);
insert into nvctest
select convert('PostgreSQL' using ascii_to_utf_8);
select * from nvctest;
text functions including encoding conversions are here:
http://www.postgresql.org/docs/current/static/functions-string.html
 and
http://www.postgresql.org/docs/current/static/functions-string.html#CONVERSION-NAMES
Good Luck,
Rob
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[SQL] Sql Query help: Remove Sub Selects

2005-07-13 Thread Rob
Hi Gang,

I'm trying to optimize a query:

This query below returns something like this:

 event_id | eu_tid | event_name            | event_when     | day | mon | start    | end      | event_users                    | contact_phone       | contact_pager | num_opps
----------+--------+-----------------------+----------------+-----+-----+----------+----------+--------------------------------+---------------------+---------------+----------
   196651 |      1 | Show Event Type Color | Jul 06th, 2005 | 06  | 07  | 04:27 pm | 05:27 pm | {"Mickey Mouse","Donal Duck"}  | {555-555-,555-555-} | {}            |        0
   203651 |      1 | Schedule Appt.        | Jul 08th, 2005 | 08  | 07  | 02:35 pm | 03:35 pm | {"George Bush","Bill Clinton"} | {}                  | {}            |        0

Instead of doing multiple selects using array_accum, I would like to use an
INNER JOIN

I'm at a stand-still. I just can't seem to get anything rolling.

Any help is greatly appreciated.

-- Start Query

SELECT eu.event_id,
   eu.eu_tid,
   e.event_name,
   to_char(e.event_when, 'Mon DDth, ') AS event_when,
   to_char(e.event_when, 'DD') AS day,
   to_char(e.event_when, 'MM') AS mon,
   to_char(e.event_when, 'HH:MI am') AS start,
   to_char((e.event_when + e.duration), 'HH:MI am') AS end,

   (SELECT array_accum(get_username(eu2.user_id)) FROM event_users eu2
inner join user_table ut9 ON ut9.user_id = eu2.user_id
WHERE eu2.event_id = e.event_id AND eu2.user_id != 4223651) AS
event_users,

(SELECT array_accum(ut.phone_nbr) FROM event_users eu2 INNER JOIN
user_table ut ON ut.user_id = eu2.user_id
WHERE eu2.event_id = e.event_id AND eu2.user_id != 4223651) AS
contact_phone,

(SELECT array_accum(ut.pager_ph_nbr) FROM event_users eu2 INNER JOIN
user_table ut ON ut.user_id = eu2.user_id
WHERE eu2.event_id = e.event_id AND eu2.user_id != 4223651) AS
contact_pager,

   (SELECT count(*) FROM opp_events AS oe WHERE oe.event_id =
e.event_id) AS num_opps
  FROM events AS e
   INNER JOIN event_users AS eu
   ON eu.event_id = e.event_id
  AND eu.user_id = 4223651
LEFT JOIN event_repeats er
   ON er.event_id = e.event_id
 WHERE e.event_when BETWEEN '2005-07-03 0:00:00' AND '2005-07-09 23:59:59'
   AND e.status != 0 AND er.repeat_type IS NULL
 ORDER BY e.event_when ASC
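Since the three array_accum sub-selects share the same join and filter, one possibility is to fold them into a single LEFT JOIN plus GROUP BY. An untested sketch (it assumes each user has at most one user_table row, and that grouping by the underlying e.event_when/e.duration columns is accepted for the to_char expressions):

```sql
SELECT eu.event_id,
       eu.eu_tid,
       e.event_name,
       to_char(e.event_when, 'Mon DDth, ') AS event_when,
       to_char(e.event_when, 'DD') AS day,
       to_char(e.event_when, 'MM') AS mon,
       to_char(e.event_when, 'HH:MI am') AS start,
       to_char((e.event_when + e.duration), 'HH:MI am') AS end,
       array_accum(get_username(eu2.user_id)) AS event_users,
       array_accum(ut.phone_nbr)              AS contact_phone,
       array_accum(ut.pager_ph_nbr)           AS contact_pager,
       (SELECT count(*) FROM opp_events AS oe
         WHERE oe.event_id = e.event_id)      AS num_opps
  FROM events AS e
 INNER JOIN event_users AS eu  ON eu.event_id = e.event_id
                              AND eu.user_id = 4223651
  LEFT JOIN event_repeats er   ON er.event_id = e.event_id
  LEFT JOIN event_users AS eu2 ON eu2.event_id = e.event_id
                              AND eu2.user_id != 4223651
  LEFT JOIN user_table AS ut   ON ut.user_id = eu2.user_id
 WHERE e.event_when BETWEEN '2005-07-03 0:00:00' AND '2005-07-09 23:59:59'
   AND e.status != 0 AND er.repeat_type IS NULL
 GROUP BY eu.event_id, eu.eu_tid, e.event_name, e.event_when, e.duration
 ORDER BY e.event_when ASC;
```

One caveat: events with no other attendees will accumulate arrays containing a single NULL rather than the empty arrays the sub-select version produced.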

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[SQL] Storing HTML in table

2005-11-26 Thread Rob
Hi:

I need to store markup in a database to control how the browser renders the
page information.

I would like complete control over this information, so that if sometime
in the future it's decided to totally redesign the layout, I can. Also, at
some point a tool will be created so novice computer users can enter nicely
formatted markup - meaning you won't have to know HTML to use this tool.

In its raw form, data will be stored like this:

some header goes here

more text here

A few ideas are to:
 * Use css
Store the data like: text here


 * Store in XML and XSLT

 * Don't use HTML - but something that will parse keys into HTML,
e.g. [:bold] or [:img]

But what happens if you have a single item that's [:bold] or the like,
and you need to take away the bold and change the font size? It just
seems there should be an easy way of doing this that I'm not thinking of.

Does anyone recommend a certain way for storing HTML in a table?

Thanks,

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[SQL] Help with displaying data types.

2000-11-23 Thread Rob Burne

I am trying to perform a query that will:

1. Return all user defined attributes from a relation.

2. Also return the datatype of each attribute.

So far I can only achieve part 1 with the following:

select attname from pg_class,pg_attribute
where relname=relation_name and
attrelid = pg_class.oid and
atttypid != 26 and
atttypid != 27 and
atttypid != 28 and
atttypid != 29;

But what do I need to add to return the datatype of each attribute? Any
suggestions, please?
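One hedged, untested sketch is to join pg_type on the attribute's type oid; filtering on attnum > 0 drops the system columns more robustly than enumerating type oids:

```sql
SELECT a.attname, t.typname
  FROM pg_class c, pg_attribute a, pg_type t
 WHERE c.relname = 'relation_name'
   AND a.attrelid = c.oid
   AND a.attnum > 0          -- user-defined attributes only
   AND t.oid = a.atttypid;
```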

Regards,

Rob.
_
Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.

Share information about yourself, create your own public profile at 
http://profiles.msn.com.




[SQL] Access Permissions/Security

2000-12-03 Thread Rob Burne

Hi,

I need to find the query that will return all security access permissions.

For example if a INSERT has been granted to a particular user, how do I 
perform the query which will return this result. I know that the \z command 
returns this, but I need the raw SQL to do it - any ideas please?
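psql's \z essentially reads the ACL column of pg_class; a minimal sketch of the raw SQL (untested against the old catalogs - relacl is NULL when default permissions apply):

```sql
SELECT relname, relacl
  FROM pg_class
 WHERE relkind IN ('r', 'v', 'S')   -- tables, views, sequences
   AND relname !~ '^pg_'
 ORDER BY relname;
```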

Many thanks,

Rob.
_
Get more from the Web.  FREE MSN Explorer download : http://explorer.msn.com




[GENERAL] Re: [SQL] Permissons on database

2001-03-12 Thread Rob Arnold

Are you looking for "grant all to public" or "grant select to public"?


- Original Message -
From: "Roland Roberts" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Wednesday, March 07, 2001 3:40 PM
Subject: Re: [SQL] Permissons on database


> > "bk" == Boulat Khakimov <[EMAIL PROTECTED]> writes:
>
> bk> How do I grant permissions on everything in the selected
> bk> databes?
>
> bk> GRANT doesnt take as on object database name nor does it
> bk> accept wild chars
>
> Attached is some Perl code I wrote long ago to do this.  This
> particular code was done for Keystone, a problem tracking database and
> it would do a "GRANT ALL".  Modify it as needed.  Last I checked it
> worked with both PostgreSQL 6.5.x and 7.0.x
>
>






>
> roland
> --
>PGP Key ID: 66 BC 3B CD
> Roland B. Roberts, PhD RL Enterprises
> [EMAIL PROTECTED] 76-15 113th Street, Apt 3B
> [EMAIL PROTECTED]  Forest Hills, NY 11375
>
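Since GRANT accepts neither a database name nor wildcards, the usual trick (presumably what the stripped Perl attachment does) is to generate one GRANT per table from the catalog and feed the output back to psql; an untested SQL-only sketch:

```sql
-- emit a GRANT statement for every user table and sequence;
-- run the resulting lines through psql
SELECT 'GRANT ALL ON "' || relname || '" TO PUBLIC;'
  FROM pg_class
 WHERE relkind IN ('r', 'S')
   AND relname !~ '^pg_';
```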


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [SQL] How does postgres handle non literal string values

2002-11-26 Thread Rob Hills
Hi,

On 25 Nov 2002 at 4:57, javaholic wrote:

Your problem is really a JSP one rather than a postgres problem, and probably doesn't 
really belong on this list.  That said, I know much more java/jsp than I do postgres, 
so I'll 
try and help.

> I have some jsp code that should insert a user name and password into
> a table called login.
> 
> Instead of inserting the values given by the client, it inserts the
> literal strings 'username' and 'password'. The problem is somewhere in
> the INSERT statement.

Yup, your INSERT statement is doing exactly what you've asked it to do, inserting the 
literal strings 'username' and 'password' into the table.

> <%
> String insertString =
> "INSERT INTO  \"login\" (\'user\', \'password\')
> VALUES ('username', 'password')";
> %>

To do it correctly using JSP, try the following:

<%
String insertString = "INSERT INTO \"login\" (\"user\", \"password\")
VALUES ('" + username + "', '" + password + "')";
%>

However, you would probably be better off using a PreparedStatement object rather 
than a Statement for various reasons, but especially to avoid trying to get the 
single- 
and double-quotes right in the above statement.

HTH,


Rob Hills
MBBS, Grad Dip Com Stud, MACS
Senior Consultant
Netpaver Web Solutions
Tel:(0412) 904 357
Fax:(08) 9485 2555
Email:  [EMAIL PROTECTED]


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[SQL] searching cidr/inet arrays

2005-04-25 Thread Rob Casson
i'm having trouble figuring out how to search in inet arrays... it's
been a long time since i used postgres array support, so i may just be
bone-headed.

how can i determine if a given ip address is contained in the subnet
declaration inside an array?

{134.53.25.0/24,134.53.0.0/16}
{134.53.24.0/24}

i'd like to see which rows match an ip of, say, 134.53.24.2.

thanks in advance, and sorry if this is a faq... i've googled
site:archives.postgresql.org, but haven't found my solution.
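Assuming the subnets live in a cidr[] or inet[] column (the table and column names below are made up), the containment operator <<= combined with ANY may be all that's needed; an untested sketch:

```sql
-- rows whose subnets array contains a network holding the address
SELECT *
  FROM nets
 WHERE '134.53.24.2'::inet <<= ANY (subnets);
```

Here <<= reads "is contained within", so both example rows with 134.53.24.0/24 (or the broader 134.53.0.0/16) should match.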

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[SQL] 'select where' using multiple columns.

2005-08-05 Thread Rob Kirkbride

Hi,

I've googled around for this but can't see a decent way of doing this :

I've got a person's name which is being stored in 4 columns:
Title, Forename_1, Forename_2, Surname. I want to allow a search, say,
for 'John Smith'. Problem is I can't just break it up into forename and
surname, because I won't always know which part is which.

Is there a way to do something like a
'select * where forename_1,forename_2,surname like '%String%'   ??

Thanks for any help. Apologies if its a FAQ.

Rob


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] 'select where' using multiple columns.

2005-08-05 Thread Rob Kirkbride

Ian Johannesen wrote:


Hi.

Rob Kirkbride wrote:
 

I've got a persons name which is being stored say in 3 columns :- 
Title, Forename_1, Forename_2, Surname. I want to allow a search say for

'John Smith'. Problem is I can't just break it up into forename and
surname because I won't also know.
Is there a way to do something like a
'select * where forename_1,forename_2,surname like '%String%'   ??

   


SELECT * FROM table WHERE forename_1 || ' ' || forename_2 || ' ' ||
surname LIKE '%String%';

 

Thanks for the quick response. I've tried that and it works fine. Thanks 
a lot and thanks to Daq and Helder.


Rob

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] Type inheritance

2009-06-06 Thread Rob Sargent
On Sat, Jun 6, 2009 at 8:30 AM, Richard Broersma  wrote:

> On Sat, Jun 6, 2009 at 12:10 AM, Gianvito Pio
> wrote:
>
> >  That value doesn't have to be fixed, but I want to define it in a
> > way that it changes its structure when the sensor type changes. For
> > example, for Temperature sensor I would like to have just a float number,
> > but for the humidity I could need of absolute and relative humidity
> > (2 floats). I'd just like to know if I can say that the VALUE is a
> composite
> > type...that I have then to specialize in Temp Type and Humidity
> type...using
> > them in their particular case.
>
> I think that best solution for what you want to achieve is to design
> you own vertically partitioned table hierarchy.   PostgreSQL's table
> inheritance isn't going to allow that relationships that you may want
> between the sub-types.
>
> Another PostgreSQL specific feature that allows for hierarchical data
> is the contrib module h-store.  Its kind-of like EAV for a column
> instead of a table.



Are we storing sensor types (name, etc., sensor-report), where the report
column has to be polymorphic, or are we storing instances of sensor results
(sensor-id, sensor-report)? For the former, a single table with an array of
numbers in the report column might work; for the latter, it seems that
separate type-specific tables extending/dependent on a table of sensors
could do the trick. What you're using to access the store might also affect
the design.


[SQL] 'cost' and 'rows' for volatile function

2009-06-08 Thread Rob Sargent
pg-admin is showing 'COST 100' and 'ROWS 1000' for my explicitly 
VOLATILE functions.  Is one hundred the new ninety-nine? and therefore 
these values are ignored by the planner?




--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


[SQL] Taking the cache out of the equation?

2009-06-09 Thread Rob Sargent
Caching helps a *lot* and I'm thankful for that, but I would like to take
it out of the picture as I massage my queries for better performance.
Naturally the first invocation of the query cannot take advantage of the
cache, and these queries would normally only be called once for the same
target data. What tricks are there to flush, ignore, or circumvent the
caching boost? (Especially in the production environment.)




Re: [SQL] setting the where clause

2009-06-10 Thread Rob Sargent


Tom Lane wrote:

johnf  writes:
  
I'm am programming in python using the Dabo modules.  www.dabodev.com if your 
interested.  Dabo is a framework that provides an easy way to build desktop 
app's.  To clear a data entry form. I have been setting the where clause 
to "where 1=0".  This of course retrieves 0 records and my form will display 
nothing.  It has been suggested that it might be better to set the where 
clause to a primary key value that does not exist in the table  "where PK 
= -999".   



Whoever said that doesn't know what they're talking about.  It's not
faster (because it forces an actual index probe to happen) and it's not
safer (what if one day that PK value exists?).

I'd personally go with "where false"; why not say what you mean rather
than forcing humans and computers to deduce that the condition is
constant false?

regards, tom lane

  


Of course you would do the world a favour if you wrote a client-side
clear-the-screen routine that didn't pester the net and the server
unnecessarily.




Re: [SQL] Avoiding "will create implicit index" NOTICE

2009-06-11 Thread Rob Sargent

A. Kretschmer wrote:

In response to Bryce Nesbitt :
  

Hmm, no.  I still get the NOTICE. How can I create the primary key
without triggering a NOTICE?



Sure, set client_min_messages='...'

test=*# create table bla(id int primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "bla_pkey" for table 
"bla"
CREATE TABLE
test=*# rollback;
ROLLBACK
test=# set client_min_messages='warning';
SET
test=*# create table bla(id int primary key);
CREATE TABLE
test=*#


Andreas
  

Funny thing is I'm using set client_min_messages in my own scripts!

Also, I was assuming the OP was in a scripting/temp-table mode, and the
notion of an explicit primary key (vs. simply stating the index) isn't of
great value in that realm, I don't think.





Re: [SQL] polymorphic function in 7.4 vs. 8.3

2009-06-11 Thread Rob Sargent

Richard Rosenberg wrote:

On Thursday 11 June 2009 14:49:46 Tom Lane wrote:

  

Sure you can't move the DB off 7.4?  There would be pretty considerable
benefits from adopting some recent release instead.

regards, tom lane



Don't I know it. I am SOL as the machine is hosted/shared out by an external 
provider. I can do it by getting rid of the polymorphism - breaking the 
columns into separate args - as you say:
  
Hmm.  Some subliminal coercion seems in order here.  "Gee, I hope it 
doesn't leak out that  is so poorly run that 
they're stuck on postgres 7.4 with no hope of ever catching up with the 
rest of the world"





Re: [SQL] left outer join on more than 2 tables?

2009-06-16 Thread Rob Sargent

Richard Broersma wrote:

On Tue, Jun 16, 2009 at 1:59 PM, Carol
Cheung wrote:
  

I would like to find the counts of complaints by region and I would like all
regions to be displayed, regardless of whether or not complaints exist for
that region. Is left outer join what I'm looking for?



   SELECT R.region_name, Count(*) AS RegionComplaints
 FROM Region AS R
LEFT JOIN City AS Ci
   ON R.id = Ci.region_id
LEFT JOIN Complaint AS Cm
   ON Ci.id = Cm.city_id
 GROUP BY R.region_name;


  
Is there a city without a reference to region?  And wouldn't you want to 
count(cm.id)?




Re: [SQL] left outer join on more than 2 tables?

2009-06-16 Thread Rob Sargent

Richard Broersma wrote:

On Tue, Jun 16, 2009 at 2:40 PM, Rob Sargent wrote:

  

Is there a city without a reference to region?



I don't know, but the OP wanted to know complaints by region.

  

I didn't try this, but with regionless cities, you may need a full join
if you want a complete accounting of all complaints, some being logged
to the null region.





  

 And wouldn't you want to count(cm.id)?



Count(cm.id) and Count(*) produce the same result.  But I like
Count(*) more since it helps to correctly express the idea that we are
counting rows per group and not cm.id(s) per group.



  


"Same result" is not true.  I loaded tables.

Using count(*) you get count = 1 for regions without complaints. Using 
count(complaint.id) you get count = 0. (The difference amounts to 
counting the left hand side (region) vs. the right hand side (complaint), 
which I believe is what the OP is after.)
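The distinction shows up on the outer join's padding rows: count(*) counts the row itself, while count(cm.id) skips NULLs. A simplified two-table sketch (untested; the OP's actual schema has a city table in between):

```sql
-- a region with no complaints still yields one padded row from the LEFT JOIN
SELECT r.region_name,
       count(*)     AS all_rows,    -- 1 for a complaint-less region
       count(cm.id) AS complaints   -- 0 for a complaint-less region
  FROM region r
  LEFT JOIN complaint cm ON cm.region_id = r.id
 GROUP BY r.region_name;
```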






Re: [SQL] 2 tables or two db?

2009-06-18 Thread Rob Sargent

ivan marchesini wrote:

Dear postgres users,

Suppose I have two tables of data.

suppose the two table are really similar in dimensions 


suppose they will receive the same number and type of queries.

in terms of performance (speed of answer), is it better to place the
two tables in the same db or in two different dbs?

may be it is a stupid question, but

many many thanks...

 Ivan




  


My guess is that only physical separation will have any positive impact;
otherwise the two datasets are competing for the same resources. At
least separate discs and controllers to avoid disc contention, and separate
postgres instances to allow separate caches (in effect, separate memory).




[SQL] Client-side compression

2009-06-23 Thread Rob Sargent


Not sure if this belongs here or on the admin or performance list.  
Apologies if so. (And this may be a second posting as the first was from 
an un-registered account.  Further apologies)


My assumption is that any de/compression done by postgres would be 
server-side.


We're considering minimizing bandwidth utilization by using client-side 
compression on a column value that will typically be multi-megabyte in 
size.  We would use ALTER TABLE SET STORAGE EXTERNAL to prevent the 
server from un-necessary compression.


Is this generally worthwhile?  I haven't found any thread on the subject 
of client-side compress so any pointer more than welcome.


Is there a great penalty for a query which delves into the value, given 
that the server will not be aware it's compressed?  I assume we're 
pretty much on our own to prevent such actions (i.e. the app can never 
query against this column via sql).






Re: [SQL] Bucketing Row Data in columns

2009-06-25 Thread Rob Sargent
I would be suspicious of this sort of solution of turning rows into 
columns by means of a series of correlated sub-selects. Once the data 
set gets large and the number of columns goes over 2 or 3, this will in 
all likelihood not perform well.

I had the pleasure of re-writing a "report" which was based on count() 
(similar to sum()) per user_id, with the counts going into various 
columns per user. 18000 users, a dozen columns from a table of 2 million 
rows; the report took >1,000,000 seconds (yes, almost 12 days) to complete.
The re-write runs in 5-10 minutes (now at 10M rows) by getting the counts as 
rows (user, item, count) into a temp table and making the columns from 
the temp table (pl/pgsql). Getting the counts takes half the time; making 
the flattened report takes the other half.
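For a small, fixed number of buckets, conditional aggregation is another single-pass option, and it also yields 0 instead of NULL for missing days; an untested sketch against the orders table described in the quoted message:

```sql
-- one row per user, one summed column per day
SELECT user_id,
       SUM(CASE WHEN create_timestamp = DATE '2009-01-01'
                THEN amount ELSE 0 END) AS amount_day1,
       SUM(CASE WHEN create_timestamp = DATE '2009-01-02'
                THEN amount ELSE 0 END) AS amount_day2,
       SUM(CASE WHEN create_timestamp = DATE '2009-01-03'
                THEN amount ELSE 0 END) AS amount_day3
  FROM orders
 GROUP BY user_id;
```

An unlimited number of day columns, though, still needs generated SQL (pl/pgsql or app code), since a query's column list is fixed at parse time.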





Oliveiros Cristina wrote:

Hello, Sandeep,
 
I am not sure if this is what you want.
 
I came up with this query
 
SELECT *
FROM (SELECT user_id,SUM(amount) as day1 FROM t1 WHERE 
create_timestamp = '2009-1-1' GROUP BY "user_id") a
NATURAL FULL JOIN (SELECT user_id,SUM(amount) as day2 FROM t1 WHERE 
create_timestamp = '2009-1-2' GROUP BY "user_id") b
NATURAL FULL JOIN (SELECT user_id,SUM(amount) as day3 FROM t1 WHERE 
create_timestamp = '2009-1-3' GROUP BY "user_id") c
 
The solution is not totally correct because it returns NULL in the 
places you return 0.

It seems the SUM() returns NULL when it gets an all NULL column...
Is it problematic for your application ?
 
Also, i 'm not sure if I fully understand your last sentence
/lets assume the buckets are fixed i.e 3 only. but I wish to get them 
unlimited i.e day 1 to day 20./
 
You say that the buckets are fixed at 3. So, you mean the table output 
will always have 4 columns? 3 days plus one for user_id ?

If you want 20 buckets it must be a different query...
 
Could you please clarify what you mean when you say that you want the 
buckets to be unlimited?
 
Best,

Oliveiros
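One way to address the NULL-vs-0 issue mentioned above is to wrap the query in an outer select and COALESCE each summed column; a sketch based on the query in this thread:

```sql
SELECT user_id,
       COALESCE(day1, 0) AS day1,
       COALESCE(day2, 0) AS day2,
       COALESCE(day3, 0) AS day3
FROM (
  SELECT *
  FROM (SELECT user_id, SUM(amount) AS day1 FROM t1
        WHERE create_timestamp = '2009-1-1' GROUP BY user_id) a
  NATURAL FULL JOIN
       (SELECT user_id, SUM(amount) AS day2 FROM t1
        WHERE create_timestamp = '2009-1-2' GROUP BY user_id) b
  NATURAL FULL JOIN
       (SELECT user_id, SUM(amount) AS day3 FROM t1
        WHERE create_timestamp = '2009-1-3' GROUP BY user_id) c
) q;
```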

- Original Message -
 
*From:* Sandeep 

*To:* pgsql-sql@postgresql.org 
*Sent:* Wednesday, June 24, 2009 5:39 PM
*Subject:* [SQL] Bucketing Row Data in columns

Hi all,
I need help on creating a sql, not a problem even if its pl/sql

I have orders table schema is as follow

orders(order_id,user_id, create_timestamp, amount)

and I want to generate a report for the past 3 days, bucketing
purchases (i.e. SUM(amount) every day) into columns,
i.e. the result will have these columns:

(user_id, amount_day1, amount_day2, amount_day3)

ex:
I am leaving out order_id; assume they are auto-incrementing and unique.
date format dd/mm/
(user_id, create_timestamp, amount)
(user1, 01/01/2009,100)
(user1, 01/01/2009,100)
(user2, 01/01/2009,100)
(user2, 02/01/2009,100)
(user2, 02/01/2009,100)
(user1, 02/01/2009,100)
(user2, 03/01/2009,100)
(user2, 03/01/2009,100)
(user3, 03/01/2009,100)


result

(user_id, amount_day1, amount_day2, amount_day3)
(user1, 200, 200, 0)
(user2, 100, 200, 200)
(user3, 0, 0, 100)


hope you guys got what I am trying to generate through sql.

I could get this data in each row, but I want it in columns.
Can anyone help me on this? Let's assume the buckets are fixed, i.e.
3 only, but I wish to get them unlimited, i.e. day 1 to day 20.

Regards
Sandeep Bandela






Re: [SQL] it's not NULL, then what is it?

2009-06-30 Thread Rob Sargent

Tena Sakai wrote:


Hi Everybody,

I have a table called gallo.sds_seq_reg_shw,
which is like:

  canon=# \d gallo.sds_seq_reg_shw
   Table "gallo.sds_seq_reg_shw"
Column  |  Type   | Modifiers
  --+-+---
   name | text|
   response | text|
   n| integer |
   source   | text|
   test | text|
   ref  | text|
   value| real|
   pvalue.term  | real|
   stars.term   | text|
   gtclass.test | text|
   fclass.test  | text|
   gtclass.ref  | text|
   fclass.ref   | text|
   markerid | integer |
   maf  | real|
   chromosome   | text|
   physicalposition | integer |
   id   | text|
   ctrast   | text|
 
I am interested in the column maf (which is real):


  canon=# select maf from gallo.sds_seq_reg_shw
  canon-#  order by maf asc;
   maf
  -

   0.000659631
   0.000659631
   0.000659631
   0.000659631
.
  (truncated in the interest of brevity)
.

Another way to look at this column is:

  canon=# select maf from gallo.sds_seq_reg_shw
  canon-#  order by maf desc;
   maf
  -
 
 
 
.

  (truncated in the interest of brevity)
.

These rows shown are blanks, as far as I can tell.
But...

  canon=# select count(maf) from gallo.sds_seq_reg_shw;
   count
  ---
   67284
  (1 row)
 
  canon=# select count(maf) from gallo.sds_seq_reg_shw

  canon-#  where maf ISNULL;
   count
  ---
   0
  (1 row)
 
  canon=#

  canon=# select count(maf) from gallo.sds_seq_reg_shw
  canon-#  where maf NOTNULL;
   count
  ---
   67284
  (1 row)

My confusion is that if they are real and not null,
what are they?  How would I construct a query to do
something like:

 select count(maf)
   from gallo.sds_seq_reg_shw
  where maf ISBLANK;

Thank you in advance.

Regards,

Tena Sakai
tsa...@gallo.ucsf.edu


Maybe something like

   select ']' || maf::text || '[' -- just to see where the value
   starts/stops

or

   select length(maf::text)

but I suspect you're getting NaN or something unprintable in your 
environment?
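If the suspicion is NaN, one hedged check (a real column can hold NaN, and PostgreSQL treats NaN as equal to itself in comparisons):

```sql
SELECT count(*) FROM gallo.sds_seq_reg_shw WHERE maf = 'NaN'::real;
```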




Re: [SQL] it's not NULL, then what is it?

2009-07-01 Thread Rob Sargent

So they were null, and null turns out to be a seven-character blank string!?

Btw, you can change the displayed value of null with
   \pset null nil
and you will see 4+ million 'nil's in your output


Tena Sakai wrote:


Hi Osvaldo,

> Try:
> SELECT count(*) FROM gallo.sds_seq_reg_shw;
> SELECT count(*) FROM gallo.sds_seq_reg_shw WHERE maf IS NULL;
> SELECT count(*) FROM gallo.sds_seq_reg_shw WHERE maf IS NOT NULL;

> Don't use count(maf), use count(*).

Indeed!

  canon=# SELECT count(*) FROM gallo.sds_seq_reg_shw;
count 
  -

   4645647
  (1 row)

  canon=# SELECT count(*) FROM gallo.sds_seq_reg_shw WHERE maf IS NULL;
count 
  -

   4578363
  (1 row)

  canon=# SELECT count(*) FROM gallo.sds_seq_reg_shw WHERE maf IS NOT 
NULL;

   count
  ---
   67284
  (1 row)

$ dc
4578363 67284 + p q
4645647
$

Many thanks, Osvaldo.

Regards,

Tena Sakai
tsa...@gallo.ucsf.edu




-Original Message-
From: Osvaldo Kussama [mailto:osvaldo.kuss...@gmail.com]
Sent: Tue 6/30/2009 6:49 PM
To: Tena Sakai
Subject: Re: [SQL] it's not NULL, then what is it?

2009/6/30 Tena Sakai :
> Hi Everybody,
>
> I have a table called gallo.sds_seq_reg_shw,
> which is like:
>
>   canon=# \d gallo.sds_seq_reg_shw
>Table "gallo.sds_seq_reg_shw"
> Column  |  Type   | Modifiers
>   --+-+---
>name | text|
>response | text|
>n| integer |
>source   | text|
>test | text|
>ref  | text|
>value| real|
>pvalue.term  | real|
>stars.term   | text|
>gtclass.test | text|
>fclass.test  | text|
>gtclass.ref  | text|
>fclass.ref   | text|
>markerid | integer |
>maf  | real|
>chromosome   | text|
>physicalposition | integer |
>id   | text|
>ctrast   | text|
>
> I am interested in the column maf (which is real):
>
>   canon=# select maf from gallo.sds_seq_reg_shw
>   canon-#  order by maf asc;
>maf
>   -
>0.000659631
>0.000659631
>0.000659631
>0.000659631
> .
>   (truncated in the interest of brevity)
> .
>
> Another way to look at this column is:
>
>   canon=# select maf from gallo.sds_seq_reg_shw
>   canon-#  order by maf desc;
>maf
>   -
>
>
>
> .
>   (truncated in the interest of brevity)
> .
>
> These rows shown are blanks, as far as I can tell.
> But...
>
>   canon=# select count(maf) from gallo.sds_seq_reg_shw;
>count
>   ---
>67284
>   (1 row)
>
>   canon=# select count(maf) from gallo.sds_seq_reg_shw
>   canon-#  where maf ISNULL;
>count
>   ---
>0
>   (1 row)
>
>   canon=#
>   canon=# select count(maf) from gallo.sds_seq_reg_shw
>   canon-#  where maf NOTNULL;
>count
>   ---
>67284
>   (1 row)
>
> My confusion is that if they are real and not null,
> what are they?  How would I construct a query to do
> something like:
>
>  select count(maf)
>from gallo.sds_seq_reg_shw
>   where maf ISBLANK;
>


Try:
SELECT count(*) FROM gallo.sds_seq_reg_shw;
SELECT count(*) FROM gallo.sds_seq_reg_shw WHERE maf IS NULL;
SELECT count(*) FROM gallo.sds_seq_reg_shw WHERE maf IS NOT NULL;

Don't use count(maf), use count(*).

Osvaldo






[SQL] Moving text columns, when it actually is large

2009-07-08 Thread Rob Sargent
I have to restructure some tables, coalescing common elements from three 
tables (sub-classes) into a single table (super-class).  Each source 
table has a text field which actually gets stuffed with a largish (1Mb+) 
blob of xml.


Is there any way to simply, um, er, transplant the pointer rather than 
actually copy the data?


Ever hopeful,
rjs



[SQL] Order of operations in ascii vs archive format (moderately urgent)

2009-07-21 Thread Rob Sargent
Is there a difference in the order of execution between an ascii dump 
and one using the "custom" format?  Or any difference in the general 
operation? 

I need to know if I can rely on the ascii version to tell me what the 
custom format might have done.






Re: [SQL] Order of operations in ascii vs archive format (moderately urgent)

2009-07-21 Thread Rob Sargent

Wonderful news.

I just ran dump and restore against the same production server.  Constraints 
and the absence of drop calls appear to have saved my butt. 



Tom Lane wrote:

Rob Sargent  writes:
  
Is there a difference in the order of execution between an ascii dump 
and one using the "custom" format?  Or any difference in the general 
operation? 



There's not supposed to be.  One standard test on the pg_dump code is
pg_dump >textfile
pg_dump -Fc >dumpfile
pg_restore dumpfile >textfile2
diff textfile textfile2
If these don't produce the same results something is broken.

regards, tom lane
  




Re: [SQL] Order of operations in ascii vs archive format (moderately urgent)

2009-07-21 Thread Rob Sargent
The ascii dump has several CREATE FUNCTION gbtreeN_{in,out} entries, but 
I don't see them in the current (source) database using '\df gbtree*'.  
Using '\df gbt*' I get 111 functions, all of whose names begin with 
'gbt_'.  Have I lost them? The gbtreekeyN types are still there. 


Tom Lane wrote:

Rob Sargent  writes:
  
Is there a difference in the order of execution between an ascii dump 
and one using the "custom" format?  Or any difference in the general 
operation? 



There's not supposed to be.  One standard test on the pg_dump code is
pg_dump >textfile
pg_dump -Fc >dumpfile
pg_restore dumpfile >textfile2
diff textfile textfile2
If these don't produce the same results something is broken.

regards, tom lane
  




Re: [SQL] Detect referential integrity structure

2009-07-28 Thread Rob Sargent

Perhaps another option:

Alter the references to ON DELETE CASCADE as seen here 
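A sketch of recreating a foreign key with ON DELETE CASCADE; the constraint, table, and column names are illustrative, not from the Adempiere schema:

```sql
ALTER TABLE child_table DROP CONSTRAINT child_table_parent_fk;

ALTER TABLE child_table
  ADD CONSTRAINT child_table_parent_fk
  FOREIGN KEY (parent_id) REFERENCES parent_table (id)
  ON DELETE CASCADE;

-- Now deleting a parent row removes its children automatically:
-- DELETE FROM parent_table WHERE id = 42;
```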




Akos Gabriel wrote:

Hi,

I've a big/complex database (Adempiere - www.adempiere.org ) where I'd
like to delete some rows from some tables (delete a client and its
data from the ERP database).
All tables are in one schema (adempiere).
There are some foreign keys/constraints.
I've following options:

- disable all constraints, doing the delete, enable all constraints.
- find out the "right sequence" and do the delete -s.

Does anyone have such an algorythm, or a method of disabling all the
constraints? We have 8.3.7 right now. The solution will be published
into the Adempiere wiki / contributed to the project as well.

Thanks in advance,
Akos Gabriel

  




Re: [SQL] SQL report

2009-07-30 Thread Rob Sargent
I would be curious to know the performance curve for, let's say, 20K, 40K, 
60K, 80K, 100K records.  And what sort of indexing you have, whether 
or not it's clustered, re-built and so on.


One could envision partitioning the status table such that recent 
records were grouped together (on the assumption that they will be most 
frequently "reported").
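An 8.3-era sketch of that partitioning idea using table inheritance; the cutoff value and table names are illustrative:

```sql
-- Child tables carved out of statusrecord by datetime range.
CREATE TABLE statusrecord_recent (
  CHECK (datetime >= 1230000000000)
) INHERITS (statusrecord);

CREATE TABLE statusrecord_old (
  CHECK (datetime < 1230000000000)
) INHERITS (statusrecord);

-- With constraint_exclusion = on, a query filtered on datetime
-- scans only the matching child table.
SET constraint_exclusion = on;
SELECT * FROM statusrecord WHERE datetime >= 1233897527657;
```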


wkipj...@gmail.com wrote:

I have the following scenario.

I have a tracking system. The system will record the status of an 
object regularly, all the status records are stored in one table. And 
it will keep a history of maximum 1000 status record for each object 
it tracks. The maximum objects the system will track is 100,000. Which 
means I will potentially have a table size of 100 million records.


I have to generate a report on the latest status of all objects being 
tracked at a particular point in time, and also I have to allow the user 
to sort and filter on different columns in the status record displayed 
in the report.


The following is a brief description in the status record (they are 
not actual code)


ObjectRecord(
objectId bigint PrimaryKey
desc varchar
)

StatusRecord (
id bigint PrimaryKey
objectId bigint indexed
datetime bigint indexed
capacity double
reliability double
efficiency double
)

I have tried to do the following; it works very well with around 
20,000 objects (the query returns in less than 10s). But when I have 
100,000 objects it becomes very, very slow. (I don't even have the 
patience to wait for it to return; I kill it after 30 mins.)


select * from statusrecord s1 INNER JOIN ( SELECT objectId , 
MAX(datetime) AS msdt FROM statusrecord WHERE startDatetime <= 
1233897527657 GROUP BY objectId ) AS s2 ON ( s1.objectId = s2.objectId 
AND s1.datetime = s2.msdt ) where ( capacity < 10.0 ) order by 
s1.datetime DESC, s1.objectId DESC;
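Not from the thread, but one alternative worth testing is DISTINCT ON, which picks the latest row per objectId in a single pass given a matching index; a sketch (the index name is illustrative):

```sql
CREATE INDEX statusrecord_obj_dt_idx
    ON statusrecord (objectId, datetime DESC);

SELECT *
FROM (
  -- Latest status per object as of the point in time.
  SELECT DISTINCT ON (objectId) *
  FROM   statusrecord
  WHERE  datetime <= 1233897527657
  ORDER  BY objectId, datetime DESC
) latest
WHERE capacity < 10.0
ORDER BY datetime DESC, objectId DESC;
```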


I did try to write a stored procedure like below; for 100,000 objects 
and 1000 status records / object, it returns in around 30 mins.


CREATE OR REPLACE FUNCTION getStatus(pitvalue BIGINT) RETURNS SETOF 
statusrecord AS $BODY$

DECLARE
id VARCHAR;
status statusrecord%ROWTYPE;
BEGIN
FOR object IN SELECT * FROM objectRecord
LOOP
EXECUTE 'SELECT * FROM statusrecord WHERE objectId = ' || 
quote_literal(object.objectId) ||
' AND datetime <= ' || quote_literal(pitvalue) || ' ORDER BY datetime 
DESC'

INTO status;
IF FOUND THEN
RETURN NEXT status;
END IF;
END LOOP;
RETURN;
END
$BODY$ LANGUAGE plpgsql;

Just want to know if anyone has a different approach to my scenario. 
Thanks a lot.


John 




Re: [SQL] Tweak sql result set... ?

2009-07-30 Thread Rob Sargent
I agree.  All clients issue the same sql and deal with it as they will.  
The psql client for example can format the results in various ways (pset 
variations etc).  Your client(s) need(s) to interpret their identical 
results differently.  Doesn't seem to me to be the job of SQL?


Jasmin Dizdarevic wrote:
i think jasen is thinking of manipulating the result set in your 
programming environment, not in the database.
btw from the point of "clean programming" it's a bad idea to integrate 
html-elements directly into your database queries. you're mixing data 
layer and design layer.
 
what do you mean with sometimes?

2009/7/29 Axe mailto:i...@axier.se>>


> > Any ideas on how to achieve this? I would like to let the
original sql
> > code stay original. I can prepare postgres before executing
the sql if
> > this makes it easier to acheive the goal
>
> Have a look at CREATE RULE.
>
> Tim
>
Ok, you mean I could create a rule for the table, then I let the
script go into my "black box",
do the original selects, but get a manipulated result set back, then I
drop the rule so
that the blackbox of scripts can get the not manipulated result set
back?

I need to sometimes get the result "output from query" and
sometimes not
and it would be awesome to get it with the same query, but maybe by
setting
a rule or similar. Best from performance view would be to make
something
more permanent. Is it possible to activate/deactivate a rule?

Axe





--
Mit freundlichen Grüßen

Dizdarevic Jasmin
Sonnenbergstr. 3
6714 Nüziders, AUT

jasmin.dizdare...@gmail.com 
+43 664 411 79 29





Re: [SQL] SQL report

2009-07-31 Thread Rob Sargent
Did you look at the query plans for the various record counts?  That 
might show which index is missing or misinformed :).  I wonder if 
clustering the status table on objectid would help?  This does then 
require maintenance so you might only load it at 75%. 
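If clustering helps, the 8.3 commands would look roughly like this (the index name is assumed; CLUSTER takes an exclusive lock and must be re-run after heavy churn):

```sql
CLUSTER statusrecord USING statusrecord_objectid_idx;
ANALYZE statusrecord;
```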


wkipj...@gmail.com wrote:

Hi Rob,

I have default B-Tree indexes created for each of the indexed columns 
and primary key columns (no multi-column indexes or NULLS FIRST or 
DESC/ASC). I am using PostgreSQL 8.3 with the autovacuum daemon on. I 
assume analyze will be automatically run to collect statistics for use 
by the planner, and there is no maintenance for B-tree indexes once they 
are created. (Please correct me if I am wrong about this.)


I will probably try to partition the status table to group more recent 
status records together to minimize the dataset I am querying.


Thx
John


On Jul 31, 2009 1:16am, Rob Sargent  wrote:
> I would be curious to know the performance curve for let's say 20K, 
40K , 60K, 80K, 100K records. And what sort of indexing you have, 
whether or not it's clustered, re-built and so on.

>
>
>
> One could envision partitioning the status table such that recent 
records were grouped together (on the assumption that they will be 
most frequently "reported").

>
>
>
> wkipj...@gmail.com wrote:
>
>
> I have the following senario.
>
>
>
> I have a tracking system. The system will record the status of an 
object regularly, all the status records are stored in one table. And 
it will keep a history of maximum 1000 status record for each object 
it tracks. The maximum objects the system will track is 100,000. Which 
means I will potentially have a table size of 100 million records.

>
>
>
> I have to generate a report on the latest status of all objects 
being tracked at a particular point in time, and also I have to allow 
user to sort and filter on different columes in the status record 
displayed in the report.

>
>
>
> The following is a brief description in the status record (they are 
not actual code)

>
>
>
> ObjectRecord(
>
> objectId bigint PrimaryKey
>
> desc varchar
>
> )
>
>
>
> StatusRecord (
>
> id bigint PrimaryKey
>
> objectId bigint indexed
>
> datetime bigint indexed
>
> capacity double
>
> reliability double
>
> efficiency double
>
> )
>
>
>
> I have tried to do the following, it works very well with around 
20,000 objects. (The query return in less than 10s) But when I have 
100,000 objects it becomes very very slow. (I don't even have patience 
to wait for it to return I kill it after 30 mins)

>
>
>
> select * from statusrecord s1 INNER JOIN ( SELECT objectId , 
MAX(datetime) AS msdt FROM statusrecord WHERE startDatetime

>
>
> I did try to write a store procedure like below, for 100,000 objects 
and 1000 status records / object, it returns in around 30 mins.

>
>
>
> CREATE OR REPLACE FUNCTION getStatus(pitvalue BIGINT) RETURNS SETOF 
statusrecord AS $BODY$

>
> DECLARE
>
> id VARCHAR;
>
> status statusrecord%ROWTYPE;
>
> BEGIN
>
> FOR object IN SELECT * FROM objectRecord
>
> LOOP
>
> EXECUTE 'SELECT * FROM statusrecord WHERE objectId = ' || 
quote_literal(object.objectId) ||

>
> ' AND datetime
> INTO status;
>
> IF FOUND THEN
>
> RETURN NEXT status;
>
> END IF;
>
> END LOOP;
>
> RETURN;
>
> END
>
> $BODY$ LANGUAGE plpgsql;
>
>
>
> Just wanna to know if anyone have a different approach to my 
senario. Thanks alot.

>
>
>
> John
>
> 




Re: [SQL] Create table command fails with permission denied

2009-08-03 Thread Rob Sargent
Looks to me as though you are not the owner of the schema, nor a superuser, 
nor in a role with permission to create tables in said schema. See the 
DBA if that's not you. If it is, sign on as postgres (superuser) and grant 
yourself some access rights.
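As superuser, the grants might look like this; the role and schema names are illustrative:

```sql
GRANT USAGE, CREATE ON SCHEMA public TO someuser;

-- or hand over ownership of the schema entirely:
ALTER SCHEMA public OWNER TO someuser;
```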




Venkateswara Rao Bondada wrote:


Hi,

I’m new to PostgreSQL, and currently facing an issue with a PostgreSQL 
7.4 database. I’m getting the following error when I tried to create a 
table. Please let me know the steps (with queries) that I should take 
to resolve this issue.


cms=# create table test(id character varying(80));

ERROR: could not create relation "test": Permission denied

Thanks in advance,

Venkat





Re: [SQL] Create table command fails with permission denied

2009-08-04 Thread Rob Sargent

Moderately curious.

I wonder if 'cms' is on a different tablespace? Are there tables in that db?

Btw, I don't have 7.4 so I'm only guessing based on 8.3

rjs



Venkateswara Rao Bondada wrote:

Hi Rob,
I'm using postgres account (which is a superuser in the database) to create table. 
Table creation is working fine in other databases except in this. As postgres user, 
I granted CREATE, USAGE & ALL privileges to postgres account, still it didn't 
work.
Please find the snapshots below where it works fine and also where it doesn't work fine. 
And let me know what rights postgres account is missing to create tables in 
"cms" database.

==
camd=# \c camd postgres
You are now connected to database "camd" as user "postgres".
camd=# create table test(id character varying(80));
CREATE TABLE
camd=# \c cms postgres
You are now connected to database "cms" as user "postgres".
cms=# create table test(id character varying(80));
ERROR:  could not create relation "test": Permission denied
cms=#
==

Thanks,
Venkat


-Original Message-
From: Rob Sargent [mailto:robjsarg...@gmail.com] 
Sent: Tuesday, August 04, 2009 3:33 AM

To: Venkateswara Rao Bondada
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] Create table command fails with permission denied

Looks to me as though you are not the owner of the schema nor superuser 
nor in a role with permission to create tables in said schema. See the 
DBA if it's not you. If it is sign on as postgres (superuser) and grant 
yourself some access rights.




Venkateswara Rao Bondada wrote:
  

Hi,

I'm new to PostgreSQL, and currently facing an issue with PostgreSQL 
7.4 database. I'm getting the following error when tried to create a 
table. Please let me know the steps (with queries) that I should take 
care to resolve this issue.


cms=# create table test(id character varying(80));

ERROR: could not create relation "test": Permission denied

Thanks in advance,

Venkat








Re: [SQL] Determining logically unique entities across many partially complete rows where at least one column matches

2009-08-11 Thread Rob Sargent
Seems to me that if you can safely identify which snippets correspond to 
a given entity, you want a single id for the entity.  An entity-snippet 
relationship seems a must. I would not lean too heavily on a single-table 
solution unless you're considering arrays for openid, email and 
phone.  (And given that one phone number can serve many real people, I 
would be leery of this "identifier.")


Jamie Tufnell wrote:

Hi,

I am faced with a modeling problem and thought I'd see if anyone has run
into something similar and can offer some advice.

Basically my problem domain is cataloguing "snippets of information" about
"entities" which are loosely identified.

Entities can be identified up to 3 different methods (email, phone or
openid.)

Entities can have zero or many emails, phone numbers and openids.  The
only restriction is they must have at least one value in one of those three
columns.


Some sample data:

snippet #1
email: null
phone: +1234567890
openid: j...@myopenid.net
information: This is snippet #1

snippet #2
email: f...@bar.com
phone: null
openid: johnny.name
information: This is snippet #2

At this point snippet #1 and #2 could refer to different entities.

snippet #3
email: b...@baz.com
phone: +1234567890
openid: johnny.name
information: This is snippet #3

But now all three snippets definitely refer to the same entity, as far as
we're concerned:

Entity: 1
OpenIDs: johnny.name, j...@myopenid.net
Phones: +1234567890
Emails: f...@bar.com, b...@baz.com

So as far as modeling this goes, I'm stuck between:

1. Normalizing as usual with some serious triggers to maintain the
   relationships.
2. Just having a snippets table with these fields inline and make these
   inferences at query time.
3. Something in between.
4. Using a document store like CouchDB.

The kinds of queries I need to do right now (which will no doubt change):

* Return all snippets.
* Return all distinct entities.
* Find all id for a distinct entity given a single piece of id.
* Find all snippets for a distinct entity.

To do it in one table, I am thinking something like this:

create table snippets (
  id serial not null primary key,
  email text,
  phone_number text,
  openid text,
  information text not null,
  check (email is not null or
 phone_number is not null or openid is not null)
);

with queries like:

* Find all snippets for one distinct entity, searching by openid:

select * from snippets
where phone_number =
  (select phone_number from snippets where openid = 'j...@myopenid.net')
or email =
  (select email from snippets where openid = 'j...@myopenid.net')
or openid in
  (select openid from snippets
   where phone_number =
(select phone_number from snippets where openid = 'j...@myopenid.net')
   or email =
(select email from snippets where openid = 'j...@myopenid.net'));


Or if I was to model as usual I am thinking something like this:

create table entities (
  id serial not null primary key
);

create table entity_has_email (
  entity_id integer not null
references entities (id) on delete cascade on update cascade,
  email text not null unique
);

create table entity_has_phone_number (
  entity_id integer not null
references entities (id) on delete cascade on update cascade,
  phone_number text not null unique
);

create table entity_has_openid (
  entity_id integer not null
references entities (id) on delete cascade on update cascade,
  openid text not null unique
);

create table snippets (
  id serial not null primary key,
  entity_id integer not null
references entities (id) on delete cascade on update cascade,
  information text not null
);

(followed by a mass of on insert/update/delete triggers)

with queries like:

* Find all snippets for a distinct entity, by one identifying field:

select s.* from snippets s
  join entity_has_email e on s.entity_id = e.id
  join entity_has_phone_number p on s.entity_id = p.id
  join entity_has_openid o on s.entity_id = o.id
where o.openid = 'j...@myopenid.net';

Another option, sort of half way between the two could be:

create table snippets (
  id serial not null primary key,
  entity_id integer not null
references entities (id) on delete cascade on update cascade,
  information text not null
);

create table entities (
  id serial not null primary key,
  email text,
  phone_number text,
  openid text,
  check (email is not null or
 phone_number is not null or openid is not null)
);

* Find all snippets for a distinct entity, by openid = 'j...@myopenid.net'

select * from snippets
where entity_id in (
  select id from entities
  where phone_number =
(select phone_number from entities where openid = 'j...@myopenid.net')
  or email =
(select email from entities where openid = 'j...@myopenid.net')
  or openid in
(select openid from entities
 where phone_number =
   (select phone_number from entities where openid = 'j...@myopenid.net')
 or email =
   (select email from entities where openid = 'j...@myopenid.net') ));

Re: [SQL] mail alert

2009-08-11 Thread Rob Sargent

Denis BUCHER wrote:

Hello,

Jan Verheyden a écrit :
  

I was looking in what way it’s possible to alert via mail when some
conditions are true in a database.



a) If the alert is not "very urgent", i.e. you can alert some minutes
later, I would do it like this :

1. Create a function that returns what you need, most importantly if the
conditions are met

2. Create a script that does something like "SELECT * FROM function()..."

b) If the email should be sent immediately, you could create a perl
function in the database, launched by a TRIGGER and launching an
"external" script...

Therefore, first you have to know the requirements...

Denis

  

see "check_postgres" for nagios-style monitoring
see \o /tmp/alert_data_file
and \! mailx -s "alert" m...@work.com /tmp/alert_data_file




Re: [SQL] mail alert

2009-08-12 Thread Rob Sargent
It looks to me like you want Denis's #2: launch a mailer script from a 
trigger function.  (The function can be in any language you're familiar 
with, including plpgsql, if you wish to take advantage of

   "\! your-mail-script")
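A sketch of Denis's option (b) using the untrusted plperlu language to shell out to a mailer; the table, column, and mail command are illustrative assumptions:

```sql
CREATE OR REPLACE FUNCTION notify_unregistered() RETURNS trigger AS $$
  # $_TD holds the trigger data in plperl; pat_id is an assumed column.
  my $id = $_TD->{new}{pat_id};
  system(qq{echo "unregistered patient $id" | mail -s alert dba\@example.com});
  return;
$$ LANGUAGE plperlu;

CREATE TRIGGER images_alert
AFTER INSERT ON images
FOR EACH ROW EXECUTE PROCEDURE notify_unregistered();
```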

Jan Verheyden wrote:

Hi,

I got a bit stuck... 
I was looking for a solution for option a)


Maybe I'll first explain the situation a bit more:

I have one database for patient registration
Another one for image storage
And a third one for book keeping
A patient should be registered first before the images are stored, so if there 
is a new line in the second database with an id which does not exist yet, it 
has to be notified in the book keeping database.

Now the questions:
	1) Can I do this with an inner join (column subject_id from DB1, pat_id from DB2), given that these are two different databases?
	2) Once it is noted in the book keeping database that a patient is not registered yet, is it best to poll on this column to send a warning, or to use a trigger?


Thanks!!


Jan
-Original Message-
From: Rob Sargent [mailto:robjsarg...@gmail.com] 
Sent: Wednesday, August 12, 2009 3:38 AM

To: Denis BUCHER
Cc: Jan Verheyden; 'pgsql-sql@postgresql.org'
Subject: Re: [SQL] mail alert

Denis BUCHER wrote:
  

Hello,

Jan Verheyden a écrit :
  


I was looking in what way it’s possible to alert via mail when some
conditions are true in a database.

  

a) If the alert is not "very urgent" i.e. you can alter some minutes
later I would do it like this :

1. Create a function that returns what you need, most importantly if the
conditions are met

2. Create a script that does something like "SELECT * FROM function()..."

b) If the email should be sent immediately, you could create a perl
function in the database, launched by a TRIGGER and launching an
"external" script...

Therefore, first you have to know the requirements...

Denis

  


see "check_postgres" for nagios-style monitoring
see \o /tmp/alert_data_file
and \! mailx -s "alert" m...@work.com /tmp/alert_data_file

  




Re: Re[SQL] write multiple joins...

2009-08-21 Thread Rob Sargent
Well, indexing (or lack thereof) could be the real problem, but you could 
try "chaining" the tables:

select * from sale s, taxes t, property p, buyer b
where s.id = t.id and t.id = p.id and p.id = b.id

and see if that (or another combination) changes the query plan appreciably.


(I would have to wonder if all those records are really unique per 
sale.id? E.g. multiple buyers per unit sold, or school and property tax 
in separate records. And wondering further why, if you have sql access, 
you need a single table. Maybe you want a view for convenience, e.g. 
select * from view_of_sales where lower(county) like 'a%'.)
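
For what it's worth, the same chained query in explicit JOIN syntax (equivalent in meaning; with only four tables the planner usually treats the two forms identically):

SELECT *
FROM sale s
JOIN taxes    t ON t.id = s.id
JOIN property p ON p.id = t.id
JOIN buyer    b ON b.id = p.id;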


tripplowe wrote:

Hey Folks,

I have a query that seems like it will never end.  I'm hoping you can help
me rewrite it.

I have 4 tables that contain information about house sales within the state
(sale, taxes, property, buyer).  Each table has an 'id' field that links
each record.  I am just trying to flatten these files so I will have all of
the information in a single file.  Below is my attempt to do so.  Is there a
more efficient way to rewrite the joins?

select * from sale, taxes, property, buyer 
where sale.id = taxes.id and sale.id = property.id and sale.id = buyer.id


I'm using postgresql 8.3 on a Windows 7 machine.  


Thank You,
-Tripp
  


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: re[SQL] solved: WITH RECURSIVE: ARRAY[id] All column datatypes must be hashable

2009-08-27 Thread Rob Sargent
In so much as id-ma-pa is near and dear to my heart, I would really 
appreciate any performance metrics you might be able to share, 
especially size of person table, typical pedigree size and pedigree 
retrieval time (tainted by hardware specs, of course).


Thanks

rjs
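
(As an aside, a minimal sketch of the recursive-path pattern in question, assuming a persons(id, father_id, mother_id) table like the one in the pedigree thread. Plain UNION must deduplicate rows, which requires every column, including the ARRAY path, to be hashable; UNION ALL sidesteps that:)

WITH RECURSIVE descendants AS (
    SELECT id, ARRAY[id] AS path
    FROM persons
    WHERE id = 1
  UNION ALL   -- plain UNION would need to hash the array column to deduplicate
    SELECT p.id, d.path || p.id
    FROM persons p
    JOIN descendants d ON p.father_id = d.id OR p.mother_id = d.id
)
SELECT * FROM descendants;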



rawi wrote:

me again...

I figured it out...

The Error was caused because I used UNION in place of UNION ALL.

I still don't understand why the ARRAY (path) could not be grouped...

Bye
rawi
  


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] [GENERAL] Data audit trail techniques in postgresql

2009-08-27 Thread Rob Sargent

tablelog doesn't appear any more lively than the OPs audittrail2.

Alvaro Herrera wrote:

Nathaniel Smith wrote:

  

What do others use to accomplish this? Do most pg users just write
triggers by hand? Or is there some nice auditing module that Google
just isn't revealing to me?



I think tablelog (to be found in pgfoundry too) is the most commonly
used audit module.
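
For the hand-written route the OP asked about, a minimal sketch of a generic audit trigger; the table and function names are made up, and a real module like tablelog does considerably more:

-- hypothetical audit trail: one AFTER trigger copies every change into a log table
CREATE TABLE mytable_audit (
    audited_at timestamptz NOT NULL DEFAULT now(),
    operation  text NOT NULL,          -- 'INSERT', 'UPDATE' or 'DELETE'
    old_row    mytable,                -- composite-typed copies of the row
    new_row    mytable
);

CREATE OR REPLACE FUNCTION mytable_audit_fn() RETURNS trigger AS $$
BEGIN
    INSERT INTO mytable_audit(operation, old_row, new_row)
    VALUES (TG_OP,
            CASE WHEN TG_OP <> 'INSERT' THEN OLD END,   -- NULL on insert
            CASE WHEN TG_OP <> 'DELETE' THEN NEW END);  -- NULL on delete
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE mytable_audit_fn();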

  


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] [GENERAL] Data audit trail techniques in postgresql

2009-08-27 Thread Rob Sargent



Alvaro Herrera wrote:

Rob Sargent escribió:
  

tablelog doesn't appear any more lively than the OPs audittrail2.



Perhaps, but I have heard of people using it successfully recently,
whereas Nathaniel reported that audittrail2 seems to have obvious bugs.

  

Fair enough.



--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: re[SQL] cursively isolate pedigrees, based only on person_id, mother_id and father_id

2009-08-28 Thread Rob Sargent
On the assumption that you wish to generate the pedigrees for analysis 
or charting, why not perform the recursion in those layers (or their 
supporting software)? This does not require a large number of sql calls, 
since using an in-clause each call will gather one generation (ascending 
or descending) for all new persons retrieved in any iteration. How deep 
are your pedigrees and what is their typical size?
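
A sketch of one such iteration, using the persons table from the original post: fetch the parents and children of the ids collected so far (here 5 and 6), and let the application feed any newly seen ids back in until no new rows come back:

SELECT *
FROM persons
WHERE id IN (SELECT father_id FROM persons WHERE id IN (5, 6)
             UNION
             SELECT mother_id FROM persons WHERE id IN (5, 6))
   OR father_id IN (5, 6)
   OR mother_id IN (5, 6);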



rawi wrote:

Hello!

Given a table "persons", where the individuals are bound to each other only
via the foreign keys mother_id and father_id as self joins on persons, like
that

create table persons (
id serial primary key,
sex character(1),
father_id integer default null references persons(id) on update cascade on
delete set null,
mother_id integer default null references persons(id) on update cascade on
delete set null
);

INSERT INTO persons(id, sex, father_id,mother_id) VALUES(1,'m',null,null);
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(2,'f',null,null);
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(8,'m',null,null);
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(9,'f',null,null);
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(3,'m',1,2);
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(4,'f',8,9);
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(5,'f',3,4);
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(6,'m',3,4);
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(7,'m',8,9);
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(10,'m',null,null);
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(11,'m',10,5);
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(12,'f',7,5); --
consanguinity
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(100,'m',null,null);
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(200,'f',null,null);
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(300,'m',100,200);
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(400,'f',null,null);
INSERT INTO persons(id, sex, father_id,mother_id) VALUES(500,'f',300,400);

These would be the graphs of the 2 families:
http://www.nabble.com/file/p25191664/family1.jpg  
http://www.nabble.com/file/p25191664/family2.jpg 


I hoped to find a recursive SQL or function, which would extract the WHOLE
family of any given person.id from the table with many families.

After failing to accomplish this with a recursive SQL I found on the web an
advice from Celko (???)
http://www.eggheadcafe.com/conversation.aspx?messageid=29498840&threadid=29498808
to better keep away form such things... :(

Did someone gathered a closer expertise to that?

Thank you very much for any hint!
Regards
Rawi
  


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


[SQL] Odd sort behaviour

2009-09-01 Thread Rob Sargent

Since when does "." sort as "nothing at all"?
This select

   select
   distinct u.user_name
   from
   subscriber_user u,
   subscription s,
   subscription_template t
   where
   u.id = s.subscriber_entity_id
   and s.template_id = t.id
   and t.application_id = (select id from application where
   short_name ='books')
   order by u.user_name
   \p\g

over this table def.(for user_name)

\d subscriber_user
 Table "public.subscriber_user"
   +-----------------------+---------+---------------+
   |        Column         |  Type   |   Modifiers   |
   +-----------------------+---------+---------------+
   | id                    | bigint  | not null      |
   | last_name             | text    |               |
   | first_name            | text    |               |
   | user_name             | text    |               |
   | email_address         | text    |               |
   | force_password_change | boolean | default false |
   | title                 | text    |               |
   +-----------------------+---------+---------------+
   Indexes:
   "subscriber_user_pkey" PRIMARY KEY, btree (id)
   "idx__subscriber_users__lower_email_address" UNIQUE, btree
   (lower(email_address))
   "idx__subscriber_users__lower_user_name" UNIQUE, btree
   (lower(user_name))
   Foreign-key constraints:
   "subscriber_user_id_fkey" FOREIGN KEY (id) REFERENCES subscriber(id)


is producing this sorted??? output


   | adrianoha...@hotmail.com   |
   | adx...@show.org.tw |
   | aeche...@mac.com   |
   | a.eck...@gmx.de|
   | aelef...@unina.it  |
   | aeo...@hotmail.com |
   | a.fisched...@t-online.de   |
   | aflores3...@gmail.com  |
   | afr...@advancedneurosurgeons.com   |
   | agave...@comcast.net   |
   | agelsin...@amirsys.com |
   | agis1...@yahoo.gr  |

using this client

   Welcome to psql 8.3.7, the PostgreSQL interactive terminal.

and this server

   show server_version;
   +----------------+
   | server_version |
   +----------------+
   | 8.3.7          |
   +----------------+
   (1 row)

   nsm=# show server_encoding

   +-----------------+
   | server_encoding |
   +-----------------+
   | UTF8            |
   +-----------------+
   (1 row)





--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Odd sort behaviour

2009-09-01 Thread Rob Sargent

How many ways might one accidentally do that, I wonder.

Scott Marlowe wrote:

On Tue, Sep 1, 2009 at 3:01 PM, Rob Sargent wrote:
  

Since when does "." sort as "nothing at all"



Since you set your locale equal to something like en_US instead of C
  


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Odd sort behaviour

2009-09-01 Thread Rob Sargent
OK, I'm waking up now.  My locale is as Scott suspected, en_US.UTF-8, 
and of course my server's too.

I guess I never really left "C" intellectually :) and we have a server 
that thinks SQL-ASCII is cool, and comparing lists of names and emails 
between that server and my local utf-8 one was rather perplexing.

I'm sure there's a lifetime's worth of discussion on the merits of 
treating "." as nothing when sorting.

Sorry for the noise.


Greg Stark wrote:

On Tue, Sep 1, 2009 at 11:31 PM, Rob Sargent wrote:
  

How many ways might one accidentally do that I wonder.



Well most operating system distributions ask you when you install them
what region you're in and use a collation for that region.

In 8.4 you can check what collation a database is set to use with \l
in psql. In 8.3 the entire "cluster" has a single collation which you
can see using "show lc_collate".

You can see how your system's collations work by running sort:

$ LC_ALL=c sort s
a.eck...@gmx.de
a.fisched...@t-online.de
adrianoha...@hotmail.com
adx...@show.org.tw
aeche...@mac.com
aelef...@unina.it
aeo...@hotmail.com
aflores3...@gmail.com
afr...@advancedneurosurgeons.com
agave...@comcast.net
agelsin...@amirsys.com
agis1...@yahoo.gr

$ LC_ALL=en_US sort s
adrianoha...@hotmail.com
adx...@show.org.tw
aeche...@mac.com
a.eck...@gmx.de
aelef...@unina.it
aeo...@hotmail.com
a.fisched...@t-online.de
aflores3...@gmail.com
afr...@advancedneurosurgeons.com
agave...@comcast.net
agelsin...@amirsys.com
agis1...@yahoo.gr

  


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Question

2009-09-02 Thread Rob Sargent

Call nextval first?

Too many quotes?

aymen marouani wrote:
Hi for all, 
What are the possible sources of the SQLState 55000 "OBJECT NOT IN 
PREREQUISITE STATE"? 
The error 55000 was triggered when I executed the following query :
"select currval('"BatchTreatment_batch_treatment_id_seq"');" 

Thanks in advance. 


Cordially.
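
For the record, SQLSTATE 55000 is exactly what currval() raises when nextval() has not yet been called for that sequence in the current session. A minimal sketch against the sequence from the question:

-- currval() is only defined after nextval() has run in this session
SELECT nextval('"BatchTreatment_batch_treatment_id_seq"');
SELECT currval('"BatchTreatment_batch_treatment_id_seq"');  -- now succeeds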


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] how to: refer to select list calculations other places in the calculations.

2009-09-10 Thread Rob Sargent
I might do the substitution, analyse the query and hope to break it up. 
Can it possibly be optimal?
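
One mechanical alternative to substituting the expression everywhere: push the base calculation into a subquery (or, in 8.4+, a CTE) so the alias becomes a real column the outer select can reference. The column names here are illustrative only:

SELECT c, c * d, c * f, c * g
FROM (SELECT a, b, a + b AS c, d, f, g
      FROM some_table) AS s;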


Little, Douglas wrote:
Hi Rob, 
Thanks for the response.
Repeating the base calculation will work but the calculations are stunningly complex and long.  Repeating them will undoubtedly increase editing errors and maintenance problems. 


Wish I had a sample to post.  Last query I analyzed was approx 10k lines and 
printed to over 50 pages.   Yes, it was a single query.  I'm amazed that Cognos 
would generate that.  Basically they generate a pivoted cross-tab from sql.

Thanks
Doug




-Original Message-
From: Rob Sargent [mailto:robjsarg...@gmail.com] 
Sent: Wednesday, September 09, 2009 11:59 PM

To: Little, Douglas
Subject: Re: [SQL] how to: refer to select list calculations other places in 
the calculations.

Would reverting to the base columns work?

select a, b, a+b as c, (a+b)*d, (a+b)*f etc.

Little, Douglas wrote:
  

Hi,

I have very complex sql (generated by cognos) I'm porting from 
Teradata that basically uses column alias in the calculated columns.


Is there any way to do this type of thing in Psql?

Essentially I need to do is refer to a calculated column later in the 
select list (but far more complex - like 50 pages printed out)


sel a,b,a+b as c, c*d, c*f, c*g, c*h, c*I, c*j, c*k,.. --- where c is 
a calculated column used in other calculations.


from 

**Doug Little**

Sr. Data Warehouse Architect | Enterprise Data Management | Orbitz 
Worldwide


500 W. Madison, Suite 1000 Chicago IL 60661| Office 312.260.2588 | Fax 
312.894.5164 | Cell 847-997-5741



  ***douglas.lit...@orbitz.com* <mailto:douglas.lit...@orbitz.com>**__

orbitz.com <http://www.orbitz.com/> | ebookers.com 
<http://www.ebookers.com/> | hotelclub.com <http://www.hotelclub.com/> 
| cheaptickets.com <http://www.cheaptickets.com/> | ratestogo.com 
<http://www.ratestogo.com/> | asiahotels.com <http://www.asiahotels.com/>





--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] CHECK constraint on multiple tables

2009-09-15 Thread Rob Sargent

Mario Splivalo wrote:

Tom Lane wrote:
  

Mario Splivalo  writes:


I have two tables, tableA and tableB:
CREATE TABLE tableA (idA integer primary key, email character varying
unique);
CREATE TABLE tableB (idB integer primary key, email character varying
unique);
  
Now, I want to create check constraint in both tables that would

disallow records to either table where email is 'mentioned' in other table.
  

Have you considered refactoring so there's only one table?



Unfortunately I can't do that, due to the
object-relational-mapper-wrapper-mambo-jumbo.

The only 'logical' idea that I can think of is separating emails to the
third table, and then use UNIQUE constraint on the email field on that
table, and then use FK constraint so that email fields in tables tableA
and tableB points to the email in the table emails.

Mario

  
Is that wrapper hibernate by any chance?  If so you might try adding a 
discriminator column to the single table TL suggested and map each class 
accordingly.
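
Mario's third-table idea can be pushed one step further to actually enforce the exclusion, by tagging each email with its owning table. A sketch with made-up names:

CREATE TABLE emails (
    email varchar PRIMARY KEY,           -- one owner per email, period
    owner char(1) NOT NULL CHECK (owner IN ('A', 'B')),
    UNIQUE (email, owner)                -- target for the composite FKs below
);

CREATE TABLE tableA (
    idA   integer PRIMARY KEY,
    email varchar,
    owner char(1) NOT NULL DEFAULT 'A' CHECK (owner = 'A'),
    FOREIGN KEY (email, owner) REFERENCES emails (email, owner)
);

CREATE TABLE tableB (
    idB   integer PRIMARY KEY,
    email varchar,
    owner char(1) NOT NULL DEFAULT 'B' CHECK (owner = 'B'),
    FOREIGN KEY (email, owner) REFERENCES emails (email, owner)
);

Because email is the primary key of emails, each address has exactly one owner, so it can appear in tableA or tableB but never both.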


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Hibernate, web application and only one sequence for all primary keys

2009-09-17 Thread Rob Sargent
Above all, do not fret about whether or not it is "cool to lose" some 
ids.  There are plenty of integers; the ids need not be consecutive.  I 
don't think Grails requires a single sequence source and I know 
hibernate does not.  Hibernate will allow one to inject any sequence/id 
generator you wish to use (including one which generates UUIDs).


rawi wrote:

Leo Mannhart wrote:
  

Caveat: If you use the standard sequence generator in hibernate, it is
not using the postgres-sequence in the "usual" manner. hibernate itself
caches 50 ID's as sequence numbers by default. This means, hibernate
only does a select on the database sequence every 50 numbers. it
multyplies the database sequence by 50 to get the "real" sequence
number. it generates the sequence numbers in blocks of 50 numbers or
according to the sequence cache size.
That said, you would probably not see any performance bottlenecks
because of the sequence number generator in the database, even with
thousands of inserts per second.




Hi Leo, thank you for the explanation!

I don't know if it is that cool to lose up to 50 IDs on each session-end of
Hibernate...
And what do you suppose it would happen, if I set the cache size of
Hibernate's own sequence (after generation) by hand to 1 instead of 50? I
wouldn't need tausends of inserts per second...

Kind regards, Rawi

  


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Working slow

2009-09-21 Thread Rob Sargent

Judith Altamirano wrote:
hello every body, I'm having a data base in a point of sale that is 
getting frozen, I already have run a vacuum -z -d to reindex the data 
base and nothing happens.. Some suggestions to speed the process, 
Do you guys think that the data base is nearly to broke?



As well as the good points Scott Marlowe raised: does the application 
perform well immediately after a reboot and then slow down, or do all 
queries take longer than they should (and by how much)?

Do you have the sql for typical queries, and can you run them with 
EXPLAIN ANALYZE?

PS. (Based on the post time vs clock time, I suspect your machine's 
clock is out of sync.)






--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] selecting latest record

2009-09-22 Thread Rob Sargent
Let's say there's an index on the date column: does the where-clause 
approach necessarily outperform the distinct on version? Hoping the OP 
has enough data to make EXPLAIN ANALYZE useful.
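
The where-clause approach I have in mind would look roughly like this (a sketch against the same price table):

SELECT p.id_product, p.price
FROM price p
WHERE p.datum = (SELECT max(datum)
                 FROM price p2
                 WHERE p2.id_product = p.id_product);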


A. Kretschmer wrote:

In response to Louis-David Mitterrand :
  

Hi,

I have a simple table 

price(id_product, price, date) 


which records price changes for each id_product. Each time a price
changes a new tuple is created.

What is the best way to select only the latest price of each id_product?



There are several ways to do that, for instance with DISTINCT ON (only
postgresql):

test=*# select * from price ;
 id_product | price |   datum
+---+
  1 |10 | 2009-09-01
  1 |12 | 2009-09-10
  2 |11 | 2009-09-10
  2 | 8 | 2009-09-13
(4 rows)

test=*# select distinct on (id_product) id_product, price from price order by 
id_product, datum desc;
 id_product | price
+---
  1 |12
  2 | 8
(2 rows)

Andreas
  


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Data integration tool in Chinese?

2009-09-22 Thread Rob Sargent
Are you asking that all strings be stored in the other three languages 
as part of (potentially many-master) replication?


hfdabler wrote:
Hello to all, 


Being in a pretty much international company, I have come here to ask a few
things about ETL tools and their different languages. 


We have offices in the US, in Europe (Italy, France) and in China. We think
English is fine but our European team and Chinese team especially would like
to get software in their language. 


What we are trying to find is an ETL program to perform data integration,
data synchronization and deduplication on our database in French, Italian
and especially in Chinese. 


Thanks for your help.
  



--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] reading last inserted record withoud any autoincrement field

2009-10-04 Thread Rob Sargent

Osvaldo Kussama wrote:

2009/10/4 mohammad qoreishy 
  

How can get last inserted record in a table without any autoincrement filed?
I need to  frequently fetch the last inserted record.
If I must use the "Cursor" please explain your solution.





RETURNING clause?
http://www.postgresql.org/docs/current/interactive/sql-insert.html

Osvaldo

  
I took the OP to mean last insert as in randomly in the past, not as 
part of the current transaction.  My fear is the OP's schema has no way of 
identifying time-of-insert, nor a monotonically increasing record id, and 
he is hoping postgres has some internal value that will return the most 
recently inserted record. Without a table definition it's hard to say.
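
If that fear is right, the cheapest fix is to start recording insert time, after which the question becomes answerable. A sketch with a hypothetical table t:

ALTER TABLE t ADD COLUMN inserted_at timestamptz NOT NULL DEFAULT now();

-- from then on, the most recently inserted row is simply
SELECT * FROM t ORDER BY inserted_at DESC LIMIT 1;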


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Efficiency in Multiple Queries of Same Table in a PL/PgSQL Function

2009-10-16 Thread Rob Sargent
I don't see anything in the assignment statements (sigma_* :=) which 
would prevent one from doing all three of them within a single for 
loop.  In fact, written as is there's some chance the values of the 
sigma_*s might change between repeated calls to the function since there 
is no explicit ordering of the rows returned from table blah.  Putting 
all the assignments into a single select from blah would at least say 
that the sigma values are from the same dataset per run.



As to efficiency in general, I would expect the entire table (~50 rows) 
to be entirely in memory after the first select, but your plan triples 
the time spent in the loop.  This expense would likely only be 
noticeable if the function itself is called /lots/.
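
Concretely, the three loops fold into one pass like this (same expressions as the original, just accumulated together):

FOR term IN SELECT * FROM blah LOOP
    sigma_l := sigma_l + (RANDOM() * 100) * (term.i * term.i) +
               RANDOM() * (term.j * term.j) + term.k;
    sigma_b := sigma_b + (RANDOM() * 53) * (term.i * term.i) +
               (RANDOM() * 5) * (term.j * term.j) + term.k;
    sigma_r := sigma_r + 96.232234 * (term.i * term.i) +
               0.32322325 * (term.j * term.j) + term.k;
END LOOP;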


Gary Chambers wrote:

All...

In the poly_example function below, I am emulating an actual
requirement by querying the same table three (3) times in order to
derive a solution to a problem.  Is this the best or most efficient
and effective way to implement this?  The table (which consists of
only five (5) FLOAT8 columns) I'm querying contains less than 50 rows.
 Thanks in advance for any insight or criticisms you offer.

CREATE OR REPLACE FUNCTION poly_example() RETURNS SETOF FLOAT8 AS
$poly_example$
DECLARE
term blah%ROWTYPE;
sigma_l FLOAT8 := 0.0;
sigma_b FLOAT8 := 0.0;
sigma_r FLOAT8 := 0.0;

BEGIN
FOR term in SELECT * FROM blah LOOP
sigma_l := sigma_l + (RANDOM() * 100) * (term.i * term.i) +
RANDOM() * (term.j * term.j) + term.k;
END LOOP;

FOR term in SELECT * FROM blah LOOP
sigma_b := sigma_b + (RANDOM() * 53) * (term.i * term.i) +
(RANDOM() * 5) * (term.j * term.j) + term.k;
END LOOP;

FOR term in SELECT * FROM blah LOOP
sigma_r := sigma_r + 96.232234 * (term.i * term.i) +
0.32322325 * (term.j * term.j) + term.k;
END LOOP;

RETURN NEXT sigma_l + sigma_b + sigma_r;
END;
$poly_example$ LANGUAGE plpgsql STRICT IMMUTABLE;

-- Gary Chambers

/* Nothing fancy and nothing Microsoft! */

  



--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Efficiency in Multiple Queries of Same Table in a PL/PgSQL Function

2009-10-17 Thread Rob Sargent
But if I read the OP correctly, the sigma_*s are in fact accumulated 
additively over the rows of blah: "sigma_* := sigma_* + ..."


matthias schoeneich wrote:

Hi,

as you don't seem to need the sigma_*'s, you could calc the whole 
result with one query using:


CREATE OR REPLACE FUNCTION poly_example2() RETURNS SETOF FLOAT8 AS
$poly_example$
DECLARE
f_result   FLOAT8 := 0.0;
i_rowcount INT:= 0  ;

BEGIN
  SELECT sum((RANDOM() * 100 ) * (term.i * term.i) + RANDOM() * 
(term.j * term.j) + term.k) +
 sum((RANDOM() *  53 ) * (term.i * term.i) +(RANDOM()* 5) * 
(term.j * term.j) + term.k) +
 sum(96.232234 * (term.i * term.i) + 0.32322325   * 
(term.j * term.j) + term.k) ,

 count(*)
INTO f_result  ,
 i_rowcount
   FROM blah AS term;

  IF i_rowcount > 0 THEN
RETURN NEXT f_result;
  ELSE
RETURN NEXT 0;
  END IF;
END;
$poly_example$ LANGUAGE plpgsql STRICT IMMUTABLE;

I've just put it in your plpgsql body to handle the case where table 
blah contains no rows.


Matthias

Rob Sargent schrieb:
I don't see anything in the assignment statements (sigma_* :=) which 
would prevent one from doing all three of them within a single for 
loop.  In fact, written as is there's some chance the values of the 
sigma_*s might change between repeated calls to the function since 
there is no explicit ordering of the rows returned from table blah.  
Putting all the assignments into a single select from blah would at 
least say that the sigma values are from the same dataset per run.



As to efficiency in general, I would expect the entire  table (~50 
rows) would be entirely in memory after the first select, but you 
plan triples the time in the loop.  This expense would likely only be 
noticeable if the function itself is called /lots/.


Gary Chambers wrote:

All...

In the poly_example function below, I am emulating an actual
requirement by querying the same table three (3) times in order to
derive a solution to a problem.  Is this the best or most efficient
and effective way to implement this?  The table (which consists of
only five (5) FLOAT8 columns) I'm querying contains less than 50 rows.
 Thanks in advance for any insight or criticisms you offer.

CREATE OR REPLACE FUNCTION poly_example() RETURNS SETOF FLOAT8 AS
$poly_example$
DECLARE
term blah%ROWTYPE;
sigma_l FLOAT8 := 0.0;
sigma_b FLOAT8 := 0.0;
sigma_r FLOAT8 := 0.0;

BEGIN
FOR term in SELECT * FROM blah LOOP
sigma_l := sigma_l + (RANDOM() * 100) * (term.i * term.i) +
RANDOM() * (term.j * term.j) + term.k;
END LOOP;

FOR term in SELECT * FROM blah LOOP
sigma_b := sigma_b + (RANDOM() * 53) * (term.i * term.i) +
(RANDOM() * 5) * (term.j * term.j) + term.k;
END LOOP;

FOR term in SELECT * FROM blah LOOP
sigma_r := sigma_r + 96.232234 * (term.i * term.i) +
0.32322325 * (term.j * term.j) + term.k;
END LOOP;

RETURN NEXT sigma_l + sigma_b + sigma_r;
END;
$poly_example$ LANGUAGE plpgsql STRICT IMMUTABLE;

-- Gary Chambers

/* Nothing fancy and nothing Microsoft! */

  








--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Random Unique Id

2009-10-20 Thread Rob Sargent


Nahuel Alejandro Ramos wrote:
> Hi all,
>   I was searching for a sequence (for serials) that let me use a random
> unique number ID on a Primary Key or a simple index.
>   I have not found a solution so I have done it by myself. I would like
> to share it so here it is:
>  
> -- 
> -- Create language "plpgsql"
> -- 
> CREATE LANGUAGE plpgsql;
> 
> -- 
> -- Table structure for "public"."tarjeta"
> -- 
> drop table "public"."tarjeta";
> CREATE TABLE "public"."tarjeta"(
> "idtarjeta" int8 NOT NULL DEFAULT randomuniqueidtarjeta(),
> "fechaemision" timestamp NOT NULL DEFAULT now(),
> "descripcion" varchar(255) ,
> PRIMARY KEY ("idtarjeta")
> )  WITHOUT OIDS;
> 
> -- 
> -- Definition of function  "randomuniqueidtarjeta"
> -- 
> 
> CREATE OR REPLACE FUNCTION randomUniqueIdTarjeta() RETURNS bigint AS $$
> 
> DECLARE
> 
> -- SET THE KEY SIZE (IN CHARACTERS)
> idSize constant  integer := 10;
> 
> sizeMultiplicator constant bigint := 10 ^ idSize;
> 
> loopLimit  bigint := sizeMultiplicator * 4;
> randomNumber bigint;
> canIUseIt boolean := false;
> 
> BEGIN
> 
> -- LOOP UNTIL FIND AN UNIQUE RANDOM NUMBER OR FILL THE LOOP LIMIT
> WHILE ( not ( canIUseIt ) and ( loopLimit > 0) ) LOOP
> 
> -- CALCULATE A TEN DIGITS RANDOM NUMBER
> randomNumber := CAST ( ( random() * sizeMultiplicator ) AS bigint );
> 
> -- VALIDATE THAT THE NUMBER WON'T START WITH 0
> IF ( (randomNumber >= sizeMultiplicator / 10 ) and (
> randomNumber < sizeMultiplicator ) ) THEN
> 
> -- CHECK THAN RANDOM NUMBER NOT EXISTS ON TABLE
> -- IMPORTANT!!! CHANGE WITH TABLE AND FIELD TO SEARCH FOR A
> DUPLICATATION
> PERFORM idtarjeta FROM tarjeta WHERE idtarjeta = randomNumber;
> IF NOT FOUND THEN
> canIUseIt = true;
> END IF;
> 
> END IF;
> 
> loopLimit = loopLimit - 1;
> 
> END LOOP;
> 
> -- IF LOOP ENDS WITH A UNIQUE RANDOM NUMBER RETURNS THE NUMBER
> CONCATENATED WITH A VERIFICATION NUMBER  ELSE RAISE AN EXCEPTION
> IF ( canIUseIt ) THEN
> RETURN CAST ( ( CAST ( randomNumber AS text ) || CAST (
> verificationNumber( CAST (randomNumber AS text ) ) AS text  ) ) AS 
> bigint )  ;
> ELSE
> RAISE EXCEPTION 'Could not calculate a Random Unique ID on table
> Tarjeta.';
> END IF;
> 
> END;
> $$ LANGUAGE plpgsql;
> 
> -- 
> -- Definition of function  "randomuniqueidtarjeta"
> -- 
> 
> CREATE OR REPLACE FUNCTION verificationNumber(id text) RETURNS integer AS $$
> DECLARE
> 
> auxValue integer := 0;
> verifyArray constant integer[] :=ARRAY [5,4,3,2,7,6,5,4,3,2,1];
> verificationNumber integer := 0;
> 
> BEGIN
> 
> -- ASK FOR ID LENGTH EQUAL TO 10 DIGITS. THE ALGORITHM WORKS ONLY FOR
> IF ( LENGTH( id ) <> 10 ) THEN
> 
> RAISE EXCEPTION 'Could not calculate a verification number. The
> ID must have 10 digits.';
> 
> ELSE
> 
> -- LOOP ON THE TEN DIGITS AND MULTIPLY WITH THE SAME POSITION ON
> virifyArray
> FOR digit IN 1..10 LOOP
> auxValue = auxValue + ( CAST(SUBSTR(id, digit, 1) AS int) *
> verifyArray[digit] );
> END LOOP;
> 
> -- CALCULATE THE VERIFICATION NUMBER
> verificationNumber = 11 - (auxValue % 11);
> 
> -- REPLACE THE TWO DIGITS  VERIFICATION NUMBER
> IF( verificationNumber = 11 ) THEN
> RETURN 0;
> ELSEIF ( verificationNumber = 10 ) THEN
> RETURN 9;
> ELSE
> RETURN verificationNumber;
> END IF;
> 
> END IF;
> 
> END;
> $$ LANGUAGE plpgsql;
> 
> -- 
> -- INSERTs to probe the functions
> -- 
> INSERT INTO tarjeta (descripcion) VALUES ('Tarjeta 1');
> INSERT INTO tarjeta (descripcion) VALUES ('Tarjeta 2');
> INSERT INTO tarjeta (descripcion) VALUES ('Tarjeta 3');
> INSERT INTO tarjeta (descripcion) VALUES ('Tarjeta 4');
> INSERT INTO tarjeta (descripcion) VALUES ('Tarjeta 5');
> INSERT INTO tarjeta (descripcion) VALUES ('Tarjeta 6');
> INSERT INTO tarjeta (descripcion) VALUES ('Tarjeta 7');
> INSERT INTO tarjeta (descripcion) VALUES ('Tarjeta 8');
> INSERT INTO tarjeta (descripcion) VALUES ('Tarjeta 9');
> INSERT INTO tarjeta (descripcion) VALUES ('Tarjeta 10');
> 
> 
>   It is my first post so sorry about the format of the SQL Dump and
> sorry about my english.
>   Note: there is a second function that calculate a verification
> number like an argentinian code called CUIL (only available for 10
> digits numbers)
>   Regards
> 
> Nahuel Alejandro Ramos.
> 

You didn't like UUID?
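
For reference, the UUID route needs no collision loop at all. A sketch assuming the contrib uuid-ossp module is installed:

CREATE TABLE tarjeta (
    idtarjeta    uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    fechaemision timestamp NOT NULL DEFAULT now(),
    descripcion  varchar(255)
);

(Not "only numbers", granted, but unique by construction.)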

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.or

Re: [SQL] Random Unique Id

2009-10-20 Thread Rob Sargent
Suit yourself, of course, but the numbers on my credit cards are far,
far from random :)


Nahuel Alejandro Ramos wrote:
> Yes. I looked this solution but it is not a "only numbers" ID. I would
> like a random unique "number" Id. For example: generate a credit number
> randomly (like the example I post).
> I used to insert an MD5 field but this time I need "only numbers" Id.
> Regards...
> 
>   Nahuel Alejandro Ramos.
> 
> 
> On Tue, Oct 20, 2009 at 4:53 PM, Rob Sargent  <mailto:robjsarg...@gmail.com>> wrote:
> 
> 
> 
> Nahuel Alejandro Ramos wrote:
> > Hi all,
> >   I was searching for a sequence (for serials) that let me use a
> random
> > unique number ID on a Primary Key or a simple index.
> >   I have not found a solution so I have done it by myself. I would
> like
> > to share it so here it is:
> >
> > -- 
> > -- Create language "plpgsql"
> > -- 
> > CREATE LANGUAGE plpgsql;
> >
> > -- 
> > -- Table structure for "public"."tarjeta"
> > -- 
> > drop table "public"."tarjeta";
> > CREATE TABLE "public"."tarjeta"(
> > "idtarjeta" int8 NOT NULL DEFAULT randomuniqueidtarjeta(),
> > "fechaemision" timestamp NOT NULL DEFAULT now(),
> > "descripcion" varchar(255) ,
> > PRIMARY KEY ("idtarjeta")
> > )  WITHOUT OIDS;
> >
> > -- 
> > -- Definition of function  "randomuniqueidtarjeta"
> > -- 
> >
> > CREATE OR REPLACE FUNCTION randomUniqueIdTarjeta() RETURNS bigint
> AS $$
> >
> > DECLARE
> >
> > -- SET THE KEY SIZE (IN CHARACTERS)
> > idSize constant  integer := 10;
> >
> > sizeMultiplicator constant bigint := 10 ^ idSize;
> >
> > loopLimit  bigint := sizeMultiplicator * 4;
> > randomNumber bigint;
> > canIUseIt boolean := false;
> >
> > BEGIN
> >
> > -- LOOP UNTIL FIND AN UNIQUE RANDOM NUMBER OR FILL THE LOOP LIMIT
> > WHILE ( not ( canIUseIt ) and ( loopLimit > 0) ) LOOP
> >
> > -- CALCULATE A TEN DIGITS RANDOM NUMBER
> > randomNumber := CAST ( ( random() * sizeMultiplicator ) AS
> bigint );
> >
> > -- VALIDATE THAT THE NUMBER WON'T START WITH 0
> > IF ( (randomNumber >= sizeMultiplicator / 10 ) and (
> > randomNumber < sizeMultiplicator ) ) THEN
> >
> > -- CHECK THAN RANDOM NUMBER NOT EXISTS ON TABLE
> > -- IMPORTANT!!! CHANGE WITH TABLE AND FIELD TO SEARCH
> FOR A
> > DUPLICATATION
> > PERFORM idtarjeta FROM tarjeta WHERE idtarjeta =
> randomNumber;
> > IF NOT FOUND THEN
> > canIUseIt = true;
> > END IF;
> >
> > END IF;
> >
> > loopLimit = loopLimit - 1;
> >
> > END LOOP;
> >
> > -- IF LOOP ENDS WITH A UNIQUE RANDOM NUMBER RETURNS THE NUMBER
> > CONCATENATED WITH A VERIFICATION NUMBER  ELSE RAISE AN EXCEPTION
> > IF ( canIUseIt ) THEN
> > RETURN CAST ( ( CAST ( randomNumber AS text ) || CAST (
> > verificationNumber( CAST (randomNumber AS text ) ) AS text  ) ) AS
> > bigint )  ;
> > ELSE
> > RAISE EXCEPTION 'Could not calculate a Random Unique ID on
> table
> > Tarjeta.';
> > END IF;
> >
> > END;
> > $$ LANGUAGE plpgsql;
> >
> > -- 
> > -- Definition of function  "randomuniqueidtarjeta"
> > -- 
> >
> > CREATE OR REPLACE FUNCTION verificationNumber(id text) RETURNS
> integer AS $$
> > DECLARE
> >
> > auxValue integer := 0;
> > verifyArray constant integer[] :=ARRAY [5,4,3,2,7,6,5,4,3,2,1];
> > verificationNumber integer := 0;
> >
> > BEGIN
> >
> > -- ASK FOR ID LENGTH EQUAL TO 10 DIGITS. THE ALGORITHM WORKS
> ONLY FOR
> > IF ( LENGTH( id ) <> 10 ) T

Re: [SQL] report generation from table.

2009-10-21 Thread Rob Sargent


A. Kretschmer wrote:
> In response to sathiya psql :
>> Hi All,
>>
>> I have been searching for, Preparing report from a huge table.
>>
>> Queries am executing now are,
>> SELECT count(*) from HUGE_TBL where cond1, cond2;
>> SELECT count(*) from HUGE_TBL where cond1, cond3;
>> --- like this i have different conditions(10-15 different things ) and
>> different amount of conditions ( 2 - 5 condition ).
>>
>> As that is a very huge table, it took time to execute each query ( 2 min ). 
>> And
>> finally it takes 15 times that ( 30 min ).  Is there anyway to do the above
>> report efficiently ?
>>
>> Such executing the query only once, and getting all different outputs 
>> required
>> ? or fetching result from some cache ?
> 
> select sum(case when  and  then 1 else 0 end) as query1,
> sum(case when  and  then 1 else 0 end) as query2, ... from
> table;
> 
> 
> It forces a whole seq. scan, but only once.
> 
> 
> Regards, Andreas


Probably should check (EXPLAIN ANALYZE) the various queries separately to
see if any of them scan the huge table.  If one does scan the table, then
Andreas's plan should be fine.  However, it's possible that your
conditions are all hitting indexes and not scanning, in which case you
may be better off as is.
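
Andreas's single-pass idea, written out with concrete filters.  The
column names and literal values here (status, region, 'open', etc.) are
made up for illustration, since the thread elides the real conditions;
substitute your actual cond1/cond2/cond3:

```sql
-- One sequential scan instead of 10-15 separate ones: each report
-- condition becomes a conditional sum over the same pass of the table.
SELECT
    sum(CASE WHEN status = 'open'   AND region = 'EU' THEN 1 ELSE 0 END) AS query1,
    sum(CASE WHEN status = 'open'   AND region = 'US' THEN 1 ELSE 0 END) AS query2,
    sum(CASE WHEN status = 'closed'                   THEN 1 ELSE 0 END) AS query3
FROM huge_tbl;
```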

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] slow count(CASE) query

2009-10-29 Thread Rob Sargent


Grant Masan wrote:
> Hi all, 
> 
> I have this kind of query that I need to do, yes my query is giving
> right answers now but it is long and slow. I am now asking you that if 
> you have another solution for my query to make that more smarter ! Hope
> you can help me with this ! 
> 
> 
> select '000100' as length, sum(ship1) as ship1 ,sum(ship2) as
> ship2,sum(ship3) as ship3,sum(ship4) as ship4,sum(ship5) as ship5 FROM 
> (select 
> count(CASE WHEN (type between 40 and 49)  THEN 1 ELSE NULL END) as ship1, 
> count(CASE WHEN (type between 60 and 69) THEN 1 ELSE NULL END) as ship2, 
> count(CASE WHEN (type between 70 and 79) THEN 1 ELSE NULL END) as ship3, 
> count(CASE WHEN (type between 80 and 89)  THEN 1 ELSE NULL END) as ship4, 
> count(CASE WHEN (type >90) THEN 1 ELSE NULL END) as ship5 
> FROM school_proj_boat where length <100 
> GROUP BY type 
> ORDER BY type) as koo 
> 
> UNION ALL 
> 
> select '100200' as length, sum(ship1) as ship1 ,sum(ship2) as
> ship2,sum(ship3) as ship3,sum(ship4) as ship4,sum(ship5) as ship5 FROM 
> (select 
> count(CASE WHEN (type between 40 and 49)  THEN 1 ELSE NULL END) as ship1, 
> count(CASE  WHEN (type between 60 and 69) THEN 1 ELSE NULL END) as ship2, 
> count(CASE WHEN (type between 70 and 79) THEN 1 ELSE NULL END) as ship3, 
> count(CASE  WHEN (type between 80 and 89)  THEN 1 ELSE NULL END) as ship4, 
> count(CASE  WHEN (type >90) THEN 1 ELSE NULL END) as ship5 
> FROM school_proj_boat where length between 100 and 200 
> GROUP BY type 
> ORDER BY type) as koo 
> 
> UNION ALL 
> 
> select '200300' as length, sum(ship1) as ship1 ,sum(ship2) as
> ship2,sum(ship3) as ship3,sum(ship4) as ship4,sum(ship5) as ship5 FROM 
> (select 
> count(CASE WHEN (type between 40 and 49)  THEN 1 ELSE NULL END) as ship1, 
> count(CASE  WHEN (type between 60 and 69) THEN 1 ELSE NULL END) as ship2, 
> count(CASE WHEN (type between 70 and 79) THEN 1 ELSE NULL END) as ship3, 
> count(CASE  WHEN (type between 80 and 89)  THEN 1 ELSE NULL END) as ship4, 
> count(CASE  WHEN (type >90) THEN 1 ELSE NULL END) as ship5 
> FROM school_proj_boat where length between 200 and 300 
> GROUP BY type 
> ORDER BY type) as koo 
> 
> UNION ALL 
> 
> select '300999' as length, sum(ship1) as ship1 ,sum(ship2) as
> ship2,sum(ship3) as ship3,sum(ship4) as ship4,sum(ship5) as ship5 FROM 
> (select 
> count(CASE WHEN (type between 40 and 49)  THEN 1 ELSE NULL END) as ship1, 
> count(CASE  WHEN (type between 60 and 69) THEN 1 ELSE NULL END) as ship2, 
> count(CASE WHEN (type between 70 and 79) THEN 1 ELSE NULL END) as ship3, 
> count(CASE  WHEN (type between 80 and 89)  THEN 1 ELSE NULL END) as ship4, 
> count(CASE  WHEN (type >90) THEN 1 ELSE NULL END) as ship5 
> FROM school_proj_boat where length >300 
> GROUP BY type 
> ORDER BY type) as koo


Not thrilled with the "coded" length in the returned record, but you
could CASE that too and then you would only read the data once.

If you need the explicit length range in the final result you could do

select floor(length / 100) * 100 as low_length,
   (floor(length / 100) + 1) * 100 as high_length
...
group by low_length, high_length, type

You would of course get more rows, for ranges 400-500, 500-600 etc., but
again it all happens in a single read.

And temp tables can be your friend too!
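
For what it's worth, here is one way the whole report might collapse into
a single scan by CASE-ing the length buckets as well.  Untested, and note
it assigns boundary rows (length = 100, 200, 300) to exactly one bucket,
whereas the original overlapping BETWEENs counted them in two:

```sql
SELECT CASE WHEN length < 100                   THEN '000100'
            WHEN length >= 100 AND length < 200 THEN '100200'
            WHEN length >= 200 AND length < 300 THEN '200300'
            ELSE '300999' END                   AS length,
       count(CASE WHEN type BETWEEN 40 AND 49 THEN 1 END) AS ship1,
       count(CASE WHEN type BETWEEN 60 AND 69 THEN 1 END) AS ship2,
       count(CASE WHEN type BETWEEN 70 AND 79 THEN 1 END) AS ship3,
       count(CASE WHEN type BETWEEN 80 AND 89 THEN 1 END) AS ship4,
       count(CASE WHEN type > 90               THEN 1 END) AS ship5
FROM school_proj_boat
GROUP BY 1
ORDER BY 1;
```

count(CASE ... THEN 1 END) counts only matching rows (the ELSE NULL is
implicit), so the inner GROUP BY type / outer sum() layer disappears.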



Re: [SQL] Sqldf - error message

2009-11-20 Thread Rob Sargent
I think you (or I) might misunderstand TL's comment.  The SQL is
probably being sent by some late-in-the-client-side-game driver with
names quoted in double-quote marks, but the database does not have the
column names etc. in mixed case.  Can you connect directly to the
postgres server (psql -h dbservername -U dbusername) to see the
definition of the tables in question (\d dbtablename)?

Marvelde, Luc te wrote:
> Nothing is wrong with the data... Everything is there in the dataframe. All 
> the names are case sensitive, but the way it is spelled is correct So 
> that cant be the problem
> 
> Luc 
> 
> -Original Message-
> From: Pavel Stehule [mailto:pavel.steh...@gmail.com] 
> Sent: vrijdag 20 november 2009 15:38
> To: Marvelde, Luc te
> Cc: pgsql-sql@postgresql.org
> Subject: Re: [SQL] Sqldf - error message
> 
> Hello
> 
> 2009/11/20 Marvelde, Luc te :
>> Hi All!
>>
>> I just discovered sqldf and im very enthousiastic, as I am a big fan 
>> of R and I often get frustrated using Access for datamanagement.
>>
>> I tried running some queries from Access in R, and it works very well.
>> However, I have problems with some queries.
>>
>> If I run this SQL query:
>>
>>> sqldf("SELECT
>> + dbo_tbl_Terrein.RingCentraleNaam,
>> + dbo_tbl_Broedsels.BroedselID
>> + FROM ((dbo_tbl_BroedselLocatie
>> + INNER JOIN dbo_tbl_Broedsels ON dbo_tbl_BroedselLocatie.BroedselID =
>> dbo_tbl_Broedsels.BroedselID)
>> + INNER JOIN dbo_tbl_Nestkasten ON dbo_tbl_BroedselLocatie.NestkastID 
>> + =
>> dbo_tbl_Nestkasten.NestkastID)
>> + INNER JOIN dbo_tbl_Terrein ON dbo_tbl_Nestkasten.TerreinNummer =
>> dbo_tbl_Terrein.TerreinNummer
>> + WHERE (((dbo_tbl_Terrein.RingCentraleNaam)='Oosterhout a/d 
>> + Waal'));")
>>
>> I get the following message:
>>
>> Error in sqliteExecStatement(con, statement, bind.data) :
>>   RS-DBI driver: (error in statement: no such column:
>> dbo_tbl_Broedsels.BroedselID)
>>
>> ( I also tried removing the 'dbo_tbl_'  part of the dataframe names, 
>> but still the same problem occurred)
> 
> I think you have a problem with your data. Probably some creation script 
> was broken, or something similar. The message is clear: your database does 
> not have the column BroedselID. Recheck your query, please. I'm not sure - 
> maybe SQLite identifiers are case sensitive, or are defined as case sensitive.
> 
> Regards
> Pavel Stehule
> 
>> Now ofcoarse I looked on google, because I was sure someone would have 
>> had the same problem.. And yes, I found this website...
>>
>> http://code.google.com/p/sqldf/
>>
>> They say:
>> This happens when you try to use variable with a dot in it (as dots 
>> have special meaning to SQL) or if you try to use SQL92 reserved keywords.
>> SQLite/RSQLite replaces dots with underscores and changes keywords 
>> words so that they can be used. The keywords in question can be found 
>> by entering the following at the R command line:
>>
>> .SQL92Keywords
>>
>> In this list of keywords, i cannot find any word that is close to the 
>> error... So, i was wondering if someone knows how to solve it. 
>> preferably i do not want to change all variable names, as these names 
>> come from the 'big main database' of the institute.
>>
>> The site also says that SQLite/RSQLite can fix the problem... but then 
>> when i look in their help, i cannot see how i can easily use those 
>> commands...
>>
>> Anyone?
>>
>> Kind regards,
>>
>> Luc te Marvelde
>> New member :)
>>
>>
>>
>>
> 



Re: [SQL] 'image' table with relationships to different objects

2010-02-09 Thread Rob Sargent
You can also invert this, making all the image owners share a common base
table; the images are then dependent on that base:

base (id, type) where type is an enumeration or some such
person (id, name, etc) where id is FK to base.id
locations (id, address, etc) where id is FK to base.id
events (id, date, etc) where id is FK to base.id
images (id, baseid) where baseid is FK to base.id

views across base to the "data" tables for easier sql if desired
ORM: person, location and event would inherit from base
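
A rough DDL sketch of that inversion.  Table and column names follow the
shorthand above; the CHECK constraint and the column types are my own
guesses, not from the thread:

```sql
CREATE TABLE base (
    id   bigint PRIMARY KEY,
    type text NOT NULL CHECK (type IN ('person', 'location', 'event'))
);

CREATE TABLE person (
    id   bigint PRIMARY KEY REFERENCES base (id),
    name text
);

CREATE TABLE locations (
    id      bigint PRIMARY KEY REFERENCES base (id),
    address text
);

CREATE TABLE events (
    id         bigint PRIMARY KEY REFERENCES base (id),
    happens_on date
);

-- Every image points at one base row, whatever its concrete type,
-- so a single FK keeps referential integrity for all owner kinds.
CREATE TABLE images (
    id     bigint PRIMARY KEY,
    baseid bigint NOT NULL REFERENCES base (id)
);
```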

On 02/09/2010 04:59 AM, Richard Huxton wrote:
> On 09/02/10 07:49, Louis-David Mitterrand wrote:
>> Hello,
>>
>> In my database I have different object types (person, location, event,
>> etc.) all of which can have several images attached.
>>
>> What is the best way to manage a single 'image' table with relationships
>> to (potentially) many different object types while keeping referrential
>> integrity (foreign keys)?
> 
> The "clean" way to do this would be with a number of joining tables:
> 
> images(img_id, file_name, title ...)
> persons   (psn_id, first_name, last_name, ...)
> locations (loc_id, loc_name, lat, lon, ...)
> events(evt_id, evt_name, starts_on, ends_on, ...)
> 
> person_images   (psn_id, img_id)
> location_images (loc_id, img_id)
> event_images(evt_id, img_id)
> 
> You might then want a view over these joining tables to see what images
> go where...
> 
> CREATE VIEW all_images AS
> SELECT
>   i1.img_id,
>   i1.file_name,
>   'PERSON'::text AS link_type,
>   p.first_name || ' ' || p.last_name AS linked_name
> FROM
>   images i1
>   JOIN person_images pi ON i1.img_id = pi.img_id
>   JOIN persons p ON pi.psn_id = p.psn_id
> UNION ALL
> SELECT
>   i2.img_id,
>   i2.file_name,
>   'LOCATION'::text AS link_type,
>   l.loc_name AS linked_name
> FROM
>   images i2
>   JOIN location_images li ON i2.img_id = li.img_id
>   JOIN locations l ON li.loc_id = l.loc_id
> ...
> 
> You could do something clever with inheritance on the joining tables,
> but it's better to keep things simple imho.
> 



Re: [SQL] Can i force deletion of dependent rows?

2010-02-15 Thread Rob Sargent

then I think the OP also needs to DELETE FROM A WHERE x = 'your x';
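
Spelled out (the value 42 is a placeholder; nothing here beyond A, B, x
and z comes from the original schema), with the child rows removed first
so the default ON DELETE RESTRICT behavior never fires:

```sql
BEGIN;
-- delete dependents first, then the parent row; one transaction so
-- a failure partway through leaves nothing half-deleted
DELETE FROM B WHERE z = 42;   -- 42 stands in for "your x"
DELETE FROM A WHERE x = 42;
COMMIT;
```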

On 02/13/2010 12:05 AM, Tim Landscheidt wrote:

Shruthi A  wrote:


I have 2 tables (A and B) where the table B has a foreign key reference to
table A.   Like this:



create table A (x int primary key);
create table B (y int primary key, z int references A (x) );



As you can see, i've not specified any further 'on delete' clause to the
foreign key constraint like 'on delete cascade' or 'on delete restrict' or
'on delete set null' or 'on delete set default'.   This means that the
default behavior is followed ie 'on delete restrict' holds by default.



Now I want to delete a particular row from table A.  Is there any way I can
force deletion of all the dependent rows in table B?



Note that the tables have already been created in the above manner and
populated.  I want to do this deletion through DML only ie without any
change in the table definition.


"DELETE FROM B WHERE z = 'your x';"?

Tim






Re: [SQL] Join Advice and Assistance

2010-02-21 Thread Rob Sargent

Gary Chambers wrote:

All,

I've encountered a mental block due primarily to my inexperience with
moderately complex joins.  Given the following three tables:

Table "public.users"
  Column   |  Type  |   Modifiers
---++---
 userid| bigint | not null
 lname | character varying(64)  | not null
 fname | character varying(64)  | not null
 passwd| character varying(64)  | not null
 is_active | boolean| not null default true

   Table "public.user_emailaddrs"
  Column   |  Type  |   Modifiers
---++---
 userid| bigint | not null
 emailaddr | character varying(256) | not null
 is_active | boolean| not null default true

   Table "public.usermetas"
 Column |Type |   Modifiers
+-+
 userid | bigint  | not null
 startdate  | timestamp without time zone | not null default now()
 lastlogindate  | timestamp without time zone | not null default now()
 lastpwchange   | timestamp without time zone | not null default now()
 logincount | integer | not null default 1

users and usermetas is a one-to-one relationship.
users and user_emailaddrs is a one-to-many relationship.

What is the best way to get these tables joined on userid and return
all emailaddr records from user_emailaddrs (e.g. if userid has three
(3) e-mail addresses in user_emailaddrs)?  Is there any way to avoid
returning all fields in triplicate? Please feel free to criticize
where necessary.  Thank you very much in advance.

-- Gary Chambers

/* Nothing fancy and nothing Microsoft! */

  
If you want records for users without email addresses you will need an 
outer join on user_emailaddrs


/* untested */
select u.userid, u.lname, u.fname, m.startdate, a.emailaddr
from users u
join usermetas m on u.userid = m.userid
left join user_emailaddrs a on m.userid = a.userid






Re: [SQL] Join Advice and Assistance

2010-02-22 Thread Rob Sargent

My mistake.  Shouldn't answer these things late at night.

I think you will find that arrays will be your friend[s]
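
To sketch what "arrays" means here: array_agg (built in from PostgreSQL
8.4) can fold the many address rows into one column per user.  A sketch
only, untested against the thread's schema:

```sql
SELECT u.userid, u.lname, u.fname, m.startdate,
       array_agg(a.emailaddr) AS emailaddrs   -- e.g. {addr1,addr2,addr3}
FROM users u
JOIN usermetas m ON u.userid = m.userid
LEFT JOIN user_emailaddrs a ON u.userid = a.userid
GROUP BY u.userid, u.lname, u.fname, m.startdate;
```

A user with no addresses comes back as {NULL} because of the outer join,
so filter for that if it matters to the caller.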

On 02/22/2010 08:51 AM, Gary Chambers wrote:

Rob,

Thanks for the reply...


If you want records for user without email addresses you will need an outer
join on user_emailaddrs

/* untested */
select u.userId, u.lname, u.lastname ,m.startdate, a.emailaddr
from users u
join usermetas m on u.userid = m.userid
left join user_emailaddrs a on m.userid = a.userid


My question was related more toward eliminating the query returning a
record for each record in the one-to-many table.  I see now that I'm
going to have to aggregate the e-mail addresses in order to return a
single row.  Thanks again.

-- Gary Chambers

/* Nothing fancy and nothing Microsoft! */




Re: [SQL] an aggregate to return max() - 1 value?

2010-03-04 Thread Rob Sargent
'select max(col) ... where col < max(col)' is the idea, but an aggregate 
can't appear directly in WHERE, so the inner max has to be a subquery: 
'where col < (select max(col) ...)'
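
In full, against a hypothetical table t with column col:

```sql
-- the value just before max(col): the max of everything strictly
-- below the max (with duplicates, this skips repeated max values)
SELECT max(col) AS second_max
FROM t
WHERE col < (SELECT max(col) FROM t);
```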


On 03/04/2010 01:09 PM, Louis-David Mitterrand wrote:

Hi,

With builtin aggregates is it possible to return the value just before
max(col)?

Thanks,





Re: [SQL] Odd query behavior

2010-03-12 Thread Rob Sargent
Any views involved, or separate users/roles?

On 03/12/2010 08:41 AM, Dan McFadyen wrote:
> Hello,
> 
>  
> 
> I've come across an odd situation. I've had access to a database where a
> the following happens:
> 
>  
> 
> " SELECT * FROM table WHERE name LIKE 'abc%' "returns 2 rows...
> 
>  
> 
> but...
> 
>  
> 
> " SELECT * FROM table WHERE name IN (SELECT name FROM table WHERE name
> LIKE 'abc%') "  returns 0 rows...
> 
>  
> 
> I've also tried doing a join on the column to another table that has
> what appears to be the same data, and the join comes out with zero rows.
> Weirder yet, I took the hash of both of the values and it came out
> identical.
> 
>  
> 
> Now, I know the first thing you'll ask is if you can get a copy of the
> data to re-produce it, and the problem is, I got a  copy of the data in
> question, and loaded into another server and it works fine. Both queries
> return 2 rows.
> 
>  
> 
> I do know that server it was running on was Finnish. The database uses a
> UTF8 encoding though, so I don't know what that would make any difference.
> 
>  
> 
> Any ideas about things I should be looking at?
> 
>  
> 
> Thanks
> 
>  
> 
> Dan
> 
>  
> 
> The information transmitted is intended only for the person or entity to
> which it is addressed and may contain confidential and/or privileged
> material. Statements and opinions expressed in this e-mail may not
> represent those of the company. Any review, retransmission,
> dissemination or other use of, or taking of any action in reliance upon,
> this information by persons or entities other than the intended
> recipient is prohibited. If you received this in error, please contact
> the sender immediately and delete the material from any computer. Please
> see our legal details at http://www.cryptocard.com CRYPTOCard Inc. is
> registered in the province of Ontario, Canada with Business number 80531
> 6478. CRYPTOCard Europe is limited liability company registered in
> England and Wales (with registered number 05728808 and VAT number 869
> 3979 41); its registered office is Aztec Centre, Aztec West,
> Almondsbury, Bristol, UK, BS32 4TD



[SQL] installing uuid generators

2010-03-15 Thread Rob Sargent
Stop me if you've heard this one before :)

Given that pg_config --libdir yields "/usr/lib64"
to where/what would you expect

  "AS '$libdir/uuid-ossp', 'uuid_generate_v5'"

to resolve?

The loader script,
~/tools/postgresql-8.4.2/contrib/uuid-ossp/uuid-ossp.sql,  generates
"tools/postgresql-8.4.2/contrib/uuid-ossp/uuid-ossp.sql:9: ERROR:  could
not access file "$libdir/uuid-ossp": No such file or directory"



Re: [SQL] installing uuid generators

2010-03-16 Thread Rob Sargent


On 03/16/2010 03:20 AM, Richard Huxton wrote:
> On 15/03/10 23:58, Rob Sargent wrote:
>> Stop me if you've heard this one before :)
>>
>> Given that pg_config --libdir yields "/usr/lib64"
>> to where/what would you expect
>>
>>"AS '$libdir/uuid-ossp', 'uuid_generate_v5'"
>>
>> to resolve?
>>
>> The loader script,
>> ~/tools/postgresql-8.4.2/contrib/uuid-ossp/uuid-ossp.sql,  generates
>> "tools/postgresql-8.4.2/contrib/uuid-ossp/uuid-ossp.sql:9: ERROR:  could
>> not access file "$libdir/uuid-ossp": No such file or directory"
> 
> Could it be that $libdir doesn't contain uuid-ossp when the .sql script
> is being executed?
> 
> It's also worth checking that the pg_config you are executing is the one
> associated with postgresql-8.4.2 - make sure your PATH is set
> appropriately. I'm not sure that it would give this error, but I keep
> making the mistake of setting my PATH with the three versions I have on
> my dev box.
> 
Thanks for your suggestions.  You were exactly right.  I was pointed at
an 8.3 server in the hinterland, not my own.  Terribly sorry for the
noise. (Of course that fact was staring at me from the sql prompt, which
contains servername :/)

I'm still left worried about the correct procedure for getting uuid-ossp
installed properly on SUSE 11.  Does the server release's contrib
contain uuid-ossp?  I didn't see it on my desktop release.  (I don't
want to have to tell my mates to go through the issues I had building
the lib etc.)

Now I have to clean up after myself for having dropped links to that .so
everywhere!

Thanks again.




Re: [SQL] installing uuid generators

2010-03-16 Thread Rob Sargent


On 03/16/2010 02:26 PM, Tom Lane wrote:
> Richard Huxton  writes:
>> On 16/03/10 18:08, Rob Sargent wrote:
>>> I'm still left worried about the correct procedure for getting uuid-ossp
>>> installed properly on SUSE 11.  Does the server release's contrib
>>> contain uuid-ossp?  I didn't see it on my desktop release.  (I don't
>>> want to have to tell my mates to go through the issues I had building
>>> the lib etc.)
> 
>> I'd be surprised if there wasn't a -contrib or -extras rpm with the 
>> relevant files. The community rpms should have them if the "official" 
>> Suse ones don't
> 
> If there is a contrib rpm but it doesn't seem to contain uuid-ossp,
> the likely reason is that the maintainer never got around to adding
> --with-ossp-uuid to the build options.  (I know it took me a while
> to add that to the Fedora build :-(.)  You could confirm or deny
> that theory by looking at the output of pg_config --configure.
> If so, file a bug report with SUSE asking for that to be added.
> 
>   regards, tom lane

As Tom suspected the SUSE build doesn't name ossp stuff, to wit:
$ pg_config
BINDIR = /usr/bin
DOCDIR = /usr/share/doc/packages/postgresql
HTMLDIR = /usr/share/doc/packages/postgresql
INCLUDEDIR = /usr/include/pgsql
PKGINCLUDEDIR = /usr/include/pgsql
INCLUDEDIR-SERVER = /usr/include/pgsql/server
LIBDIR = /usr/lib64
PKGLIBDIR = /usr/lib64/postgresql
LOCALEDIR = /usr/share/locale
MANDIR = /usr/share/man
SHAREDIR = /usr/share/postgresql
SYSCONFDIR = /usr/etc/postgresql
PGXS = /usr/lib64/postgresql/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--prefix=/usr' '--libdir=/usr/lib64' '--bindir=/usr/bin'
'--includedir=/usr/include/pgsql' '--datadir=/usr/share/postgresql'
'--mandir=/usr/share/man' '--docdir=/usr/share/doc/packages'
'--disable-rpath' '--enable-nls' '--enable-thread-safety'
'--enable-integer-datetimes' '--without-python' '--without-perl'
'--without-tcl' '--with-openssl' '--with-pam' '--with-krb5'
'--with-gssapi' '--with-ldap' '--with-libxml' '--with-libxslt'
'--with-system-tzdata=/usr/share/zoneinfo' 'CFLAGS=-fmessage-length=0
-O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables
-fasynchronous-unwind-tables -g '
CC = gcc
CPPFLAGS = -D_GNU_SOURCE -I/usr/include/libxml2
CFLAGS = -fmessage-length=0 -O2 -Wall -D_FORTIFY_SOURCE=2
-fstack-protector -funwind-tables -fasynchronous-unwind-tables -g  -Wall
-Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement
-Wendif-labels -fno-strict-aliasing -fwrapv
CFLAGS_SL = -fpic
LDFLAGS = -Wl,--as-needed
LDFLAGS_SL =
LIBS = -lpgport -lxslt -lxml2 -lpam -lssl -lcrypto -lkrb5 -lcom_err
-lgssapi_krb5 -lz -lreadline -lcrypt -ldl -lm
VERSION = PostgreSQL 8.4.2

And I guess --without-perl explains some of the machinations necessary to
get bucardo working.

Thanks to all.



Re: [SQL] strange issue with UUID data types

2010-03-17 Thread Rob Sargent


On 03/17/2010 10:29 AM, Michael Gould wrote:
> I'm running Windows 2008 64 bit server with Postgres 8.4.2 (also have
> tried Windows 7 both 32 and 64 bit).  The origin database is SQL
> Anywhere 10.
> 
> I've got several tables that have a UUID data type with
> isscontrib.uuid_generate_v4() as the default value. 
> 
> All of the tables where these are the primary key work just fine and
> properly create the UUID columns.
> 
> I also have several child tables that call into the parent table to
> return the parent tables UUID.  All of the child tables column which I'm
> returning the parent tables UUID are defined as UUID data types and no
> default value.  On all of my machines the UUID returned from the parent
> tables is always 16 bytes, in fact the numbers are not truncated they
> don't make any sense or pattern.
> 
> Now I would think that this could be a problem with my program not
> having the proper length to hold the temporary data but I've also run
> this on 2 other machines, one through teamviewer to  Windows 2008
> standard server with Postgres 8.4.2 loaded and it's schema's loaded from
> a pgdumpall. 
> 
> They are also running the same SQL Anywhere version 10 that we're
> converting the data from.  When we run the same code on that machine it
> runs properly and the UUID returned is correct.  I've tried this on his
> remote server and it works.
> 
> Is there something I'm missing here?  Is there somekind of environmental
> parameter that I need to set?  We've been scratching our heads over this
> for about 10 days now trying to figure out why it works in one location
> and not others.
> 
> Best Regards
> 
> Michael Gould

Given my recent history re: uuids perhaps I shouldn't chime in here,
but can't help myself.

1. Get/run pg_config to see how postgres was built and installed.  It
may be using ossp uuid support?

2. Are all the uuid libraries 64 bit?

3. Has the programme been migrated from 32 bit?







Re: [SQL] Emacs sql-postgres (please, sorry for question not about PostgreSQL).

2010-03-22 Thread Hiltibidal, Rob
I recommend switching to aqua data studio

 

I can query mysql, postgres, db2, oracle with the same tool

 

 

 

From: pgsql-sql-ow...@postgresql.org
[mailto:pgsql-sql-ow...@postgresql.org] On Behalf Of Dmitriy Igrishin
Sent: Thursday, March 18, 2010 4:44 PM
To: postgres list
Subject: [SQL] Emacs sql-postgres (please, sorry for question not about
PostgreSQL).

 

Hello all Emacs users!

I am using Emacs recently. I love sql-mode, to use with PostgreSQL,
but I have problems with it.
When my SQL file (or buffer) are small (50-100 lines) I can send
it to SQLi buffer without any problems. But when I working with
large SQL file (e.g. complex database model, thousands of lines)
and send it to SQLi buffer it does not work properly. Something
going on and in SQLi buffer (psql) I see incomplete SQL
statement, for example:
super=# super'# super'# super'# super'#
It seems to single quote was not closed. But SQL is absolutely
correct and loads fine when I load it from file directly from psql.
I think, the problem with Emacs buffers... Please, help!

And please sorry, for question not about PostgreSQL...

Regards,
Dmitriy Igrishin 


PRIVILEGED AND CONFIDENTIAL
This email transmission contains privileged and confidential information 
intended only for the use of the individual or entity named above.  If the 
reader of the email is not the intended recipient or the employee or agent 
responsible for delivering it to the intended recipient, you are hereby 
notified that any use, dissemination or copying of this email transmission is 
strictly prohibited by the sender.  If you have received this transmission in 
error, please delete the email and immediately notify the sender via the email 
return address or mailto:postmas...@argushealth.com.  Thank you.





Re: [SQL] Week to date function

2010-03-30 Thread Hiltibidal, Rob
Uh... only 52 calendar weeks in a year? I'm almost sure that is the
norm.



-Original Message-
From: pgsql-sql-ow...@postgresql.org
[mailto:pgsql-sql-ow...@postgresql.org] On Behalf Of Ireneusz Pluta
Sent: Saturday, March 27, 2010 3:22 PM
To: Jorge Godoy
Cc: Sergey Konoplev; pgsql-sql@postgresql.org
Subject: Re: [SQL] Week to date function

Jorge Godoy pisze:
> Are you sure?
>
> http://en.wikipedia.org/wiki/ISO_8601
snip
>
> As Jan 1st, 2010 happened on a Friday, it was on week 53 of 2009.
you are right, thanks for pointing that out.

I didn't check the opposite way like this:

select date_part('week', '2010-01-01'::date);
 date_part
---
53

I need to recheck my code.
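
When rechecking, it is worth pairing 'week' with 'isoyear' (a field
available since PostgreSQL 8.3), since an ISO week number is only
meaningful together with its ISO year:

```sql
SELECT date_part('isoyear', DATE '2010-01-01') AS iso_year,
       date_part('week',    DATE '2010-01-01') AS iso_week;
-- iso_year = 2009, iso_week = 53: Jan 1st 2010 (a Friday) belongs
-- to week 53 of ISO year 2009, not week 1 of 2010.
```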










Re: [SQL] SQL Developer accessing PostgreSQL

2010-03-30 Thread Hiltibidal, Rob
I recommend Aqua Data Studio

 

Just drop in the jdbc jar

 

 

 

From: pgsql-sql-ow...@postgresql.org
[mailto:pgsql-sql-ow...@postgresql.org] On Behalf Of Snyder, James
Sent: Monday, March 29, 2010 11:34 AM
To: pgsql-sql@postgresql.org
Subject: [SQL] SQL Developer accessing PostgreSQL

 

Hello,

Is there a way to configure Oracle's SQL Developer to access a
PostgreSQL database? 

Thanks,Jim







Re: [SQL] Table Design for Hierarchical Data

2010-04-08 Thread Rob Sargent
The "parent" node in a genealogy is the mother-father tuple, so given
that as a singularity it still fits a tree.

On 04/08/2010 12:56 AM, Achilleas Mantzios wrote:
> On Wednesday 07 April 2010 23:33:07, Yeb Havinga wrote:
>> Achilleas Mantzios wrote:
>>> On Wednesday 07 April 2010 11:06:44, Yeb Havinga wrote:
>>>   
 Achilleas Mantzios wrote:
 
> You could also consider the genealogical approach, e.g.
>
>
> The parents of any node to the root, i.e. the path of any node to the 
> root are depicted as
> parents[0] : immediate parent
> parents[1] : immediate parent of the above parent
>   
>   
 What if I have more than one parent?
 
>>>
>>> Then it is no longer neither a tree, nor a hierarchical structure, but 
>>> rather a graph.
>>> This a totally different problem.
>>>   
>> My question was actually an attempt to point at the inability of what 
>> you call the 'genealogical approach' database design to store 
>> information of more than one parent.
> 
> 
> Are you suggesting that we should change our definition of trees ADT, just 
> because it does not
> fit the mere detail that humans have two parents?
> Or are you just suggesting that the "genealogical" term is inaccurate?
> 
> Take a look here: www.tetilab.com/roberto/pgsql/postgres-trees.pdf
> 
>>
>> regards,
>> Yeb Havinga
>>
>>
> 
> 
> 

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Table Design for Hierarchical Data

2010-04-12 Thread Rob Sargent
Believe me: "ego-ma-pa" will correctly define genealogical relationships
(at least among humans).

On 04/12/2010 02:14 AM, Achilleas Mantzios wrote:
> Στις Thursday 08 April 2010 17:59:01 ο/η Rob Sargent έγραψε:
>> The "parent" node in a genealogy is the mother-father tuple, so given
>> that as a singularity it still fits a tree.
> No, because the child and parent node would be of different schema.
>>
>> On 04/08/2010 12:56 AM, Achilleas Mantzios wrote:
>>> Στις Wednesday 07 April 2010 23:33:07 ο/η Yeb Havinga έγραψε:
>>>> Achilleas Mantzios wrote:
>>>>> Στις Wednesday 07 April 2010 11:06:44 ο/η Yeb Havinga έγραψε:
>>>>>   
>>>>>> Achilleas Mantzios wrote:
>>>>>> 
>>>>>>> You could also consider the genealogical approach, e.g.
>>>>>>>
>>>>>>>
>>>>>>> The parents of any node to the root, i.e. the path of any node to the 
>>>>>>> root are depicted as
>>>>>>> parents[0] : immediate parent
>>>>>>> parents[1] : immediate parent of the above parent
>>>>>>>   
>>>>>>>   
>>>>>> What I have more than one parent?
>>>>>> 
>>>>>
>>>>> Then it is no longer neither a tree, nor a hierarchical structure, but 
>>>>> rather a graph.
>>>>> This a totally different problem.
>>>>>   
>>>> My question was actually an attempt to point at the inability of what 
>>>> you call the 'genealogical approach' database design to store 
>>>> information of more than one parent.
>>>
>>>
>>> Are you suggesting that we should change our definition of trees ADT, just 
>>> because it does not
>>> fit the mere detail that humans have two parents?
>>> Or are you just suggesting that the "genealogical" term is inaccurate?
>>>
>>> Take a look here: www.tetilab.com/roberto/pgsql/postgres-trees.pdf
>>>
>>>>
>>>> regards,
>>>> Yeb Havinga
>>>>
>>>>
>>>
>>>
>>>
>>
> 
> 
> 



Re: [SQL] [ADMIN] Getting the initdb parameter values

2010-04-13 Thread Rob Sargent
As I read it, it doesn't matter what the value was originally, it's
what's in postgresql.conf _now_ that matters.

This is a resource allocation: I suspect there's no limit on how much of
your (often precious) memory you wish to set aside for this.

On 04/13/2010 07:01 AM, Satish Burnwal (sburnwal) wrote:
> What you have mentioned is already there in postgresql.conf file. I want
> to know whats the max value allowed ? Doc seems to be saying that max
> allowed value is determined during initdb. How to get this max allowed
> value ?
> 



Re: [SQL] Invalid message format Exception

2010-05-12 Thread Rob Sargent
I'm sure most will urge you to move to UTF-8 encoding asap.

Have you tracked down the "offending" insert statement?  Perhaps it's a
trigger trying to generate a log message?

On 05/12/2010 04:34 AM, Gnanakumar wrote:
> Hi,
> 
> Because there was no response for this question already posted in
> pgsql-j...@postgresql.org mailing list, I'm posting it here.
> 
> I'm using PostgreSQL 8.2 and my production server is based on CentOS release
> 5.2 (Final).
> 
> JDBC Jar: postgresql-8.2-508.jdbc4.jar
> 
> I noticed from my server log that some INSERT statements are failing with
> "invalid message format" PSQLException.  Once this exception is thrown, it
> is not committed to the database.  What could be the reason for this?
> 
> EXCEPTION
> org.postgresql.util.PSQLException: ERROR: invalid message format
> 
> STACKTRACE
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorI
> mpl.java:1592)
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.ja
> va:1327)
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:193)
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.j
> ava:452)
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2St
> atement.java:337)
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2State
> ment.java:283)
> 
> I also read about this on:
> http://archives.postgresql.org/pgsql-jdbc/2008-04/msg00085.php
>   "JDBC driver is doing something wrong when calculating a message
> length or message content for the frontend/backend protocol"
>   But how do I resolve this issue?
> http://archives.postgresql.org/pgsql-jdbc/2004-03/msg00142.php
>   you can't represent a \0 byte in a text/varchar constant
> 
> Even though I don't have much idea on encoding, I have a little doubt on
> whether encoding is a problem.
> 
> Here is my database encoding:
> mydb=# \l
>   List of databases
>  Name |  Owner   | Encoding
> --+--+---
>  mydb | zoniac   | SQL_ASCII
> 
> Any pointers in right direction are appreciated.
> 
> 
> 



Re: [SQL] Invalid message format Exception

2010-05-13 Thread Rob Sargent


On 05/13/2010 01:04 AM, Gnanakumar wrote:
> Hi Rob,
> 
>> I'm sure most will urge you to move to UTF-8 encoding asap.
> Did you mean the database encoding to changed from SQL_ASCII to UTF-8?
> 
Yes.  That's pretty much the standard now.  I think it's Postgres'
default installation now (but don't quote me on that).

>> Have you tracked down the "offending" insert statement?  Perhaps it's a
>> trigger trying to generate a log message?
> No, I don't have any trigger on this table for INS/UPD/DEL.
> 
I think you'll need to track down an example of the inserts that are
causing the problem.



Re: [SQL] safely exchanging primary keys?

2010-05-24 Thread Rob Sargent
And relying on keys for a sort order is a very wrong tree :)

On 05/24/2010 08:05 AM, Louis-David Mitterrand wrote:
> On Mon, May 24, 2010 at 10:51:01AM +0200, Louis-David Mitterrand wrote:
>> Hi,
>>
>> I have this function which swaps primary keys for cabin_types (so that
>> id_cabin_type ordering reflects natural data ordering):
> 
> Actually this function works fine. My problem was elsewhere. Sorry for
> barking up the wrong tree.
> 



Re: [SQL] how to construct sql

2010-06-02 Thread Hiltibidal, Rob
db2 has a group by rollup function.. does this exist in postgres?

-Original Message-
From: pgsql-sql-ow...@postgresql.org
[mailto:pgsql-sql-ow...@postgresql.org] On Behalf Of Oliveiros
Sent: Wednesday, June 02, 2010 11:55 AM
To: Wes James; pgsql-sql@postgresql.org
Subject: Re: [SQL] how to construct sql

Hi,
Have you already tried this out?

select page_count_pdate,
       MAX(page_count_count) - MIN(page_count_count)
from page_count
group by page_count_pdate;


Best,
Oliveiros

- Original Message - 
From: "Wes James" 
To: 
Sent: Wednesday, June 02, 2010 5:48 PM
Subject: [SQL] how to construct sql


>I am grabbing a printer total and putting it in a table.  The
> page_count is continuously increasing:
> 
> page_count_countpage_count_pdate
> 10   2010-05-10
> 20   2010-05-10
> 40   2010-05-11
> 60   2010-05-11
> 80   2010-05-11
> 100   2010-05-12
> 120   2010-05-12
> .
> 
> and so on.
> 
> I can do:
> 
> select sum(page_count_count) from page_count group by
page_count_pdate.
> 
> and get a total for a day.  But this is not the total I want.  I want
> the total page count for the day.  This would mean getting the first
> page count of the day and then subtracting that from last page_count
> for the day.  For 2010-05-11 above it would be
> 
> 80 - 40 = 40 total for the day.  Is there a way to do this with sql?
> 
> thx,
> 
> -wes
> 




Re: [SQL] Problem with pg_try_advisory_lock and two connections (seemingly) getting the same lock

2010-07-20 Thread Rob Sargent
You could implement an optimistic lock strategy by placing a 'version'
column in the table, incrementing it on a successful 'check-out', and
testing against the value the user read when he/she tries to act on the
record.  If someone else got there first, the second user fails to check
out the queue item.  Triggers could do the work.
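
That version-column idea might be sketched like this, reusing the queue
table from the quoted message below (the parameter placeholders and the
exact DDL are illustrative, not from the thread):

```sql
-- Add a version column for optimistic locking (sketch)
ALTER TABLE queue ADD COLUMN version integer NOT NULL DEFAULT 0;

-- Optimistic check-out: succeeds only if the version the worker read
-- is still current.  An update count of 0 means another worker won
-- the race and this worker should skip the row.
UPDATE queue
   SET version = version + 1
 WHERE id = $1          -- the job id the worker selected
   AND version = $2;    -- the version the worker originally read
```

A worker that gets an update count of 0 simply moves on to the next row.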

On 07/19/2010 08:06 PM, Brett Hoerner wrote:
> Hi,
> 
> I currently have a simple queue written ontop of Postgres.  Jobs are
> inserted and workers periodically check for jobs they can do, do them,
> and then delete the rows.  pg_try_advisory_lock is used to (attempt
> to) stop two workers from doing the same job.
> 
> (I'm working on moving to a "real" messaging queue right now, this is
> more a point of curiosity and education now.)
> 
> Here is my queue table,
> 
> CREATE TABLE queue (
> id serial NOT NULL PRIMARY KEY,
> rcvd timestamp with time zone,
> sent timestamp with time zone,
> host character varying(32),
> job character varying(32),
> arg text
> );
> 
> Here is an example query,
> 
> SELECT q.*
> FROM (SELECT id, job, arg
>   FROM queue
>   WHERE job = 'foo' OR job = 'bar'
>   OFFSET 0) AS q
> WHERE pg_try_advisory_lock(1, q.id)
> LIMIT 10
> 
> (For information on OFFSET 0 see:
> http://blog.endpoint.com/2009/04/offset-0-ftw.html)
> 
> Now if I have two workers running I will periodically see that each
> worker gets a row with the same q.id (and thus does the work).  How is
> that possible?  The outer query seemingly does a WHERE on an
> advisory_lock.
> 
> Does anyone have any ideas?  Am I grossly misusing advisory_locks?
> 
> Thanks,
> Brett
> 



[SQL] pg_config -less

2010-09-23 Thread Rob Sargent
A local installation of 9.0 does not seem to include pg_config (it is not
alongside pg_dump, pg_ctl etc., and there is no man page).

This is a Suse box (openSUSE 11.2 (x86_64)).

Is it possible to dig around for the info returned from pg_config
--configure (especially uuid support)?

Thanks.



Re: [SQL] pg_config -less

2010-09-23 Thread Rob Sargent
Absolutely correct.  The dev package was later installed so I got my
answer (no real uuid support) but I was wondering if it was possible to
get that sort of info from psql directly.

On 09/23/2010 08:49 AM, Tom Lane wrote:
> Rob Sargent  writes:
>> A local installation of 9.0 does not seem to include pg_config. (not
>> with pg_dump pg_ctl etc, no man page)
> 
>> This is a Suse box (openSUSE 11.2 (x86_64)).
> 
> Most likely, Suse's packager decided to put it in the postgresql-devel
> subpackage (or maybe they spell it postgresql-dev or something else).
> It should certainly be available somewhere from them --- if not,
> file a packaging bug report.
> 
>   regards, tom lane
> 



Re: [SQL] pg_config -less

2010-09-23 Thread Rob Sargent
And while on the topic of uuid (again), building postgres 9 from source
seems to transpose the library name: libossp-uuid vs. libuuid-ossp.  I
had to put in a symlink to get configure to agree I had the library (rev
1.6.2 from ossp.org)


On 09/23/2010 08:49 AM, Tom Lane wrote:
> Rob Sargent  writes:
>> A local installation of 9.0 does not seem to include pg_config. (not
>> with pg_dump pg_ctl etc, no man page)
> 
>> This is a Suse box (openSUSE 11.2 (x86_64)).
> 
> Most likely, Suse's packager decided to put it in the postgresql-devel
> subpackage (or maybe they spell it postgresql-dev or something else).
> It should certainly be available somewhere from them --- if not,
> file a packaging bug report.
> 
>   regards, tom lane
> 



Re: [SQL] Duplicates Processing

2010-10-08 Thread Rob Sargent

On 10/08/2010 01:42 PM, Gary Chambers wrote:
> Tim,
> 
> Thanks for taking the time to reply!
> 
>> | INSERT INTO substitutes ([...])
>> |   SELECT [...] FROM
>> | (SELECT *,
>> | ROW_NUMBER() OVER (PARTITION BY wattage, tolerance, temperature
>> | ORDER BY part_number) AS RN
>> |  FROM parts) AS SubQuery
>> |   WHERE RN > 1;
> 
>> | DELETE FROM parts
>> | WHERE primary_key IN
>> |   (SELECT primary_key FROM
>> | (SELECT *,
>> | ROW_NUMBER() OVER (PARTITION BY wattage, tolerance, temperature
>> |ORDER BY part_number) AS RN
>> |  FROM parts) AS SubQuery
>> |WHERE RN > 1);
> 
> You have solved the problem precisely as I described it.  In my haste
> to make the request for assistance, I omitted one critical piece of
> information that may call into question my data model.  In its current
> state, my substitute parts table contains only the part number (the
> "new" one, so-to-speak), a foreign key reference to the original parts
> table, and some location data (which is also in the original parts
> table).  Is there any advice you can offer in light of what I have
> just described? I apologize for the oversight.
> 
> -- Gary Chambers
> 

Perhaps a trade-off between nullable fields and redundant types: your
original table could simply have a nullable column called
isReplacementFor, into which you place, in each subsequent duplicate
row, the id of the first instance found.



Re: [SQL] Duplicates Processing

2010-10-08 Thread Rob Sargent
Yes.  With this you can find all part numbers/supplies which match your
value, wattage criteria in one table. Or exclude any which have a
non-null is_replacement_for value.

If you need to drop the "replaceable" variant, you choose which of the
replacements to promote and update the others to match the new
"replaceable".  They're all instances of the same type of thing so in my
view they ought to live in the same table.


Also possible to maintain the replacement structure via a (self) join
record with replaceable/is_replacement_for tuples.  You have a similar
but slightly more involved maintenance issue of course.
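
One hedged reading of that join-record idea (the table and column names
here are my guesses, not from the thread):

```sql
-- Self-referencing link table holding (replaceable, replacement) pairs
CREATE TABLE part_replacement (
    replaceable integer NOT NULL REFERENCES parts (part_number),
    replacement integer NOT NULL REFERENCES parts (part_number),
    PRIMARY KEY (replaceable, replacement)
);

-- All replacements on file for a given (hypothetical) part number:
SELECT replacement FROM part_replacement WHERE replaceable = 123;
```

Promoting a replacement then becomes an update of these tuples rather
than a rewrite of the parts rows themselves.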



On 10/08/2010 02:42 PM, Gary Chambers wrote:
> Rob,
> 
>> Perhaps a trade off between nullable fields and redundant types.  If
>> your original table simply had a nullable column called
>> isReplacementFor, into which you place in the subsequent rows the id of
>> the first instance found.
> 
> Am I misunderstanding you when you're suggesting a table like:
> 
> part_number   INTEGER
> is_replacement_for INTEGER references part_number
> value   INTEGER
> wattage   FLOAT8
> ...
> 
> -- Gary Chambers



Re: [SQL] Duplicates Processing

2010-10-08 Thread Rob Sargent

My understanding was that the values were in fact in the data of the
"replacers".  If not, you are correct.  In this case the replacers are
more like alias for the only instance you have.

If the replacers are immutable, by all means ship them off to some other
table (where I suppose they become pointers to other suppliers of the
type of thing holding the real data). Not sure I've ever met immutable
data but there you go ;)

And to your point of self-reference, it would be to a co-worker more
than a manager.  Managers are often not good replacements for workers. :)


On 10/08/2010 04:12 PM, Gary Chambers wrote:
> Rob,
> 
>> Yes.  With this you can find all part numbers/supplies which match your
>> value, wattage criteria in one table. Or exclude any which have a
>> non-null is_replacement_for value.
> 
> I understand -- thanks.  I have received contradictory advice in a
> purely data modeling context.  What about the null values that will be
> in the properties columns of the part?  It would appear to be more
> applicable to an employee database where the columns are populated
> regardless and the "replacement_for" in the context of our discussion
> would be a self-reference to the employee's manager.  No?
> 
> Thanks again for your help.
> 
> -- Gary Chambers



Re: [SQL] Duplicates Processing

2010-10-12 Thread Rob Sargent
Gross generalization perhaps, but keep in mind what the overall
app/system needs of the components.  Bounce those off your standard ER
modeling instincts and vice versa and you have a chance!

On 10/12/2010 08:19 AM, Gary Chambers wrote:
> Rob,
> 
> Thanks for your reply!
> 
>> And to your point of self-reference, it would be to a co-worker more
>> than a manager.  Managers are often not good replacements for workers. :)
> 
> :)  Absolutely!
> 
> I was having a conversation over on #postgresql yesterday about this
> and, due to my inexperience with managing electronic components (and
> some of the database issues involved), I still have a few unanswered
> questions.  I would like to create a master table of parts derived
> from the individual tables (e.g. resistors, capacitors, diodes, etc.).
>  I have the rare opportunity to build this from the ground, up and
> would like to ensure that I get it right.  Thanks for any advice that
> you (or anyone) can offer.
> 
> -- Gary Chambers



Re: [SQL] large xml database

2010-10-30 Thread Rob Sargent



Andreas Joseph Krogh wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 10/30/2010 11:49 PM, Viktor Bojović wrote:
  

Hi,
i have very big XML documment which is larger than 50GB and want to import
it into databse, and transform it to relational schema.
When splitting this documment to smaller independent xml documments i get
~11.1mil XML documents.
I have spent lots of time trying to get fastest way to transform all this
data but every time i give up because it takes too much time. Sometimes more
than month it would take if not stopped.
I have tried to insert each line as varchar into database and parse it using
plperl regex..
also i have tried to store every documment as XML and parse it, but it is
also to slow.
i have tried to store every documment as varchar but it is also slow when
using regex to get data.

many tries have failed because 8GB of ram and 10gb of swap were not enough.
also sometimes i get that more than 2^32 operations were performed, and
functions stopped to work.

i wanted just to ask if someone knows how to speed this up.

thanx in advance



Use a SAX-parser and handle the endElement(String name) events to insert
the element's content into your db.

  
If you still have the 11 million subfiles, I would start there: SAX-parse 
as above and maybe generate CSV files, then bulk-load those with COPY, 
since a begin/end transaction on each data item discovered will hurt.
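
The bulk-load step could be as simple as the following (the file path
and table/column names are placeholders):

```sql
-- One COPY per generated CSV file; COPY avoids per-row INSERT and
-- per-row transaction overhead.
BEGIN;
COPY target_table (col_a, col_b, col_c)
FROM '/tmp/chunk_0001.csv' WITH CSV;
COMMIT;
```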


Can the subfiles be segregated into specific data types, or at least 
holders of specific data types, such that they relate to a specific 
subset of your new db/schema?  This will play into what gets loaded 
first and who depends on whom w.r.t. foreign keys etc.


You can parallelize marginally with multiple threads (hoping to split 
file reading from SAX parsing from element construction from save/network) 
but more boxes would be the way to go. Partitioning remains a problem.





Re: [SQL] large xml database

2010-10-31 Thread Rob Sargent




Viktor Bojović wrote:



On Sun, Oct 31, 2010 at 2:26 AM, James Cloos <cl...@jhcloos.com> wrote:

> "VB" == Viktor Bojović <viktor.bojo...@gmail.com> writes:

VB> i have very big XML documment which is larger than 50GB and
want to
VB> import it into databse, and transform it to relational schema.

Were I doing such a conversion, I'd use perl to convert the xml into
something which COPY can grok. Any other language, script or compiled,
would work just as well. The goal is to avoid having to slurp the
whole
xml structure into memory.

-JimC
--
James Cloos <cl...@jhcloos.com>
OpenPGP: 1024D/ED7DAEA6


The insertion into the database is not a very big problem.
I insert it as XML docs, or as varchar lines, or as XML docs in varchar 
format. Usually I use a transaction and commit after each block of 1000 
inserts and it goes very fast, so insertion is over after a few hours.
But the problem occurs when I want to transform it inside the database 
from XML (varchar or XML format) into tables by parsing.
That processing takes too much time in the database no matter whether it 
is stored as varchar lines, varchar nodes or the XML data type.


--
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Are you saying you first load the xml into the database, then parse that 
xml into instances of objects (rows in tables)?





Re: [SQL] large xml database

2010-10-31 Thread Rob Sargent



Viktor Bojović wrote:



On Sun, Oct 31, 2010 at 9:42 PM, Rob Sargent <robjsarg...@gmail.com> wrote:

Viktor Bojović wrote:

On Sun, Oct 31, 2010 at 2:26 AM, James Cloos <cl...@jhcloos.com> wrote:

   >>>>> "VB" == Viktor Bojović <viktor.bojo...@gmail.com> writes:

   VB> i have very big XML documment which is larger than 50GB and want to
   VB> import it into databse, and transform it to relational schema.

   Were I doing such a conversion, I'd use perl to convert the xml into
   something which COPY can grok. Any other language, script or compiled,
   would work just as well. The goal is to avoid having to slurp the whole
   xml structure into memory.

   -JimC
   --
   James Cloos <cl...@jhcloos.com>
   OpenPGP: 1024D/ED7DAEA6

The insertion into the database is not a very big problem.
I insert it as XML docs, or as varchar lines, or as XML docs in varchar
format. Usually I use a transaction and commit after each block of 1000
inserts and it goes very fast, so insertion is over after a few hours.
But the problem occurs when I want to transform it inside the database
from XML (varchar or XML format) into tables by parsing.
That processing takes too much time in the database no matter whether it
is stored as varchar lines, varchar nodes or the XML data type.

-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Are you saying you first load the xml into the database, then parse that
xml into instances of objects (rows in tables)?


Yes. That way takes less RAM than using twig or simple xml, so I tried 
using postgres xml functions or regexes.




--
---
Viktor Bojović
---
Wherever I go, Murphy goes with me
Is the entire load a set of "entry" elements as your example contains?  
That I believe would parse nicely into a tidy but non-trivial schema 
directly, without the "middle-man" of having xml in the db (unless of course 
you prefer xpath to sql ;) )


The single most significant caveat I would have for you is: Beware, 
Biologists involved. Inconsistency (at least overloaded concepts) is 
almost assured :).  EMBL too is suspect imho, but I've been out of that 
arena for a while.








Re: [SQL] large xml database

2010-10-31 Thread Rob Sargent

Skipping much of the included thread, urgently.

btw.
you have mentioned "This I believe would parse nicely into a tidy but 
non-trivial schema directly", does it mean that postgre has a support 
for restoring the database schema from xml files?


--
---
Viktor Bojović
---
Wherever I go, Murphy goes with me
No. Sorry.  Did not mean to imply that.  If you had an xsd file from them 
you might have a better chance.  Have never looked, but I bet someone has 
tried that sort of manipulation.  Throw out the cardinality constraints 
of course :).




[SQL] indexing longish string

2010-11-30 Thread Rob Sargent
Were we to create a table which included a text field for a small block
of xml (100-1000 chars worth), would an index on that field be useful
against exact match queries?

We're wondering if a criterion such as "where 'a string expected to be
of size range 100 to 500' = tableWithStrings.stringSearched" would make
good use of an index on the "stringSearched" column.

The key is that there will only be exact match queries.

Thanks for your thoughts.





Re: [SQL] indexing longish string

2010-11-30 Thread Rob Sargent
If the performance against an index doesn't cut it, we would be forced
to choose just such an implementation, but if pg can do it straight up
that would be less work for us.  A good thing, to be sure.
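
The md5 route jose suggests can stay entirely inside Postgres as an
expression index; a sketch using the names from our question (the index
name is made up):

```sql
-- Index the hash instead of the longish string itself
CREATE INDEX idx_strings_md5
    ON tableWithStrings (md5(stringSearched));

-- An exact-match query written against the same expression can use it:
SELECT *
  FROM tableWithStrings
 WHERE md5(stringSearched) = md5('...the 100-1000 char xml block...');
```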

On 11/30/2010 10:50 AM, jose wrote:
> Why don't you use some type of hash like md5 for indexing ?
> 
> 2010/11/30 Rob Sargent :
>> Were we to create a table which included a text field for a small block
>> of xml (100-1000 chars worth), would an index on that field be useful
>> against exact match queries?
>>
>> We're wondering if a criterion such as "where 'a string expected to be
>> of size range 100 to 500' = tabelWithStrings.stringSearched" would make
>> good use of an index on "stringSearched" column.
>>
>> The key is that there will only be exact match queries.
>>
>> Thanks for your thoughts.
>>
>>
>>
>>
> 



Re: [SQL] Using count on a join, group by required?

2010-12-14 Thread Rob Sargent

Shouldn't you be doing your own homework?

emaratiyya wrote:

Hi,Please help me solving this problem. I appreciate..Thankyou.

Create the following table and insert few arbitrary records. 
Product (product_id, product_name, supplier_name, quantity, price_per_unit)

You are required to create PL/SQL package that achieves the following
functionalities:

•   Obtaining the product supplier name by using the product_id
•   Changing the price_per_unit by using the product_id
•   When changing the price of the product, you have to have PL/SQL code 
that
keeps the history of the prices. For this functionality you need to create
audit table: product_audit(product_id, old_price, new_price, date_of_change)
•   Updating the quantity by using the product_id
•   Get warning when the product level of stock (quantity) goes below 
certain
threshold (e.g. 10)

  




Re: [SQL] Using count on a join, group by required?

2010-12-14 Thread Rob Sargent
If you showed your work, you might get decent hints if not solutions.

On 12/14/2010 09:23 AM, Rob Sargent wrote:
> Shouldn't you be doing your own homework?
> 
> emaratiyya wrote:
>> Hi,Please help me solving this problem. I appreciate..Thankyou.
>>
>> Create the following table and insert few arbitrary records. Product
>> (product_id, product_name, supplier_name, quantity, price_per_unit)
>> You are required to create PL/SQL package that achieves the following
>> functionalities:
>>
>> •Obtaining the product supplier name by using the product_id
>> •Changing the price_per_unit by using the product_id
>> •When changing the price of the product, you have to have PL/SQL
>> code that
>> keeps the history of the prices. For this functionality you need to
>> create
>> audit table: product_audit(product_id, old_price, new_price,
>> date_of_change)
>> •Updating the quantity by using the product_id
>> •Get warning when the product level of stock (quantity) goes below
>> certain
>> threshold (e.g. 10)
>>
>>   



Re: [SQL] DELETE FROM takes forever

2011-02-11 Thread Hiltibidal, Rob
Even DB2 and Oracle will take hellishly long times to perform large
scale deletes

What I do for a database just under 300 gb in size is do deletes in
groups of 10,000

So your where clause might look something like 

WHERE id NOT IN  (SELECT id FROM unique_records fetch first 10000 rows
only)

DB2 has a clause of "with ur" to specify it's ok to use dirty reads. I am
not sure if postgres has this, been awhile. The goal is to make sure
postgres allows "dirty reads". It prevents row locking... 

In DB2 the query would look like:
DELETE FROM records WHERE id NOT IN  (SELECT id FROM
unique_records fetch first 10000 rows only) with ur

Other tips that might enhance the performance is make sure the
unique_records table is indexed... even if it has a primary key. In some
cases the optimizer may choose an index to satisfy the select clause or
it may do a table scan. Table scans are more costly than index scans. 

What's going to save you the real time is to break up your delete into
chunks. All the rdbms log the transactions and each delete is a
transaction. See where this is going?

Some rdbms allow you to turn off "transactional logging" some don't. DB2
doesn't (( at least not without more effort than reasonably necessary ))
so I write my delete queries to use chunks at a time. The most I would
recommend is 100,000 records deleted at once. Play with timing and see
what works for you
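
Translated to Postgres (whose MVCC reads don't block writers, so no
"with ur" equivalent is needed), the chunked variant of the original
delete might look like the following, run repeatedly until it reports
0 rows; the batch size of 10,000 follows the advice above:

```sql
-- Delete at most 10,000 orphaned rows per statement; each statement
-- is its own transaction, keeping the lock and log footprint small.
DELETE FROM records
 WHERE id IN (SELECT r.id
                FROM records r
               WHERE r.id NOT IN (SELECT id FROM unique_records)
               LIMIT 10000);
```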

Hope this helps

-Rob



-Original Message-
From: pgsql-sql-ow...@postgresql.org
[mailto:pgsql-sql-ow...@postgresql.org] On Behalf Of Josh
Sent: Thursday, February 10, 2011 11:57 AM
To: pgsql-sql@postgresql.org
Subject: [SQL] DELETE FROM takes forever

Hi

I'm trying to do a DELETE FROM on my large table (about 800 million
rows) based on the contents of another, moderately large table (about
110 million rows). The command I'm using is:

DELETE FROM records WHERE id NOT IN (SELECT id FROM unique_records);

This process ran for about two weeks before I decided to stop it -- it
was dragging down the DB server. I can understand long-running
processes, but two weeks seems a bit much even for a big table.

Is this the best way to approach the problem? Is there a better way?

Some background: The server is version 8.3, running nothing but Pg.
The 'records' table has 'id' as its primary key, and one other index
on another column. The table is referenced by just about every other
table in my DB (about 15 other tables) via foreign key constraints,
which I don't want to break (which is why I'm not just recreating the
table rather than deleting rows). Most of the dependent tables have ON
DELETE CASCADE. The 'unique_records' table is a temp table I got via
something like: SELECT DISTINCT (other_column) id INTO unique_records
FROM records


Thanks very much!

Josh Leder


