Re: [SQL] CREATE INDEX with order clause

2006-02-03 Thread Ragnar
On Wed, 2006-02-01 at 10:46 -0500, Daniel Caune wrote:
> Hi,
> 
[snip need for reverse-sort operator class]

 
> 
> SELECT GAME_CLIENT_VERSION
>   FROM GSLOG_EVENT
>   WHERE PLAYER_USERNAME = ?
> AND EVENT_NAME = ?
> AND EVENT_DATE_CREATED < ?
>   ORDER BY EVENT_DATE_CREATED DESC
>   LIMIT 1

>  
> 
> Actually, I’m not sure that is useful; perhaps PostgreSQL handles
> pretty well such query using an index such as:
> CREATE INDEX IDX_GSLOG_EVENT_PLAYER_EVENT_TIME_DESC
>   ON GSLOG_EVENT(PLAYER_USERNAME,
>  EVENT_NAME,
>  EVENT_DATE_CREATED);
>  
> 
> Any idea?

does the index work with:
  SELECT GAME_CLIENT_VERSION
    FROM GSLOG_EVENT
    WHERE PLAYER_USERNAME = ?
      AND EVENT_NAME = ?
      AND EVENT_DATE_CREATED < ?
    ORDER BY PLAYER_USERNAME DESC,
             EVENT_NAME DESC,
             EVENT_DATE_CREATED DESC
    LIMIT 1

gnari



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [SQL] to count no of columns in a table

2006-02-16 Thread Ragnar
On fim, 2006-02-16 at 14:43 +0530, AKHILESH GUPTA wrote:
> i just want to know one thing that is there any function in PGSQL
> which gives me the total number of columns in a table.
> OR
> just like we are using count(*), it gives us total number or rows in a
> table, just like i want to know the total number of columns present in
> the table

this is the kind of thing the information_schema is for

  select count(*) from information_schema.columns 
  where table_name='x';
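If the same table name exists in more than one schema, the count above will sum across all of them; a hedged variant (assuming the table lives in the public schema):

```sql
SELECT count(*)
  FROM information_schema.columns
 WHERE table_schema = 'public'
   AND table_name = 'x';
```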


gnari





Re: [SQL] Dump/restore comments only?

2006-02-27 Thread Ragnar
On mán, 2006-02-27 at 13:56 +1100, Bath, David wrote:

> There have been a number of times when I've wanted to ignore everything
> in a dump file apart from comments.  I am interested not just in comments
> for tables/views/columns but constraints, indices and functions as well.
> 
> Many of my comments are multi-line and often resemble manpages, so a
> simple grep for '^COMMENT ON' will not work for me.
> 
> Has anyone got a script/view in SQL that will generate fully-qualified
> COMMENT ON statements from the pg_catalog/information_schema, a
> sed|awk|perl script to munge the ascii file produced by pg_dump,
> or even pg_dump undocumented options that might help me?

You might find it simpler to do this if you use the 
custom format. Something like :

pg_dump -Fc mydat > mydat.dump
pg_restore -l mydat.dump > mydat.toc
grep COMMENT mydat.toc > mydat.comments.toc
pg_restore -L mydat.comments.toc mydat.dump


gnari





Re: [SQL] Sequential scan where Index scan expected (update)

2006-03-03 Thread Ragnar
On fim, 2006-03-02 at 23:28 -0800, Bryce Nesbitt wrote:
> I'm getting sequential scans (and poor performance), on scans using my
> primary keys.  This is an older postgres.
> Can anyone help figure out why?
> 
> 
> demo=# \d xx_thing
>  Table "public.xx_thing"
>  Column  |Type | Modifiers
> -+-+---
>  thing_id  | bigint  | not null
...
> demo=# explain update xx_thing_event set thing_color='foo' where
> thing_event_id=1;
>  QUERY PLAN
> -
>  Seq Scan on xx_thing_event  (cost=0.00..5842.48 rows=1 width=110)
...
> demo=# select * from version();
>  version
> --
>  PostgreSQL 7.4.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.3

try one of:
  ... WHERE thing_event_id=1::bigint
  ... WHERE thing_event_id='1'

or upgrade to >= 8.0

your problem is that the 1 in 'thing_event_id=1' is parsed as an
integer, not a bigint, so the planner does not find a matching
index. so you either have to specify ::bigint or ::int8 in the
query, or quote the number, which will cause postgres to cast it
to the column's type.

if you try
  ... WHERE thing_event_id=10000000000
you will see the index used, because this number cannot
be an int4 and so is parsed as a bigint.

newer versions of PostgreSQL can deal with this.

gnari







Re: [SQL] Why do I get these results?????

2006-03-03 Thread Ragnar
On fös, 2006-03-03 at 09:50 +0100, Joost Kraaijeveld wrote:
> Hi,
> 
> Why do I get the following result from the query below? I expected that,
> given the fact that there are over 100 "Jansen" (but no "jansen") in
> "Nijmegen" the first record would definitively be people living in
> "Nijmegen". If I change the order to the order that is commented out,
> the query goes OK.
> 
> SELECT 
> addresses.zipcode, 
> addresses.city, 
> addresses.housenumber,
> addresses.housenumberdetails, 
> customers.lastname 
> FROM prototype.customers JOIN prototype.addresses ON
> customers.contactaddress = addresses.objectid
> WHERE
> TRIM(UPPER(customers.lastname)) >= TRIM(UPPER('Jansen'))
> AND
> TRIM(UPPER(addresses.city)) >= TRIM(UPPER('NIJMEGEN'))
> ORDER BY customers.lastname, addresses.city, addresses.zipcode
> --ORDER BY addresses.city, customers.lastname, addresses.zipcode
> limit 5
> 
> Result:
> 
> "3089TN";"ROTTERDAM";"5";"";"jansen"
> "5712XG";"SOMEREN";"13";"";"jansen"
> "6511PS";"NIJMEGEN";"23";"";"Jansen"
> "6523RE";"NIJMEGEN";"13";"";"Jansen"
> "6524NP";"NIJMEGEN";"37";"A";"Jansen"

What LOCALE are you using?
Looks like it either sorts lowercase before uppercase
or treats them as equivalent.


Why do you not provide us with a simple test case?
Why involve a join and irrelevant columns?

What does a simple test like this do for you?:

test=# create table t (c text, n text);
CREATE TABLE
test=# insert into t values ('ROTTERDAM','jansen');
INSERT 33566780 1
test=# insert into t values ('SOMEREN','jansen');
INSERT 33566781 1
test=# insert into t values ('NIJMEGEN','Jansen');
INSERT 33566782 1
test=# insert into t values ('NIJMEGEN','Jansen');
INSERT 33566783 1
test=# insert into t values ('NIJMEGEN','Jansen');
INSERT 33566784 1
test=# select * from t ORDER BY n,c;
     c     |   n
-----------+--------
 NIJMEGEN  | Jansen
 NIJMEGEN  | Jansen
 NIJMEGEN  | Jansen
 ROTTERDAM | jansen
 SOMEREN   | jansen
(5 rows)

(this is in C locale)

gnari





Re: [SQL] is an explicit lock necessary?

2006-05-04 Thread Ragnar
On fim, 2006-05-04 at 11:10 -0700, Ash Grove wrote:
> Hi,
> 
> Does beginning a transaction put locks on the tables
> queried within the transaction?
> 
> In the example below, is #2 necessary? My thought was
> that I would need to use an explicit lock to make sure
> that the sequence value I'm selecting in #4 is the
> same one that is generated from #3. I'm worried about
> another instance of the application doing an insert on
> table1 between #3 and #4.
> 
> 1) From my app, I turn off autocommit.
> 2) I lock table1 in access exclusive mode
> 3) I do an insert into table1 which generates a
> primary key via nextval on sequence1
> 4) I grab grab the primary key value via currval on
> sequence1
> 5) I do an insert on table2 which includes table1's
> primary key so I can join the records later.
> 6) I manually commit

No. The locking is not necessary.

currval is defined to return the value most recently obtained by nextval
for this sequence in the current session. 

see:
http://www.postgresql.org/docs/8.1/interactive/functions-sequence.html

gnari






Re: [SQL] i need solution to this problem

2006-06-28 Thread Ragnar
On mið, 2006-06-28 at 11:15 +0530, Penchalaiah P. wrote:

> I have tables like 1) emp_table (personal_no integer (foreign key),
> cdacno varchar (primary key),name varchar);

> 2) Rank_table (rank_id varchar (primary key), rank_name varchar);

> 3) Rank_date_table (rank_id (foreign key), rank_date date);

> 4) Unit_table (unit_id varchar (primarykey), unit_name varchar);

> 5) Personal_table (per_no varchar (primary key), pername varchar);

> My query is ….if I give cdacno I have to get per_no from
> personal_table.. With this I have to display rank_name from
> rank_table ,name from emp_table, unit_name from unit_master..

it is not clear what the relationships are between the tables. for
example what is the foreign key to unit_table?

how does the rank connect to emp_table or personal_table?

gnari

> 




Re: [SQL] Is it possible to left join based on previous joins

2006-09-07 Thread Ragnar
On fim, 2006-09-07 at 10:02 -0400, Emi Lu wrote:
> I tried the example as the following:
> 
> create table a(col1);
> create table b(col1, col2)
> 
> select a.*
> from a inner join b using(col2)
> left join b.col2 as c on (c.col1 = a.col1)
> 
> System notifies me that b is not a schema name.
> 
> So, I guess the approach that I tried to do is not acceptable by Pgsql 
> grammar.

it seems to me that basically you want to
select from a table whose name is stored in another
table.

one way to do that would be to use plpgsql or
some other procedural language to create
a set returning function, so that you
could do:
SELECT * from selfromtab('sometable');

after that is done, you might be able to use that
in joins.

gnari







> 
> 
> > 
> > Is it possible to do something like:
> > 
> > select ...
> > from t1
> > inner join t2 ...
> > left join t2.colN
> > 
> > When t1 inner join with t2 I got unique result for t2.colN( colN's value 
> > is table name).
> > 
> > Can I continue to left join with the column "colN" in table t2?
> > 
> > Thanks
> > 
> > 
> > 
> > 




Re: [SQL] [HACKERS] lower() not working correctly...?

2006-09-15 Thread Ragnar
On fös, 2006-09-15 at 09:52 +0200, Andreas Joseph Krogh wrote:
> I have the following query:
> 
> select lower(firstname) || ' ' || lower(lastname) from person
> 
> firstname and lastname are VARCHAR
> 
> lower() returns NULL when firstname OR lastname is NULL, is this correct?

In fact, your problem has nothing to do with lower().

You probably did not expect the || operator
to return NULL when any of the operands is NULL

as seen in 
select firstname || ' ' || lastname from person

so, as someone else has already mentioned,
you should use coalesce.
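For example (a sketch using the column names from the original query; the empty string as fallback is an assumption):

```sql
SELECT lower(coalesce(firstname, '')) || ' ' || lower(coalesce(lastname, ''))
  FROM person;
```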

gnari





Re: [SQL] Dividing results from two tables with different time

2006-09-17 Thread Ragnar
On fös, 2006-09-15 at 10:34 -0500, Becky Hoff wrote:
> I hope I can explain this clearly.  


Not clear enough for me.


> I have two queries I’m running in a report.
> The first one is:

[snip]

> No matter which query I place them in it gives me the wrong data
> because of the time frames.  How can I get the correct
> data?


It is not clear what you consider correct data.

I suggest you devise a simple test case, with just
the minimum number of tables and columns needed
to explain your problem, show us a small data set
and describe your wanted output.

For example, avoid joins in your test case, unless
they are central to your problem

gnari





Re: [SQL] tree-structured query

2006-09-30 Thread Ragnar
On fös, 2006-09-29 at 15:00 -0700, chester c young wrote:
> in a simple tree structured table
> 
> table t(
>   id primary key,
>   pnt_id references t( id ),
>   name
> );
> 
> does anyone know an easy howbeit sneaky way of determining ancestory
> and decendency without recursive functions,

how about
CREATE TABLE ancestry (
  ans_id int, desc_id int
)

for each record of t,
  for each ancestor of its id,
    insert a record (ans_id, id) into ancestry


this can be maintained by application, or by triggers.

to get all ancestors of a particular id X:
SELECT name from t JOIN ancestry ON (id=ans_id) 
  WHERE desc_id=X;

to get descendents:
SELECT name from t JOIN ancestry ON (id=desc_id) 
  WHERE ans_id=X;

indexes on ancestry(ans_id) and ancestry(desc_id)
might be called for.
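A minimal insert-only trigger sketch for maintaining the ancestry table, assuming the table and column names above; deletes and re-parenting would need additional triggers:

```sql
-- hypothetical trigger function: assumes t(id, pnt_id) and ancestry(ans_id, desc_id)
CREATE OR REPLACE FUNCTION maintain_ancestry() RETURNS trigger AS $$
BEGIN
  IF NEW.pnt_id IS NOT NULL THEN
    -- the new node's ancestors are its parent ...
    INSERT INTO ancestry (ans_id, desc_id) VALUES (NEW.pnt_id, NEW.id);
    -- ... plus everything that is an ancestor of the parent
    INSERT INTO ancestry (ans_id, desc_id)
      SELECT ans_id, NEW.id FROM ancestry WHERE desc_id = NEW.pnt_id;
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_maintain_ancestry AFTER INSERT ON t
  FOR EACH ROW EXECUTE PROCEDURE maintain_ancestry();
```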

hope this helps
gnari





Re: [SQL] UNICODE and PL/PGSQL

2006-11-27 Thread Ragnar
On mán, 2006-11-27 at 11:52 +0100, Bart Degryse wrote:
> Dear all,
> This works: SELECT '\x65'; => it returns the letter 'e'.

yes, but:
test=# select '\x'||'65';
 ?column?
----------
 x65
(1 row)


> When I do the following in PL/PGSQL 

>   FOR i IN 101..101 LOOP
> charset := charset || '\x' || to_hex(i);
>   END LOOP;


gnari





Re: [SQL] UNICODE and PL/PGSQL

2006-11-27 Thread Ragnar
On mán, 2006-11-27 at 15:54 +0100, Bart Degryse wrote:
> Hi gnari,
> I suppose your statement
> test=# select '\x'||'65';
> is done on some command line interface. I don't have that. I can only
> use some client program. I'm using EMS SQL Manager 2007 and pgAdmin
> III 1.3
> None of them accepts your statement.
> My point is that in my first version of the function (  charset :=
> charset || '\x65';  ) a unicode hex value for the letter e is stored
> in charset.

my point was that '\x65' as a literal
is read as 'e', but '\x' || '65' is
just the concatenation of two 2-character varchars

> When I try to do the same for a range of hex values (  FOR i IN
> 101..101 LOOPcharset := charset || '\x' || to_hex(i);  )

you might be able to do what you want with
set_byte() and encode().

CREATE OR REPLACE FUNCTION myfunction()
  RETURNS varchar AS
$x$
  DECLARE x bytea := '     ';  -- 5 bytes, one per loop iteration
  BEGIN
    FOR i IN 101..105 LOOP
      x := set_byte(x, i-101, i);
    END LOOP;
    RETURN encode(x, 'escape');
  END;
$x$
 LANGUAGE plpgsql;

select myfunction();  -- returns 'efghi'


gnari





Re: [SQL] UNICODE and PL/PGSQL

2006-11-28 Thread Ragnar
On þri, 2006-11-28 at 09:14 +0100, Bart Degryse wrote:
> Yes, I tried and it didn't work. PostgreSQL surely makes something of
> it, but not the right characters.

maybe you should show us exactly what you did, what
you got, and what you expected

> Unless Markus can make his idea using "decode" work, this might be
> something pl/pgsql cannot do.

did you try the function I posted yesterday elsewhere
in this thread?

gnari





Re: [SQL] Using Control Flow Functions in a SELECT Statement

2006-12-05 Thread Ragnar
[
removing a bunch of probably uninterested people from CC
]

On mán, 2006-12-04 at 22:12 +0530, Ashish Ahlawat wrote:
> Hi Team 
> 
> Thanks 
> 
> For your prompt response. But the USING CASE issue is still not
> resolved. Oracle prompts the same error.

this is a postgresql mailing list, but
I believe that even with oracle, the case statement
should work. a quick google found
http://www.oracle-base.com/articles/9i/Case9i.php


gnari





Re: [SQL] join and sort on 'best match'

2006-12-13 Thread Ragnar
On mið, 2006-12-13 at 10:26 +0100, Dirk Griffioen wrote:

> I have been breaking my head on the following problem: how to join 2
> tables and sort the results on the best match.
> - there are 3 tables, items, tags and items_tags. The items_tags table
> links items to tags.
> - I have one item which has certain tags, and I want to look up all
> the other items that have those tags as well

looks to me like you want to join:
items->item_tags->tags->item_tags->items

so the basic select is:
SELECT *
  FROM  items AS i1
   JOIN items_tags AS it1 ON (it1.item_id = i1.id)
   JOIN tags AS t ON (t.tag_id = it1.tag_id)
   JOIN items_tags AS it2 ON (it2.tag_id = t.tag_id)
   JOIN items AS i2 ON (i2.id = it2.item_id)
  WHERE i1.id=?

> - results should be sorted and presented by 'best match': first all
> the items that have 3 tags in common, then 2 and last 1

this would be: 

SELECT i1.id,i2.id,COUNT(*) as quantity
  FROM  items AS i1
   JOIN items_tags AS it1 ON (it1.item_id = i1.id)
   JOIN tags AS t ON (t.tag_id = it1.tag_id)
   JOIN items_tags AS it2 ON (it2.tag_id = t.tag_id)
   JOIN items AS i2 ON (i2.id = it2.item_id)
  WHERE i1.id=?
  GROUP by i1.id,i2.id
  ORDER BY quantity DESC

> I thought I had found the solution (my test cases worked), but I now
> find cases that should be found by the query but are not.

if this does not work, please provide us with a counter example.

gnari





Re: [SQL] join a lot of columns of two tables

2006-12-14 Thread Ragnar
On fim, 2006-12-14 at 12:01 +0100, ivan marchesini wrote:
> Dear Postgres Users,
> I have 2 tables...
> each one has a column called ID (primary keys of each table)
> the values into each ID column are exactly the same.
> 
> each table has a lot of other columns (around 50 for each table)
> 
> I would like to create a table containing the ID (only one column
> obviously) and all the other columns of both tables...
> 
> a simple equi join for each column is simple but how can I join
> completely the two table on the bases of the column ID???
> 
> probably it is a simple question but I don't know how to solve this very
> simple problem quikly...  :-(
> thanks

select * from t1 join t2 USING (id);

gnari





Re: [SQL] Need to subtract values between previous and current row

2006-12-15 Thread Ragnar
On fös, 2006-12-15 at 18:27 +0600, Partha Guha Roy wrote:
> Hi,
> 
> I have a table that contains data like this:
> 
> ID   ATIME (MM/dd/yyyy)
> ==   ==================
> 1    10/12/2006
> 2    10/14/2006
> 3    10/18/2006
> 4    10/22/2006
> 5    10/30/2006
> 
> Now I need a query that will subtract atime of row 1 from row 2, row2
> from row3  and so on...

if there are no missing IDs you can use a self join
SELECT t1.atime, t2.atime
FROM t AS t1
 JOIN t AS t2 ON (t1.ID = t2.ID + 1)
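the subtraction itself could then look like this (a sketch; subtracting one date from another yields an integer number of days):

```sql
SELECT t2.ID, t2.atime - t1.atime AS days_between
  FROM t AS t1
  JOIN t AS t2 ON (t2.ID = t1.ID + 1);
```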


otherwise, you could use a loop in a pl/pgsql function


gnari





Re: [SQL] Need to subtract values between previous and current row

2006-12-15 Thread Ragnar
On fös, 2006-12-15 at 19:21 +0600, Partha Guha Roy wrote:

> Thanks for everyones email. Let me clarify a little bit more.
> 
> Actually there is a state change of an entity. now the results are
> stored in the following manner:
> 
> CID  ATIME       STATE
> 101  12/10/2006  1
> 101  12/12/2006  2
> 101  12/14/2006  1
> 101  12/17/2006  2
> 102  12/14/2006  1
> 102  12/16/2006  2
> 102  12/18/2006  3
> 
> Now I have to find which CID stayed on which STATE for how long. 
> 
> Lets assume today is the 20th.
> So the result would be:
> 
> CID  DURATION  STATE
> 101  2         1
> 101  2         2
> 101  3         1
> 101  3         2
> 102  2         1
> 102  2         2
> 102  2         3
> 
> The constraints are:
> I can't use any plpgsql or other function.
> There can be gaps in CID.
> No sequence can be created. 

those are constraints indeed.
what do you mean by 'can't use any plpgsql or other function'?
  no user-defined functions, or no functions at all?
can you use temp tables?
what version of postgres is this?

actually, this looks like a classical client-side problem.

gnari





Re: [SQL] Need to subtract values between previous and current row

2006-12-15 Thread Ragnar
On fös, 2006-12-15 at 06:01 -0800, Richard Broersma Jr wrote:
> > CID  ATIME       STATE
> > 101  12/10/2006  1
> > 101  12/12/2006  2
> > 101  12/14/2006  1
> > 101  12/17/2006  2
> > 102  12/14/2006  1
> > 102  12/16/2006  2
> > 102  12/18/2006  3
> 
> select A.cid, (A.atime - max(B.atime)) duration, A.state
> from table A join table B
> on (A.atime > B.atime and A.cid = B.cid)
> group by A.atime, A.cid, A.state;

not bad, except you miss the initial state for each cid,
and I do not see how you get the final duration.

this inspired me:

test=# create table t (cid int, atime date, state int);
CREATE TABLE
test=# insert into t values (101,'2006-12-10',1);
INSERT 0 1
test=# insert into t values (101,'2006-12-12',2);
INSERT 0 1
test=# insert into t values (101,'2006-12-14',1);
INSERT 0 1
test=# insert into t values (101,'2006-12-17',2);
INSERT 0 1
test=# insert into t values (102,'2006-12-14',1);
INSERT 0 1
test=# insert into t values (102,'2006-12-16',2);
INSERT 0 1
test=# insert into t values (102,'2006-12-18',3);
INSERT 0 1
test=# select A.cid, 
  (min(B.atime)-A.atime) as duration,
  A.state 
   from t as A
join (select * from t
   union all
  select distinct on (cid) cid,
  '2006-12-20'::date,0 from t
 ) as B 
 on (A.atime < B.atime and A.cid = B.cid)
   group by A.atime, A.cid, A.state
   order by a.cid,a.atime;

 cid | duration | state
-----+----------+-------
 101 |2 | 1
 101 |2 | 2
 101 |3 | 1
 101 |3 | 2
 102 |2 | 1
 102 |2 | 2
 102 |2 | 3
(7 rows)


gnari





Re: [SQL] join and sort on 'best match'

2006-12-15 Thread Ragnar
On fös, 2006-12-15 at 22:10 +0530, Ashish Ahlawat wrote:
>  
>  
> Hi team
>  
> I was just asking that If table ITEM3 has all the columns then why we
> need to have a join ?? even we put a sorting very easily.

> On 12/14/06, Markus Schaber <[EMAIL PROTECTED]> wrote: 
> Hi, Ashish,
> 
> "Ashish Ahlawat" <[EMAIL PROTECTED]> wrote:
> 
> > if table *Item 3 : news, nature, greenpeace, whale has all
> clmn y v need
> > join ??*

it seems these were not columns of a table.

the organisation was like this:

table items:

id   name
---------
1    foo
2    bar
3    item3


table tags:

tag_id   text
-------------
1        news
2        nature
3        tennisballs
4        greenpeace
5        cannibals
6        whale

and table items_tags:

item_id  tag_id
---------------
3        1
3        2
3        4
3        6
...


so table items needs to be joined to table tags via 
table items_tags to retrieve the connections from item3
to news, nature, greepeace and whales.

gnari





Re: [SQL] Unions and Grouping

2006-12-15 Thread Ragnar
On fös, 2006-12-15 at 16:17 -0600, Aaron Bono wrote:
> On 12/15/06, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Aaron Bono" <[EMAIL PROTECTED]> writes:
> > My question, what do the SQL Specifications say should
> happen on a Union?
> > Is it supposed to eliminate duplicates even WITHIN the
> individual queries 
> > that are being unioned?
> 
> Yes.  SQL92 7.10 saith:
> [snip SQL92 standardese]

> 
> So if I don't want the duplicated WITHIN the two queries to be
> eliminated, I use UNION ALL?

with UNION ALL, no duplicates will be discarded, either
"within" or "between" the queries.

if I understand you, you want only rows that appear in both
Q1 and Q2 to be deduplicated, but not, for example,
duplicates within Q1 that are not in Q2.

if this is what you want, then I think you may not have 
thought it through. what about a row that is twice in
Q1 and three times in Q2? how many copies of this row
should be in the result?

gnari






Re: [SQL] SELECT with WHERE clause by column number

2006-12-18 Thread Ragnar
On mán, 2006-12-18 at 09:17 -0800, Carlos Santos wrote:

> 
> SELECT id FROM myTable WHERE column-number = 'value';
> (PS: The id column is the primary key of myTable).
> 
> That is a select using column number in the WHERE clause what don't
> exists in SQL.
> 
> I need this because there's a situation in my program where I don't
> have the column name.
> I've solved that querying Postgresql the name of the column with that
> number, and then creating  the SELECT query.
> But this solution is slow... two database calls...
> 
> Can I do this with a single query or in a faster way through SQL, an
> internal function or through a Procedural Language?

this can be done with the more dynamic procedural languages.

this should be possible too with pl/pgsql using your first query
+ EXECUTE
this is still 2 queries, but only one client round-trip.

this can also be done in one SQL query with some creative
misuse of an UNION ALL within a subquery, if you know the 
number of columns in your table.
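A hedged plpgsql sketch of the EXECUTE approach (the function name, the table name, and the int id column are assumptions based on the original question):

```sql
-- hypothetical helper: looks up the column name by ordinal position,
-- then runs the query dynamically, all in one client round-trip
CREATE OR REPLACE FUNCTION select_by_colnum(colnum int, val text)
  RETURNS SETOF int AS $$
DECLARE
  colname text;
  r record;
BEGIN
  SELECT column_name INTO colname
    FROM information_schema.columns
   WHERE table_name = 'mytable' AND ordinal_position = colnum;
  FOR r IN EXECUTE 'SELECT id FROM mytable WHERE '
                   || quote_ident(colname) || ' = ' || quote_literal(val)
  LOOP
    RETURN NEXT r.id;
  END LOOP;
  RETURN;
END;
$$ LANGUAGE plpgsql;
```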

gnari







Re: [SQL] Fetching BLOBs

2006-12-20 Thread Ragnar
On mið, 2006-12-20 at 08:55 +0530, Ashish Ahlawat wrote:

> I have a very intersting question to all of you. Pls help me to build
> this query, I want to fetch more that 70,000 BLOB from different
> customer servers. the issue is there are some BOLB files with common
> names on all servers. So I want merge these files into a single
> BLOB during  fetching data. I am able to fetch the  BLOB data from all
> cust servers but unfortunatelly it overwrite previous file. 
>  
> So pls provide any simple query format for the same, assuming two
> table tab1 & tab 2

a bit of friendly advice on how to ask on these lists:

If you do not get any useful replies to a question, you
should follow up with more details, more specific questions,
show what you have tried so far, and tell us in what way
that did not work.

It is unlikely that repeating the same question verbatim 
3 times in different threads, CC'd to some random list
users will give better results than the first posting did.

Remember that many of us are not native English speakers,
so describe your problem as clearly and precisely as possible.

For example, it is totally unclear to me what your actual
problem is.

gnari





Re: [SQL] join/group/count query.

2006-12-20 Thread Ragnar
On mið, 2006-12-20 at 10:12 +, Gary Stainburn wrote:
> Hi folks.  I have the following query which works for me at the moment.  
> However, o_model refers to a table stock_models which has one record for each 
> model type. At the moment if I add another record to the stock_models I have to 
> amend the select.  Is it possible to make this automatic by joining the 
> stock_models data somehow?
> 
> select o_p_id as key, o_p_id as p_id, p_name, count(o_p_id) as total,
>count (case when o_model = 5 then 1 else NULL end) as KA,
>count (case when o_model = 10 then 1 else NULL end) as Focus,
>count (case when o_model = 13 then 1 else NULL end) as C_Max,
>count (case when o_model = 16 then 1 else NULL end) as S_Max,
>count (case when o_model = 20 then 1 else NULL end) as Fiesta,
>count (case when o_model = 25 then 1 else NULL end) as Fusion,
>count (case when o_model = 30 then 1 else NULL end) as Mondeo,
>count (case when o_model = 35 then 1 else NULL end) as Galaxy,
>count (case when o_model = 40 then 1 else NULL end) as Ranger,
>count (case when o_model = 50 then 1 else NULL end) as Connect,
>count (case when o_model = 60 then 1 else NULL end) as Transit,
>count (case when o_model = 70 then 1 else NULL end) as Fiesta_Van
>from order_details 
>where date_trunc('month',CURRENT_DATE) = date_trunc('month',o_ord_date)
>group by o_p_id, p_name;

if I understand correctly, you want one column in your output
for each row in the stock_models table

you can do this with the crosstabN function in the contrib 
module 'tablefunc', or by making your own procedural language
function.
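A hedged sketch of the tablefunc route, using crosstab's two-argument form (the stock_models column name and the output column list are assumptions; the column list still has to be spelled out, which is the part that cannot be fully automatic):

```sql
-- sketch only: assumes contrib/tablefunc is installed and that
-- stock_models has a column o_model matching order_details.o_model
SELECT * FROM crosstab(
  'SELECT o_p_id, o_model, count(*)
     FROM order_details
    GROUP BY 1, 2
    ORDER BY 1, 2',
  'SELECT o_model FROM stock_models ORDER BY o_model')
AS ct(p_id int, ka bigint, focus bigint /* ... one column per stock_models row */);
```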

gnari






Re: [SQL] Changing character set when the damage is done

2006-12-22 Thread Ragnar
On lau, 2006-12-23 at 00:12 +0100, Alexis Paul Bertolini wrote:

> I set up a DB with default values and it now uses the SQL_ASCII 
> character set (as per "show client_encoding;").
> 
> I have copied in quite a lot of data from various Access databases but 
> only now have I realized that all accented vowels show up differently 
> from what I expected (I dare not cite some examples a they would 
> doubtlessly show up wrong in the mail).
> 
> The contents are text written in Italian, so it's just the 5 vowels with 
> an acute or grave accent. The user base is all Italian, so a quick and 
> dirty trick is perfectly viable.
> 
> Any chance of running some smart UPDATE queries to put things right? Or 
> elaborating on the fields whilst performing a SELECT query?

depends on whether all imports have been in the same encoding 
or not. SQL_ASCII basically accepts and stores the characters
without interpretation, so if all imports were done with one
client_encoding, you should be able to read the values again
using the same encoding.

are all the instances of these 5 characters the same everywhere?
in that case there is a good chance that you can fix the values
with UPDATES

when you say 
'all accented vowels show up differently from what I expected'
do you mean in psql or some other tool?
do these show as 1 or 2 characters?

gnari





Re: [SQL] How to reduce a database

2006-12-29 Thread Ragnar
On fös, 2006-12-29 at 07:09 -0800, Mario Behring wrote:

> Anyway, the openNMS database is very large now, more than 25GB
> (considering all tables) and I am starting to have disk space issues.
> The openNMS product has a vacuumdb procedure that runs every 24 hours
> and reads a vacuumd-configuration.xml file for parameters on what to
> do. 

> The problem is that this process is not reducing the database size.
> What I need to do is to delete some records based on timestamp fileds
> or something like that. I don't know how to do it though.

before you start deleting random rows in a database you are not
too familiar with, let us start by trying to determine your
actual problem.

you should issue a manual VACUUM VERBOSE, and look at the
output of that first. possibly you will find that the database 
is suffering from bloat due to too small fsm settings.

if that is the case, a VACUUM FULL might be indicated, but
note that it will take exclusive locks, so you should
schedule that for some time where you can afford downtime.

if this is not enough, then you can take a look at deleting
rows, but you would probably get better advice on that
from the OpenNMS community. I assume they have
mailinglists or forums. 


> 
>  DELETE FROM node WHERE node.nodeType = 'D';
>  DELETE FROM ipInterface WHERE ipInterface.isManaged = 'D';
>  DELETE FROM ifServices WHERE ifServices.status = 'D';
>  DELETE FROM events WHERE NOT EXISTS (SELECT svclosteventid FROM outages
>    WHERE svclosteventid = events.eventid UNION SELECT svcregainedeventid
>    FROM outages WHERE svcregainedeventid = events.eventid UNION SELECT
>    eventid FROM notifications WHERE eventid = events.eventid) AND
>    eventtime < now() - interval '6 weeks';
> 

these all seem to be deletes (no VACUUMs), so you might want
to check if vacuumd is actually running to be 100% sure.

also, what version postgres is this ?

gnari

> 




Re: [SQL] hi

2007-04-24 Thread Ragnar
On þri, 2007-04-24 at 17:39 +0530, Penchalaiah P. wrote:
> Hi 
> 
> I have the data like this in temp table
> 
> SQL> Select sno, value from temp;

you seem to be assuming a specific order for this.

gnari




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [SQL] How to find missing values across multiple OUTER JOINs

2007-06-06 Thread Ragnar
On þri, 2007-06-05 at 23:55 -0700, Drew wrote:
> I'm having troubles using multiple OUTER JOINs, which I think I want  
> to use to solve my problem.
> 
> My problem is to find all non-translated string values in our  
> translations database, given the 4 following tables:
> 
> SOURCE (source_id PRIMARY KEY, language_id, value, ...)
> TRANSLATION (translation_id PRIMARY KEY, language_id, value, ...)
> TRANSLATION_PAIR (source_id, translation_id)
> LANGUAGE(language_id PRIMARY KEY, name)
> 
> This seems to me the appropriate situation for using OUTER JOINs, but  
> I cannot figure out how to get the null rows without the not-null rows.
> 
> Here's my best guess at this query:
> SELECT  
> s.source_id,tp.translation_pair_id,t.translation_id,t.language_id,  
> l.name   
>  
>  FROM source s
> LEFT OUTER JOIN translation_pair tp USING(source_id)
> LEFT OUTER JOIN translation t ON tp.translation_id = t.translation_id  
> AND t.translation_id is null
move this condition out of the ON clause into a WHERE clause
> )
> RIGHT OUTER JOIN language l on l.language_id = t.language_id;


SELECT s.source_id,
   tp.translation_pair_id,
   t.translation_id,
   t.language_id,
   l.name
FROM source s
 LEFT OUTER JOIN translation_pair tp USING(source_id)
 LEFT OUTER JOIN translation t ON tp.translation_id = t.translation_id
 RIGHT OUTER JOIN language l 
 on l.language_id =t.language_id
WHERE t.translation_id is null;


(i did not check the rest of your query)
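for comparison, an anti-join with NOT EXISTS can express "missing
(source, language) pairs" directly; this is an untested sketch against
the table definitions you posted:

```sql
SELECT s.source_id, l.name
FROM source s
CROSS JOIN language l
WHERE NOT EXISTS (
    SELECT 1
    FROM translation_pair tp
    JOIN translation t ON t.translation_id = tp.translation_id
    WHERE tp.source_id = s.source_id
      AND t.language_id = l.language_id
);
```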

hope this helps,
gnari





Re: [SQL] join problem

2007-06-21 Thread Ragnar
On fim, 2007-06-21 at 08:46 -0500, A. R. Van Hook wrote:
> if I query for the total deposit using
>select sum(deposit)
>from invoice
>where cusid = 2128
> 
> I also get 1179.24, also the correct amount
> 
> 
> If I try an inclusive query using the following:
>  select
>   sum(i.rowtot + i.tax) as tot,
>   sum(v.deposit) as deposit
> from cai c
> join invoice   v on (v.cusid = c.cusid)
> left join invoiceitems i on (v.ivid = i.ivid)
> where v.cusid = 2128
> group by 
>   c.cusid
> I get
> tot| deposit
> --+-
>  1179.240 | 2819.24

you are adding the invoice deposit once for each item

gnari





Re: [SQL] join problem

2007-06-23 Thread Ragnar
On lau, 2007-06-23 at 04:15 -0500, A. R. Van Hook wrote:
> Ragnar wrote:
> > On fim, 2007-06-21 at 08:46 -0500, A. R. Van Hook wrote:
> >> If I try an inclusive query using the following:
> >>  select
> >>   sum(i.rowtot + i.tax) as tot,
> >>   sum(v.deposit) as deposit
> >> from cai c
> >> join invoice   v on (v.cusid = c.cusid)
> >> left join invoiceitems i on (v.ivid = i.ivid)
> >> where v.cusid = 2128
> >> group by 
> >>   c.cusid
> >> I get
> >> tot| deposit
> >> --+-
> >>  1179.240 | 2819.24
> >
> > you are adding the invoice deposit once for each item
> >
> What is the correct query???

sum each invoice separately, and then group the sums by cusid.

for example:

select vcusid as cusid,
   sum(vtot) as tot,
   sum(vdeposit) as deposit
from (
   select
  v.cusid as vcusid,
  v.ivid as vivid,
  sum(i.rowtot + i.tax) as vtot,
  sum(v.deposit)/count(*) as vdeposit
   from invoice as v 
left join invoiceitems as i  on (v.ivid = i.ivid)
   group by v.cusid,
v.ivid
 ) as vsums
where vsums.vcusid=2128
group by vsums.vcusid


hope this helps
gnari



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [SQL] join problem

2007-06-24 Thread Ragnar
On sun, 2007-06-24 at 06:14 -0500, A. R. Van Hook wrote:

[ in the future, please avoid top-posting, as it is annoying 
to have to rearrange lines when replying ]

> Ragnar wrote:
> > On lau, 2007-06-23 at 04:15 -0500, A. R. Van Hook wrote:
> >> Ragnar wrote:
> >>> On fim, 2007-06-21 at 08:46 -0500, A. R. Van Hook wrote:
> >>>   
> >>>> [problem involving a join, with sum on base table column]

> >>> you are adding the invoice deposit once for each item
> >>>   
> >> What is the correct query???
> >
> > sum each invoice separately, and then group the sums by cusid.
> >
> > for example:
> >   ...
> >   sum(v.deposit)/count(*) as vdeposit
> >from invoice as v 
> > left join invoiceitems as i  on (v.ivid = i.ivid)
> >group by v.cusid,
> > v.ivid
> >   ...

> Works great.
> Can you enlighten me as why the deposit is divided by the number of rows?

maybe an example would clarify a bit:

say you have only one invoice (deposit=100), 
with 2 invoiceitems (both with rowtot=50)

"select sum(deposit) from invoice" returns 100
"select sum(rowtot) from invoiceitems" returns 100

the query:
  select ivid,deposit,rowtot 
 from invoice 
 left join invoiceitems 
  on (invoice.ivid = invoiceitems.ivid)
returns the rows:
ivid | deposit | rowtot
1| 100 | 50
1| 100 | 50

if you just select a SUM(deposit) on that join,
you get 200, not 100 because the value is repeated
for each invoiceitem.

hope this helps

gnari





Re: [SQL] yet another simple SQL question

2007-06-25 Thread Ragnar
On mán, 2007-06-25 at 12:44 -0500, Joshua wrote:
> I have a column that looks like this
> 
> firstname
> -
> John B
> Mark A
> Jennifer D
> 
> Basically I have the first name followed by a middle initial. Is there a 
> quick command I can run to strip the middle initial?

how about:

  select regexp_replace(firstname,' .*','') as firstname 
  from footable;

or:

  select substring(firstname FROM '(.*) ') as firstname 
  from footable;


gnari





Re: [SQL] Using case or if to return multiple rows

2007-07-12 Thread Ragnar
On fim, 2007-07-12 at 12:15 +0530, Ashish Karalkar wrote:
> I want to select data from two diffrent table based on third tables
> column
> somthing like:
>  
>  
> select case when  t3.date='' then
> select * from table1
> else 
> select * from table 2
>  from table3  t3 where t3.date='x'
>  
> Problem is that I have to do it in Plain SQL.

your problem is not quite clear.
do you want to output all rows from table1 (or table2)
for each row of table3? 
or does table3 only have one row, with date='x' ?
is '' the same date as 'x' in your example?
if so, how can table2's columns ever be selected?

are you looking for something like:

select * from table1 
  where (select date from table3)=''
UNION ALL
select * from table2 
  where (select date from table3)<>''

this might work if table1 and table2 have compatible schemas, and table3
only has 1 row, and table3.date is NOT NULL

if this is not what you are looking for, you need to be more clear.

gnari






---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] data dependent sequences?

2007-07-15 Thread Ragnar
On sun, 2007-07-15 at 09:28 -0600, Stuart McGraw wrote:
> Advice requested :-)  I have a table like:
> 
> CREATE TABLE items (
> id INT,
> typ INT...
> PRIMARY KEY (seq,typ));
> 
> I would like 'id' to be like a SERIAL except that I
> want independent sequences for each value of 'typ'.

what possible advantage could there be to that?

if you need gapless series, then sequences (and serial)
are not adequate anyways.

just use one sequence.

gnari





Re: [SQL] Range of caracters

2007-08-07 Thread Ragnar
On þri, 2007-08-07 at 10:07 -0700, Wilton Wonrath wrote:

> Data Table
> 
> Inicial  -  Final
> 9C2HB02107R008000 9C2HB02107R008200
> 
> FAIXA1FAIXA100
> 
> I´m doing the following query:
> 
> SELECT recallfxchassi.* FROM recallfxchassi 
> LEFT JOIN recall ON recallfxchassi.idrecall = recall.id WHERE 
> ('FAIXA2' BETWEEN chassiinicial AND chassifinal);
> 
> This is not returning anything. I believe that is because it´s check´s
> caracter by caracter and 2 is bigger than 1. But in a real situation
> I will need that return something (FAIXA2 is between FAIXA1 and
> FAIXA100).
> 
> Does anyone knows if exists some way or command in PostgreSQL that
> could solve this "problem" ?

sounds like you need to normalize your schema, but it is possible.

if the prefix is fixed then you can do 
WHERE substr('FAIXA2',6)::integer 
  between substr(chassiinicial,6)::integer 
  and substr(chassifinal,6)::integer;


if the prefix is not fixed, but it's length is, and you are using a
sufficiently recent version of postgresql, you can do:
WHERE (substr('FAIXA2',1,5),
   substr('FAIXA2',6)::integer) 
   between
   (substr(chassiinicial,1,5),
substr(chassiinicial,6)::integer) 
   and (substr(chassifinal,1,5),
substr(chassifinal,6)::integer);

if the prefix length is not fixed, you will have
to do some juggling:
WHERE (substring('FAIXA2' from '^[^0-9]*'),
   substring('FAIXA2' from '[0-9]+$')::integer) 
   between
   (substring(chassiinicial from '^[^0-9]*'),
substring(chassiinicial from '[0-9]+$')::integer) 
   and (substring(chassifinal from '^[^0-9]*'),
substring(chassifinal from '[0-9]+$')::integer);


gnari





Re: [SQL] Using function like where clause

2007-08-07 Thread Ragnar
On mán, 2007-08-06 at 16:44 -0300, Ranieri Mazili wrote:
> 1) Can I use a function that will return a string in a where clause like 
> bellow?
> 
> select *
> from table
> where my_function_making_where()
>   and another_field = 'another_think'

you could have your function return a boolean instead of a string
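something like this sketch (the function body is a made-up stand-in
for your real condition):

```sql
CREATE OR REPLACE FUNCTION my_condition(text) RETURNS boolean AS $$
    SELECT $1 LIKE 'foo%';  -- stand-in for the real WHERE logic
$$ LANGUAGE sql;

SELECT *
FROM mytable
WHERE my_condition(some_field)
  AND another_field = 'another_think';
```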


> 2) Can I use a function that will return a string to return the list of 
> columns that I want to show like below?
> 
> select my_function_making_list_of_columns()
> from table
> where field_test = 'mydatum'

no

gnari



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] Returning array of IDs as a sub-query with group-by

2007-08-25 Thread Ragnar
On lau, 2007-08-25 at 17:55 +0200, Andreas Joseph Krogh wrote:
> On Saturday 25 August 2007 17:10:57 Andreas Joseph Krogh wrote:

[snip]

> >  count | item_id | price | item_id_array
> > ---+-+---+---
> >  3 |   1 |   100 | {1,2,3}
> >  6 |   1 |   200 | {4,5,6,7,8,9}
> >  2 |   2 |   200 | {10,11}
> >
> > I tried this query which complains about an ungrouped column:
> >
> > SELECT COUNT(il.price), i.id AS item_id, il.price,
> >  ARRAY(SELECT a.id FROM item_log a WHERE a.id = il.id) AS item_id_array
> >   FROM item i, item_log il WHERE i.id = il.item_id GROUP BY il.price, i.id;
> >
> > ERROR:  subquery uses ungrouped column "il.id" from outer query
> >
> > Any hints?
> 
> I found the following CREATE AGGREGATE suggestion in the PG-docs:

[aggregate solution snipped]

> If someone knows of a way without introducing a new AGGREGATE I'm still 
> interrested.

you can allways do the ARRAY(SELECT...) outside the grouping:
# select *,(select ARRAY(
 SELECT a.id 
 FROM item_log as a 
 WHERE foo.item_id=a.item_id
   AND foo.price=a.price
 )
   ) AS item_id_array 
from (
   select count(*),item_id, price 
   from item_log 
   group by item_id, price
 ) as foo;

 count | item_id | price | item_id_array 
---+-+---+---
 3 |   1 |   100 | {1,2,3}
 6 |   1 |   200 | {4,5,6,7,8,9}
 2 |   2 |   200 | {10,11}
(3 rows)


but i suspect the aggregate will perform better
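for reference, the aggregate from the docs looks roughly like this
(untested here):

```sql
CREATE AGGREGATE array_accum (
    sfunc = array_append,
    basetype = anyelement,
    stype = anyarray,
    initcond = '{}'
);

SELECT count(*), item_id, price, array_accum(id) AS item_id_array
FROM item_log
GROUP BY item_id, price;
```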

gnari



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] Problem in dynamic query execution in plpgsql

2008-07-12 Thread Ragnar

On lau, 2008-07-12 at 14:45 +0530, Anoop G wrote:
> Hai Kretschmer ,
>Thanks for your reply, it help to solve my problem. I have few for
> doubts regarding dynamic query
...
> vchr_query:= 'SELECT mf,sf,(mf – mf * comm /100) – (sf – sf *
> comm/100) as flt_claim';
the '–' characters here probably are not what you think.
try with '-'

...
> but when I am traing to run this function I got the following error
> 
> ERROR:  syntax error at or near "–" at character 18
> QUERY:  SELECT mf,sf,(mf – mf * comm /100) – (sf – sf * comm/100) as
> flt_claim

gnari




-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] How to creat tables using record ID in for loop

2008-08-06 Thread Ragnar

On mið, 2008-08-06 at 18:52 +, CHUNRIMACHUNRIMA wrote:
> "96784002";"mylocation #1"
> "02385067";"mylocation #2"
> "01734056";"mylocation #3"
...
> 3. What I want to do is to create tables with staid from mytest table
> using for loop.
...
> +++Example+++
> 
> CREATE TABLE s06784000 (
> staid   varchar(50),
> valreal,  -- streamflow
> datedate
> )
> 
> WITHOUT OIDS;

what about a query that generates texts like 
  'CREATE TABLE s06784000 .;'
for each row of your table? then you can either
feed the output to psql, or just EXECUTE them
in a PL/pgSQL function.
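a rough sketch of the generating query (untested; the column list is
copied from your example):

```sql
SELECT 'CREATE TABLE s' || staid
       || ' (staid varchar(50), val real, date date);'
FROM mytest;
```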

> 
> 4. I must create tables using for loop because I have millions of
> records. 

you want to create millions of tables? you should do a bit of
performance testing before you commit to this, as I
imagine that you might get into scalability problems.

gnari





Re: [SQL] DELETE with JOIN

2008-08-07 Thread Ragnar

On fim, 2008-08-07 at 09:14 -0700, [EMAIL PROTECTED] wrote:
> I want to delete with a join condition.  Google shows this is a common
> problem, but the only solutions are either for MySQL or they don't
> work in my situation because there are too many rows selected.  I also
> have to make this work on several databases, includeing, grrr, Oracle,
> so non-standard MySQL "solutions" are doubly aggravating.
> 
> DELETE FROM a WHERE a.b_id = b.id AND b.second_id = ?
> 
> I have tried to do this before and always found a way, usually
> 
> DELETE FROM a WHERE a.b_id IN (SELECT id FROM b WHERE second_id = ?)

did you look at DELETE FROM table1 USING table2 WHERE ... ?
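in postgres that would look something like this (note that
DELETE ... USING is not standard SQL, so it will not help with Oracle):

```sql
DELETE FROM a
USING b
WHERE a.b_id = b.id
  AND b.second_id = ?;
```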

gnari





Re: [PERFORM] [SQL] OFFSET impact on Performance???

2005-01-25 Thread Ragnar Hafstað
On Thu, 2005-01-20 at 11:59 -0500, Greg Stark wrote:

> The best way to do pages for is not to use offset or cursors but to use an
> index. This only works if you can enumerate all the sort orders the
> application might be using and can have an index on each of them.
> 
> To do this the query would look something like:
> 
> SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50
> 
> Then you take note of the last value used on a given page and if the user
> selects "next" you pass that as the starting point for the next page.

this will only work unchanged if the index is unique. imagine , for
example if you have more than 50 rows with the same value of col.

one way to fix this is to use ORDER BY col,oid

gnari



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [PERFORM] [SQL] OFFSET impact on Performance???

2005-01-25 Thread Ragnar Hafstað
On Thu, 2005-01-20 at 19:12 +, Ragnar Hafstað wrote:
> On Thu, 2005-01-20 at 11:59 -0500, Greg Stark wrote:
> 
> > The best way to do pages for is not to use offset or cursors but to use an
> > index. This only works if you can enumerate all the sort orders the
> > application might be using and can have an index on each of them.
> > 
> > To do this the query would look something like:
> > 
> > SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50
> > 
> > Then you take note of the last value used on a given page and if the user
> > selects "next" you pass that as the starting point for the next page.
> 
> this will only work unchanged if the index is unique. imagine , for
> example if you have more than 50 rows with the same value of col.
> 
> one way to fix this is to use ORDER BY col,oid

and a slightly more complex WHERE clause as well, of course
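something along these lines (sketch):

```sql
SELECT * FROM tab
WHERE col > ?
   OR (col = ? AND oid > ?)
ORDER BY col, oid
LIMIT 50;
```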

gnari



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [SQL] Making NULL entries appear first when ORDER BY

2005-02-15 Thread Ragnar Hafstað
On Wed, 2005-02-16 at 00:55 +, Andreas Joseph Krogh wrote:

> SELECT start_date, start_time, end_time, title
> FROM onp_crm_activity_log
> WHERE start_date IS NOT NULL
> ORDER BY start_date ASC, start_time ASC;
> 
>  start_date  | start_time  |  end_time   |  title
> -+-+-+---
>  2005-02-03 00:00:00 | 2005-02-03 08:00:00 | | Something
>  2005-02-03 00:00:00 | 2005-02-03 09:00:00 | 2005-02-03 12:00:00 | Something
>  2005-02-03 00:00:00 | | | Something
> 
> 
> Now, as you see, touples with NULL in the "start_time"-field appear "after" 
> the others. I would like to make all entries where start_time IS NULL apear 
> *before* all the others. Any idea how to achieve this?

how about ORDER BY start_date , 
  COALESCE(start_time,'0001-01-01 00:00:00'::timestamp)
?
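(newer postgres releases, 8.3 and later, also accept an explicit
NULLS FIRST, which would make this:)

```sql
SELECT start_date, start_time, end_time, title
FROM onp_crm_activity_log
WHERE start_date IS NOT NULL
ORDER BY start_date ASC, start_time ASC NULLS FIRST;
```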

gnari



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] pg primary key bug?

2005-02-22 Thread Ragnar Hafstað
On Tue, 2005-02-22 at 10:33 +0100, pginfo wrote:
> 
>   
> We are using  jdbc (jdbc driver from pg)  + jboss (java based
> application server) + connection pool (biult in jboss).
> ...
> Will vacuum full generate this problem if we have locked table in this
> time? (It is possible to have locked table in theory)

I do not know if this is relevant, but I have seen jboss applications
keep sessions in 'Idle in transaction' state, apparently with some
locks granted. Would such cases not interfere with vacuum?

gnari



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [SQL] schemas and paths with the alter statement

2005-02-22 Thread Ragnar Hafstað
On Tue, 2005-02-22 at 11:06 -0800, Theodore Petrosky wrote:
> I have to first admit that I am very green at this. I
> thought that one could refer to a table in a fully
> qualified path... public.testtable
> ...
> ALTER TABLE public.test ADD CONSTRAINT public.test_PK
> PRIMARY KEY (test);
> ...
> aswebtest=# ALTER TABLE public.test ADD CONSTRAINT
> public.test_PK PRIMARY KEY (test);
> ERROR:  syntax error at or near "." at character 46

it is not the first public that is the problem, but the
second one, as you might discover if you count 46
characters.

you cannot put the constraint in a different schema than
the table, it seems.

gnari







Re: [SQL] Read count ?

2005-02-26 Thread Ragnar Hafstað
On Thu, 2005-02-24 at 17:17 +0200, Aarni Ruuhimäki wrote:
> Hi,
> 
> Could someone please give a hint on how to query the following neatly ?
> 
> Get news from a news table that belong to a particular account, get segment 
> name from segments table for each news item and read count from read history 
> table that gets a news_id and timestamp insert every time the news is read. 
> Display everything by news count, most read news first ?
> 
> news_id 4, news_header, segment_name x, read 10 times
> news_id 2, news_header, segment_name y, read 8 times
> news_id 1, news_header, segment_name x, read 7 times
> news_id 3, news_header, segment_name x, read 0 times
> 
> news_table:
> news_id, account_id, segment_id, news_header, ...
> 
> segments_table:
> segment_id, account_id, segment_name
> 
> read_history_table:
> history_id, news_id, timestamp
> 

how about:

select news_id,news_header,segment_name,count(*) 
from news_table 
 natural join segments_table 
 natural join read_history_table
where account_id=? 
group by news_id,news_header,segment_name;

?
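note that the inner join drops news items that were never read; if you
need the "read 0 times" rows from your example, a LEFT JOIN variant
(untested sketch) would be:

```sql
SELECT n.news_id, n.news_header, s.segment_name,
       count(h.history_id) AS read_count
FROM news_table n
JOIN segments_table s ON s.segment_id = n.segment_id
LEFT JOIN read_history_table h ON h.news_id = n.news_id
WHERE n.account_id = ?
GROUP BY n.news_id, n.news_header, s.segment_name
ORDER BY read_count DESC;
```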

gnari





Re: [SQL] Serial and Index

2005-02-27 Thread Ragnar Hafstað
On Sun, 2005-02-27 at 12:54 +, Sam Adams wrote:
> I read at http://jamesthornton.com/postgres/FAQ/faq-english.html#4.15.1
> that when a serial is created then an index is created on the column.
> However I can't seem to find this anywhere in the PoistgreSQL manual. Is
> this true? Thanks.

no, this is not true.
however, defining the serial column as UNIQUE will.

test=# create table q2 (a serial unique,b text);
NOTICE:  CREATE TABLE will create implicit sequence "q2_a_seq" for
"serial" column "q2.a"
NOTICE:  CREATE TABLE / UNIQUE will create implicit index "q2_a_key" for
table "q2"
CREATE TABLE
test=#

feel free to inform the webmaster of that site, that the FAQ has been
updated since 2002.

here is a more authoritative copy:
http://www.postgresql.org/files/documentation/faqs/FAQ.html

gnari



---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] psql encoding problem

2005-03-02 Thread Ragnar Hafstað
On Mon, 2005-02-28 at 20:48 +, T E Schmitz wrote:

> INSERT INTO item (name,retail_price) VALUES  ('Cheese Soufflé',7.95,);
> 
> (I presume you see the accented character in *Soufflé*)
> 
> psql comes back with "invalid byte sequence for encoding "UNICODE": 0xe9"
> If I do this via DbVisualizer, the record is inserted fine.

might be the client encoding.

test=# CREATE DATABASE unitest with ENCODING='UNICODE';
CREATE DATABASE
test=# \connect unitest
You are now connected to database "unitest".
unitest=# create table a (n text);
CREATE TABLE
unitest=# insert into a values ('Cheese Soufflé is cool');
ERROR:  invalid byte sequence for encoding "UNICODE": 0xe92069
unitest=# set client_encoding='LATIN1';
SET
unitest=# insert into a values ('Cheese Soufflé is cool');
INSERT 7533697 1
unitest=# select * from a;
   n

 Cheese Soufflé is cool
(1 row)

unitest=#


gnari



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [SQL] How to cast VARCHAR to BYTEA and vice-versa?

2005-03-14 Thread Ragnar Hafstað
On Tue, 2005-03-08 at 07:31 -0800, Moran.Michael wrote:
> Hello all,
>  
> I have a table with a VARCHAR column that I need to convert to a BYTEA.
>  
> How do I cast VARCHAR to BYTEA?

have you looked at the encode() and decode() functions ?
http://www.postgresql.org/docs/7.4/interactive/functions-string.html
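one way to do the conversion in place (sketch; table and column names
are made up):

```sql
ALTER TABLE t ADD COLUMN v_bytea bytea;
UPDATE t SET v_bytea = decode(v_varchar, 'escape');

-- and back to text again:
SELECT encode(v_bytea, 'escape') FROM t;
```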

gnari





Re: [SQL] Scheme not dropping

2005-03-15 Thread Ragnar Hafstað
On Tue, 2005-03-15 at 18:44 +, Graham Vickrage wrote:
> I am dropping a database with an additional scheme other than public on
> version 7.3.2.
> 
> When I come to recreate the database with the same scheme it gives me
> the error: 
> 
> ERROR: namespace "xxx" already exists

does the scheme exist in template1 (or the database you
are using as template when you create database) ?

gnari





Re: [SQL] Query performance problem

2005-03-17 Thread Ragnar Hafstað
On Fri, 2005-03-18 at 10:49 +0530, Kenneth Gonsalves wrote:
> On Thursday 17 Mar 2005 7:35 pm, Richard Huxton wrote:
> 
> > Not necessarily. NOT NULL here helps to ensure you can add values
> > together without the risk of a null result. There are plenty of
> > "amount" columns that should be not-null (total spent, total
> > ordered etc).
> 
> that makes sense - but is it necessary to have a not null constraint 
> when there is a default value?

DEFAULT applies to INSERTs, NOT NULL applies to UPDATEs too.
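for example:

```sql
CREATE TABLE t (amount integer NOT NULL DEFAULT 0);
INSERT INTO t DEFAULT VALUES;   -- amount becomes 0 via the DEFAULT
UPDATE t SET amount = NULL;     -- rejected by the NOT NULL constraint
```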

gnari



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] A SQL Question About distinct, limit, group by, having,

2005-03-31 Thread Ragnar Hafstað
On Thu, 2005-03-31 at 13:14 +0800, Lin Kun Hsin wrote:
> below is the sql schema. i hope it will help. 
> 
> i want the top 3 score students in every class

this has been discussed before. a quick google gives me:
http://archives.postgresql.org/pgsql-sql/2004-04/msg00067.php


gnari





Re: [SQL] select & group by

2005-04-04 Thread Ragnar Hafstað
On Mon, 2005-04-04 at 12:16 +0200, Michael L. Hostbaek wrote:
> I've got a problem selecting some specific data from my table. Imagine
> the following rows:
> 
> part  | mfg   | qty   | price | eta   
> ---
> TEST1  ABC 10  100(No ETA, as item is in stock)
> TEST1  ABC 12  120 04/04
> TEST2  CBA 17  10  05/05
> TEST2  CBA 10  20 (No ETA, as item is in stock)
>
> I'd like my selection to produce the following result:
> 
> part  | mfg   | qty   | qty incoming  | highest price | eta
> -
> TEST1  ABC 10  12  120 04/04
> TEST2  CBA 10  17  20  05/05
> 
> Any clues on how to do this ? I kow the group by part, mfg, max(price) -
> but I do not know how to deal with the splitting up qty and stock qty
> and incoming qty.

use CASE. for example, something like:

select part,mfg,
   sum(CASE WHEN eta is NULL then qty ELSE 0 END) as qty,
   sum(CASE WHEN eta is NULL then 0 ELSE qty END) as "qty incoming",
   max(price) as "highest price",
   min(eta) as eta
group by part,mfg;

gnari





Re: [SQL] select & group by

2005-04-04 Thread Ragnar Hafstað
On Mon, 2005-04-04 at 10:47 +, Ragnar Hafstað wrote:
> On Mon, 2005-04-04 at 12:16 +0200, Michael L. Hostbaek wrote:
> > [problem]
> [slightly broken solution]

I forgot a FROM clause, and you might want to add a
ORDER BY clause, if that is important:

select part,mfg,
   sum(CASE WHEN eta is NULL then qty ELSE 0 END) as qty,
   sum(CASE WHEN eta is NULL then 0 ELSE qty END) as "qty incoming",
   max(price) as "highest price",
   min(eta) as eta
from thetable
group by part,mfg
order by part,mfg;

gnari





Re: [SQL] 9.17.5. Row-wise Comparison

2005-04-07 Thread Ragnar Hafstað
On Thu, 2005-04-07 at 06:44 -0700, TJ O'Donnell wrote:
> it might break in future.
> 
>  >if (b > 1) then true
>  >else if (b = 1 and c > 2) then true
>  >else if (b = 1 and c = 2 and d > 3) then true
>  >else false
> Your spec sql snippet is like an OR, isn't it, instead
> of an AND as I'm relying on?

not really.

> After PG is to spec, will the behaviour I now see change?

yes

> > "TJ O'Donnell" <[EMAIL PROTECTED]> writes:
> > 
> >>I've been using syntax like
> >>select a from tbl where (b,c,d) > (1,2,3)
> >>   to mean
> >>select a from t where b>1 and b>2 and d>3

if b=2, c=1 and d=1 then the expression
  (b,c,d) > (1,2,3)
currently evaluates to false, but according to spec,
should evaluate to true.
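spelled out, the spec semantics of the row comparison are:

```sql
-- (b,c,d) > (1,2,3) per the standard is equivalent to:
SELECT a FROM tbl
WHERE b > 1
   OR (b = 1 AND c > 2)
   OR (b = 1 AND c = 2 AND d > 3);
```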

gnari





Re: [SQL] getting count for a specific querry

2005-04-08 Thread Ragnar Hafstað
On Fri, 2005-04-08 at 11:07 -0400, Andrew Sullivan wrote:
> On Fri, Apr 08, 2005 at 09:29:13AM -0400, Joel Fradkin wrote:
> > 
> > Is there a fast way to get the count? 
> 
> Not really, no.  You have to perform a count() to get it, which is
> possibly expensive.  One way to do it, though, is to do 
> 
>   SELECT count(*) FROM tablename WHERE condition LIMIT n;
> 
> or something like that.  Assuming the condition is reasonably limited
> (i.e. it's not going to cost you a fortune to run this), you'll get
> the right number back if the number is < n or else you'll get
> n.

come again ?

test=# select count(*) from a;
 count
---
 3
(1 row)

test=# select count(*) from a limit 2;
 count
---
 3
(1 row)

the LIMIT clause limits the number of rows returned by the select,
in this case 1 row.

maybe you mean something like:

test=# select count(*) from (select * from a limit 2) as foo;
 count
---
 2
(1 row)

gnari





Re: [SQL] getting count for a specific querry

2005-04-08 Thread Ragnar Hafstað
On Fri, 2005-04-08 at 09:29 -0400, Joel Fradkin wrote:

> Our app currently pulls a bunch of data to several query pages.
> 
> My idea is to use the limit and offset to return just the first 50
> records, if they hit next I can set the offset.
> 
> My understanding was this gets slower as you move further into the
> data, but we have several options to modify the search, and I do not
> believe our clients will page very far intro a dataset.

you might reduce the performance loss if your dataset is ordered by
a UNIQUE index.

select * from mytable where somecondition 
  ORDER by uniquecol limit 50;

and next:

select * from mytable where somecondition AND uniquecol>? 
  ORDER by uniquecol limit 50 OFFSET 50;

where the ? is placeholder for last value returned by last query.

if your unique index is a multi-column one, the method is slightly
more complicated, but the same idea.

gnari






Re: [SQL] getting count for a specific querry

2005-04-08 Thread Ragnar Hafstað
On Fri, 2005-04-08 at 12:32 -0400, Tom Lane wrote:
> Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[EMAIL PROTECTED]> writes:
> > you might reduce the performance loss if your dataset is ordered by
> > a UNIQUE index.
> 
> > select * from mytable where somecondition 
> >   ORDER by uniquecol limit 50;
> 
> > and next:
> 
> > select * from mytable where somecondition AND uniquecol>? 
> >   ORDER by uniquecol limit 50 OFFSET 50;
> 
> > where the ? is placeholder for last value returned by last query.
> 
> Uh, you don't want the OFFSET there do you? 

ooops! of course not. the uniquecol>? is meant to REPLACE
the OFFSET.
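so the corrected follow-up query is simply:

```sql
SELECT * FROM mytable
WHERE somecondition AND uniquecol > ?
ORDER BY uniquecol
LIMIT 50;
```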

gnari





Re: [SQL] subselect query time and loops problem

2005-04-10 Thread Ragnar Hafstað
On Sun, 2005-04-10 at 07:54 -0700, pankaj naug wrote:
> [quoting Tom] 
> >Evidently one has been analyzed much more recently than the other,
> because the estimated row counts are wildly different.
>  
> Both the explain/analyse queries has been run at the same time.

in that case, is the data the same?
if so, what about STATISTICS settings for relevant columns?

just to make things clear, have both databases have been
ANALYZEd or VACUUM ANALYZEd recently ? (in case your
'explain/analyse' only refers to an EXPLAIN ANALYZE)

gnari





Re: [SQL] can someone jelp me on this?

2005-05-01 Thread Ragnar Hafstað
On Tue, 2005-04-26 at 20:18 +, Lord Knight of the Black Rose wrote:
> hey guys I have a question that I couldnt maneged to solve for the last 4 
> days. Im kinda new to these stuff so dont have fun with me if it was so 
> easy. Ok now heres the question.
> 
> [snip class assignment]

we'd all love to help you to do your assignment.

how far have you got?
what is your main stumbling-block ?

show us what you have done so far, and point out
what the problems with it are

gnari





Re: [SQL] Trimming the cost of ORDER BY in a simple query

2005-05-03 Thread Ragnar Hafstað
On Mon, 2005-05-02 at 21:35 -0700, [EMAIL PROTECTED] wrote:

> Query (shows the last 7 dates):
> 
> => SELECT DISTINCT date_part('year',  uu.add_date),  date_part('month',
> uu.add_date),  date_part('day',   uu.add_date)  FROM user_url uu  WHERE
> uu.user_id=1 ORDER BY  date_part('year',  uu.add_date) DESC, 
> date_part('month', uu.add_date) DESC,  date_part('day',   uu.add_date)
> DESC  LIMIT 7;
> 

i assume add_date is a timestamp, because otherwise you could
ORDER BY add_date, and use an index on (user_id, add_date):
SELECT DISTINCT ... WHERE uu.user_id=1 
ORDER BY uu.user_id DESC, uu.add_date DESC LIMIT 7;

this will not work for a timestamp add_date

> QUERY PLAN:
...
>  Total runtime: 20.313 ms

20 ms does not sound like a big problem.

if 20 ms is really too much for you, and add_date IS a timestamp,
you might think about adding a date column, maintained
by triggers or by your application, with an index on (user_id, that column)

another possibility (total guess) is a functional index
create index func_id_date on user_url(user_id,(add_date::date));
SELECT ... ORDER BY uu.user_id DESC ,uu.add_date::date DESC LIMIT 7;
I have no idea if this will work, or if the planner will use such
an index.
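for what it's worth, a sketch of that functional-index idea; it should
work when add_date is a plain timestamp, since the timestamp-to-date
cast is immutable (the timestamptz-to-date cast is not, as it depends
on the timezone setting, so it cannot be indexed this way):

```sql
-- expression index on the date part of the timestamp
CREATE INDEX func_id_date ON user_url (user_id, ((add_date)::date));

SELECT DISTINCT (add_date)::date
  FROM user_url
 WHERE user_id = 1
 ORDER BY (add_date)::date DESC
 LIMIT 7;
```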

gnari



---(end of broadcast)---


Re: [SQL] Replacing a table with constraints

2005-05-13 Thread Ragnar Hafstað
On Fri, 2005-05-13 at 14:38 -0400, Mark Fenbers wrote:
> I have a table called Counties which partially contains a lot bad
> data.  By" bad data", I mean some records are missing; some exist and
> shouldn't; and some records have fields with erroneous information.
> However, the majority of the data in the table is accurate.  I have
> built/loaded a new table called newCounties with the same structure as
> Counties, but contains no bad data.  My plan was to completely replace the
> contents of Counties with the contents of newCounties.  The problem
> is: several other tables have Foreign Key constraints placed on
> Counties.  Therefore, Pg will not let me "DELETE FROM Counties;", nor
> will it let me "DROP TABLE Counties;"  
> 
> I'm perplexed.  Can someone suggest how I can best get data from
> Counties to look just like newCounties?

assuming same primary key on these 2 tables, you have to
consider these cases:

a) records in Counties not in newCounties
b) records in newCounties existing in Counties, but different
c) records in newCounties also correct in Counties
d) records in newCounties missing from Counties

a) get rid of extra records:
   delete from Counties 
  where primarykey not in 
 (select primarykey from newCounties);
   if you hit foreign key constraints, you need to look
   at your data a bit more anyway 
 
b) update erroneous records:
   update Counties set 
      col1 = newCounties.col1,
      col2 = newCounties.col2,
      col3 = newCounties.col3,
      ...
   from newCounties
   where Counties.primarykey = newCounties.primarykey
     and (Counties.col1 <> newCounties.col1
       OR Counties.col2 <> newCounties.col2
       OR Counties.col3 <> newCounties.col3
       ... );
   (note: <> misses changes to or from NULL;
    use IS DISTINCT FROM if NULLs can occur)

c) nothing to do here

d) insert into Counties 
   select * from newCounties
   where primarykey not in
  (select primarykey from Counties);



test case:

test=# create table c (p int, col1 int, col2 int);
CREATE TABLE
test=# insert into c values (1,1,1); -- correct values
INSERT 7693959 1
test=# insert into c values (2,1,2); -- incorrect
INSERT 7693960 1
test=# insert into c values (9,9,9); -- extra value
INSERT 7693961 1
test=# create table newc (p int, col1 int, col2 int);
CREATE TABLE
test=# insert into newc values (1,1,1); -- correct values
INSERT 7693964 1
test=# insert into newc values (2,2,2); -- incorrect in c
INSERT 7693965 1
test=# insert into newc values (3,3,3); -- missing in c
INSERT 7693966 1
test=# delete from c where p not in (select p from newc);
DELETE 1
test=# update c set col1=n.col1, col2=n.col2
test-#   from newc as n
test-#   where c.p=n.p and (c.col1<>n.col1 or c.col2<>n.col2);
UPDATE 1
test=# insert into c select * from newc where p not in (select p from
c);
INSERT 7693967 1
test=# select * from c;
 p | col1 | col2
---+--+--
 1 |1 |1
 2 |2 |2
 3 |3 |3
(3 rows)


note: 
b) and c) can be merged and simplified if you don't mind updating
unchanged records:
   update Counties set 
      col1 = newCounties.col1,
      col2 = newCounties.col2,
      col3 = newCounties.col3,
      ...
   from newCounties
   where Counties.primarykey = newCounties.primarykey;



gnari



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] ORDER BY handling mixed integer and varchar values

2005-05-16 Thread Ragnar Hafstað
On Mon, 2005-05-16 at 11:47 -0700, David B wrote:

(sorting text columns numerically)
 
> And of course I get stuff ordered as I want it.
> BUT… with many product categories being numeric based they come out in
> wrong order   '10 comes before 2" etc.
> 
> So I tried
> Select product_desc, product_price, product_cat
> Order by cast(product_cat as integer), product_price
> 
> And that worked for the numeric based categories.
> 
> I don't know of a query will be across alpha or numeric categories.
> Is there any elegent query you folks can think of that combines the
> two so I can one query that has alpha sorting on alpha categories and
> numeric sorting on numeric values that are in the same column??

select product_desc, product_price, product_cat
  from products  -- table name assumed
  order by (case when product_cat ~ '^[0-9]+$'
                 then cast(product_cat as integer) end),
           product_cat, product_price;

(the CASE guard matters: a plain cast raises an error as soon as it
hits an alphabetic category; with the guard, numeric categories sort
first by value and the rest fall back to plain text order)

gnari




---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] Does Postgresql have a similar pseudo-column "ROWNUM" as

2005-05-17 Thread Ragnar Hafstað
On Tue, 2005-05-17 at 14:48 -0300, Alain wrote:
> 
> Andrew Sullivan escreveu:
> > On Thu, May 12, 2005 at 01:07:00PM -0600, [EMAIL PROTECTED] wrote:
> > 
> >>Does Postgresql have a similar pseudo-column "ROWNUM" as Oracle? If
> >>so, we can write the following query:
> > 
> > 
> > No.  What is the purpose of your query?  You could use ORDER BY and
> > LIMIT..OFFSET to do what you want. I think.
> 
> The problem is probably speed. I have done a lot of tests, and when 
> OFFSET gets to a few thousands on a multimega-recs database, it gets 
> very very slow... 

is there not a similar loss of speed using ROWNUM on oracle?

> ... Is there any other to work around that?

if you are ordering by a unique key, you can use the key value
in a WHERE clause.

select ... where ukey>? order by ukey limit 100;

(the ? is placeholder for the last value of ukey returned
from previous select)

gnari
 


---(end of broadcast)---


Re: [SQL] Does Postgresql have a similar pseudo-column "ROWNUM" as

2005-05-17 Thread Ragnar Hafstað
On Tue, 2005-05-17 at 15:43 -0300, Alain wrote:
> [how to solve the get next 100 records problem]

I am assuming this is for a web-like interface, in other words that
cursors are not applicable
 
> > [me] 
> > if you are ordering by a unique key, you can use the key value
> > in a WHERE clause.
> > 
> > select ... where ukey>? order by ukey limit 100 offset 100;
> > 
> > (the ? is placeholder for the last value of ukey returned
> > from previous select)
> 
> I tried that. It does not work in the generic case: 6 MegaRec, telephone 
> listing, alphabetical order. The problem is that somewhere there is a 
> single user with too many entries (over 1000). I even tried to filter 
> the repetitions, but somewhere I get stuck if one guy has too mny 
> entries (one for each phone number).
> 
> I tried using both the name and the primary key (with a combined index), 
> to get faster to the record I want, but I was not successful in building 
> a where clause.

lets say pkey is your primary key and skey is your sort key, and 
there exists an index on (skey,pkey)

your first select is
select ... from tab ORDER by skey,pkey LIMIT 100;

your subsequent selects are
select ... from tab WHERE skey>skey_last 
   OR (skey=skey_last AND pkey>pkey_last) 
ORDER BY skey,pkey
LIMIT 100 OFFSET 100;


> I would appreciate any help, in fact this is my primary reason for 
> joining this list ;-)

gnari



---(end of broadcast)---


Re: [SQL] Does Postgresql have a similar pseudo-column "ROWNUM" as

2005-05-17 Thread Ragnar Hafstað
On Wed, 2005-05-18 at 00:13 +0200, PFC wrote:
> > your subsequent selects are
> > select ... from tab WHERE skey>skey_last
> >OR (skey=skey_last AND pkey>pkey_last)
> > ORDER BY skey,pkey
> > LIMIT 100 OFFSET 100;
> 
>   why offset ?
>   you should be able to use the skey, pkey values of the last row on the  
> page to show the next page, no need for offset then.

of course you are right. the WHERE clause is supposed to replace
the OFFSET. too much cut and pasting without thinking and testing.
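for the record, the corrected form of that query, with the WHERE
clause doing the skipping:

```sql
-- (skey_last, pkey_last) are taken from the last row of the previous page
SELECT ... FROM tab
 WHERE skey > skey_last
    OR (skey = skey_last AND pkey > pkey_last)
 ORDER BY skey, pkey
 LIMIT 100;
```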

gnari



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] Does Postgresql have a similar pseudo-column "ROWNUM" as

2005-05-18 Thread Ragnar Hafstað
On Tue, 2005-05-17 at 23:16 -0300, Alain wrote:
> 
> Ragnar Hafstað escreveu:
> >>[how to solve the get next 100 records problem]

> BUT, I think that this is close to a final solution, I made some 
> preliminary test ok. Please tell me what you think about this.
> 
> First let's state that I am reading records to put on a screen (in a 
> Table/Grid). I separated the problem in *3* parts
> 
> -first select is as above:
> select ... from tab ORDER by skey,pkey LIMIT 100;
> 
> -second method for next 100:
> select ... from tab WHERE skey>=skey_last
>  ORDER BY skey,pkey
>  LIMIT 100;
> but here I test for repetitions using pkey and discard them
> 
> -now if I get all repetitions or the last 100 have the same skey with 
> the second method, I use
> select ... from tab WHERE skey=skey_last AND pkey>pkey_last
>  ORDER BY skey,pkey
>  LIMIT 100;
> until I get an empty response, then I go back to the second method.

if your distribution is such that those skeys that have > 100 records
tend to have a lot more, you might have a higher limit for this case.


> All queries are extremely fast with 600 records and it looks like 
> the few redundant or empty queries (but very fast) will not be a problem.
> 
> What is your opinion about this (apart that it is a bit complex :) ??


looks fine

gnari



---(end of broadcast)---


Re: [SQL] Changed to: how to solve the get next 100 records problem

2005-05-18 Thread Ragnar Hafstað
On Wed, 2005-05-18 at 12:42 -0300, Alain wrote:
> I found something that is both fast and simple (program side):
> ...
> subsequent selects are
> (select ... from tab WHERE skey=skey_last AND pkey>pkey_last
>   ORDER BY skey,pkey LIMIT 100)
> UNION
> (select ... from tab WHERE skey>skey_last
>   ORDER BY skey,pkey LIMIT 100)
> ORDER BY skey,pkey LIMIT 100;
> ...
> The only strange thing is that without the 3rd order by, the order is 
> wrong. I didn't expect it because each select is created ordered. Is it 
> expected that UNION mixes it all up? (using postgre 7.4.1)

ORDER BY on subselects is not guaranteed by SQL to have any effect,
but Postgres tends to do what you want when possible and not 
detrimental to performance.
In this case, Postgres would probably have kept the order had
you used UNION ALL.

a plain UNION implies DISTINCT, which Postgres is free to
implement any way it wants, possibly destroying the order

in this case a UNION ALL is appropriate, as you know that
the 2 selects do not overlap. possibly, a future version
of the planner will be able to detect this.

in any case, the final ORDER BY ... LIMIT does not cost much; it
protects you against implementation changes, and limits
the result to 100 records, which might be what you want.
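with UNION ALL, and keeping the outer ORDER BY/LIMIT as a safeguard,
the query would read:

```sql
(SELECT ... FROM tab
  WHERE skey = skey_last AND pkey > pkey_last
  ORDER BY skey, pkey LIMIT 100)
UNION ALL   -- the two branches cannot overlap, so no DISTINCT pass
(SELECT ... FROM tab
  WHERE skey > skey_last
  ORDER BY skey, pkey LIMIT 100)
ORDER BY skey, pkey LIMIT 100;
```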

> Please comment on this. I tested it and it worked but I'm really new to sql 
> and I feel insecure...

it's good.

gnari



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [SQL] left joins

2005-07-06 Thread Ragnar Hafstað
On Wed, 2005-07-06 at 11:33 +0100, Nick Stone wrote:
> I've had exactly the same problem - try changing the query to.
> 
> select count(*)
>  from  h left join p using (r,pos) and p.r_order=1
> where h.tn > 20
> and h.tn < 30

really? is this legal SQL?
is this an 8.0 feature?
I get syntax error at or near "and" at character 41

gnari




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [SQL] ids from grouped rows

2005-07-20 Thread Ragnar Hafstað
On Wed, 2005-07-20 at 08:46 -0400, Lindsay wrote:

> SELECT name, MAX(age), id_for_row_with_max_age
> FROM Person
> GROUP BY name

how about:
  select distinct on (name) name, age, id 
 from person
 order by name, age desc;
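a minimal illustration with made-up data, to show which row
DISTINCT ON keeps:

```sql
CREATE TABLE person (id int, name text, age int);
INSERT INTO person VALUES (1, 'ann', 30);
INSERT INTO person VALUES (2, 'ann', 40);
INSERT INTO person VALUES (3, 'bob', 25);

-- DISTINCT ON (name) keeps the first row per name in ORDER BY order,
-- so "age desc" makes that each person's oldest row, id included
SELECT DISTINCT ON (name) name, age, id
  FROM person
 ORDER BY name, age DESC;
--  name | age | id
--  ann  |  40 |  2
--  bob  |  25 |  3
```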



gnari

 



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] how to use column name with Case-sensitive with out usig

2005-08-06 Thread Ragnar Hafstað
On Sat, 2005-08-06 at 05:00 +, wisan watcharinporn wrote:
> how can i use
> 
> create table myName(
>myColumnName varchar(32)
> );
> 
> select myColumnName from myColumnName ;

Assuming you meant 'from myName' here,
this should work.

On the other hand, this will NOT work:

  create table "myName"(
 myColumnName varchar(32)
  );
  select myColumnName from myName ;


if you mean that you want something like:
  create table foo (
myColumnName varchar(32),
MYcOLUMNnAME varchar(32)
  );
without using quotes, then that is not
possible, per SQL specs.
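to spell out the folding rules with a small sketch (second table name
made up for the example):

```sql
-- unquoted identifiers are folded to lower case, so these all refer
-- to the same table and column:
CREATE TABLE myName (myColumnName varchar(32));
SELECT myColumnName FROM myName;
SELECT mycolumnname FROM myname;
SELECT MYCOLUMNNAME FROM MYNAME;

-- a quoted identifier keeps its case and must be quoted ever after:
CREATE TABLE "myName2" (myColumnName varchar(32));
SELECT myColumnName FROM "myName2";   -- works
-- SELECT myColumnName FROM myName2;  -- fails: it looks for "myname2"
```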

gnari



---(end of broadcast)---


Re: [SQL] insert only if conditions are met?

2005-08-31 Thread Ragnar Hafstað
On Wed, 2005-08-31 at 11:49 -0400, Henry Ortega wrote:
> What I am trying to do is
> * Insert a record for EMPLOYEE A to TABLE A 
> IF
> the sum of the hours worked by EMPLOYEE A on TABLE A
> is not equal to N
> 
> Is this possible?

Sure, given a suitable schema

It is not clear to me, if the hours worked are
to be found in the same table you want to insert
into, or not.

gnari




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [SQL] insert only if conditions are met?

2005-08-31 Thread Ragnar Hafstað
On Wed, 2005-08-31 at 12:49 -0400, Henry Ortega wrote:
> Ok. Here's TABLE A
> 
> emp      date         hours  type
> JSMITH   08-15-2005   5      WORK
> JSMITH   08-15-2005   3      WORK
> JSMITH   08-25-2005   6      WORK
> 
> I want to insert the ff:
> 1.) JSMITH   08-15-2005   8   VAC
> 2.) DOE      08-16-2005   8   VAC
> 
> #1 should fail because there is already 8 hours entered as being
> Worked on 08-15-2005 (same date).

sorry, did not notice the duplicates before my previous reply.

you could do something like
insert into A select 'JSMITH','08-15-2005',8,'VAC'
where 
   8 != (select coalesce(sum(hours), 0) FROM A 
          WHERE emp = 'JSMITH' 
            AND date = '08-15-2005');

(the coalesce guards the no-rows case, where sum() returns NULL)
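the same pattern covers the DOE row, which has no prior hours at all;
a coalesce() is needed there because sum() over zero rows yields NULL,
and 8 != NULL is unknown rather than true, so nothing would be
inserted:

```sql
insert into A
select 'DOE', '08-16-2005', 8, 'VAC'
 where 8 != (select coalesce(sum(hours), 0)
               from A
              where emp = 'DOE'
                and date = '08-16-2005');
```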

gnari



---(end of broadcast)---


[SQL] case insensitive regex clause with some latin1 characters fails

2006-09-11 Thread Ragnar Österlund

Hi,

I'm not sure if this is a bug or if I'm doing something wrong. I have
a database encoded with ISO-8859-1, aka LATIN1. When I do something
like:

SELECT 'Ä' ~* 'ä';

it returns false. If I do:

SELECT 'A' ~* 'a';

I get true. According to specification, both should return true.
Anyone knows what the problem might be?
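one thing worth checking (an assumption, not a confirmed diagnosis for
this report): case folding in ~* follows the database locale, so under
lc_ctype = 'C' only ASCII letters are folded, which would produce
exactly this behaviour:

```sql
SHOW lc_ctype;        -- with 'C', only ASCII letters are case-folded
SELECT 'A' ~* 'a';    -- true either way
SELECT 'Ä' ~* 'ä';    -- false under the C locale, true under a latin1 one
```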

/Ragnar

---(end of broadcast)---


Re: [SQL] wired behaviour

2008-11-28 Thread Ragnar Hafstað

On fös, 2008-11-28 at 15:22 +0100, Lutz Steinborn wrote:
> Hello Paul,
> 
> thanks for the quick answer.
> 
> > NULL values?
> Jepp, thats it.
> I've supposed this but can't believe it. So NULL is something out of this
> dimension :-)

Yes, that is one way of putting it.

A more useful way to look at it is to say that NULL represents an
UNKNOWN value.

rows with NULL bar value will not be counted by:
  
> > select count(*) from foo where bar = 'Mr Spock';

because you cannot know whether the UNKNOWN value is 'Mr Spock'.

nor will these rows be counted by:
> > select count(*) from foo where NOT bar = 'Mr Spock';

because you also cannot know that the UNKNOWN value is NOT 'Mr Spock'.

Does that make it any clearer?
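a tiny demonstration of that three-valued logic, with made-up rows:

```sql
CREATE TABLE foo (bar text);
INSERT INTO foo VALUES ('Mr Spock');
INSERT INTO foo VALUES ('Kirk');
INSERT INTO foo VALUES (NULL);

SELECT count(*) FROM foo WHERE bar = 'Mr Spock';       -- 1
SELECT count(*) FROM foo WHERE NOT bar = 'Mr Spock';   -- 1, only 'Kirk'
-- IS DISTINCT FROM treats NULL as a comparable value:
SELECT count(*) FROM foo
 WHERE bar IS DISTINCT FROM 'Mr Spock';                -- 2
```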

gnari



-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql