Re: [SQL] Controlling Results with Limit

2001-02-23 Thread Bryan White



> Hi,
>  I was reading through Bruce's on line . I found the following bit unclear...
>
> "Notice that each query uses ORDER BY . Although this clause is not
> required, LIMIT without ORDER BY returns random rows from the query,
> which would be useless. "
>
> When I run a query several times I get the same results as given
...
>  I just want to know what exactly --"LIMIT without ORDER BY returns
> random rows from the query"-- means

I don't think it is actually random.  It's just that the order is not
defined, and other events may change it.  I believe that without an ORDER BY
or other clauses that cause an index to be used, the database tends to
return rows in the order they are stored on disk.  That order tends to be
the order in which rows were added.  My observation is that this ordering is
fairly stable and seems to survive a database reload.  Just don't rely on
it.  There is a CLUSTER command to change the physical ordering.
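To make the point concrete, pinning the order down with an explicit ORDER BY
makes LIMIT deterministic (the table and column names here are hypothetical):

```sql
-- Without ORDER BY, which 10 rows come back is unspecified:
SELECT * FROM orders LIMIT 10;

-- With ORDER BY on a unique column, the same 10 rows come back
-- every time, regardless of physical row order on disk:
SELECT * FROM orders ORDER BY orderid LIMIT 10;
```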




[SQL] ORDER BY is case insensitive

2010-06-22 Thread Bryan White
I was surprised to find out that ORDER BY is case insensitive.  Is
there a way to do a case sensitive ORDER BY clause?

This transcript demonstrates what I am seeing:

$ createdb bryan
$ psql bryan
psql (8.4.4)
Type "help" for help.

bryan=# create table t (f text);
CREATE TABLE
bryan=# insert into t (f) values ('a');
INSERT 0 1
bryan=# insert into t (f) values ('b');
INSERT 0 1
bryan=# insert into t (f) values ('c');
INSERT 0 1
bryan=# insert into t (f) values ('B');
INSERT 0 1
bryan=# select * from t order by f;
 f
---
 a
 b
 B
 c
(4 rows)

bryan=# \q
$ psql -l
                               List of databases
   Name    |  Owner   | Encoding |  Collation  |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 bryan     | bryan    | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres
           |          |          |             |             | postgres=CTc/postgres
(4 rows)



-- 
Bryan White

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] ORDER BY is case insensitive

2010-06-24 Thread Bryan White
> regression=# SELECT * from test order by ascii(data);

That is what I was looking for.  Thanks to all.
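For the archives: `ascii()` only compares the first character. On PostgreSQL
9.1 and later, a per-expression COLLATE clause gives a full byte-order
(case sensitive) sort; this was not available on the 8.4 server in the
transcript above, where the usual workarounds were initdb with the C locale
or sorting on a converted value:

```sql
-- 9.1+: byte-order sort, so 'B' sorts before 'a', 'b', 'c'
SELECT * FROM t ORDER BY f COLLATE "C";
```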


-- 
Bryan White



[SQL] Client Logging

2000-06-09 Thread Bryan White

I sometimes need to identify which client application is causing
messages that are appearing in the backend log file.  To do this I want the
client to identify itself in the log file.

I can see 3 ways this could be implemented:
1) An identification string passed when a connection is established.  This
string would either be logged immediately or included in the log entry for
any subsequent messages from that connection.
2) A SQL statement that prints to the log file.
3) A function that takes a string argument and calls elog.

#1 would be ideal but requires digging into the source code.  #2 seems
wrongheaded.  I can implement #3 locally myself.

Before I go any further, has something like this already been invented?
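Option #3 can be sketched in PL/pgSQL without touching the backend source:
RAISE LOG writes the string to the server log.  The function name is made up
here, and the dollar-quoting syntax shown is from later releases than the
7.0 server this thread discusses:

```sql
-- Hypothetical helper: a client calls this once after connecting
-- so its identity appears in the backend log.
CREATE FUNCTION log_client_id(ident text) RETURNS void AS $$
BEGIN
    RAISE LOG 'client identified itself as: %', ident;
END;
$$ LANGUAGE plpgsql;

-- Usage:
SELECT log_client_id('billing-batch-job');
```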




[SQL] A subselect in an aggregate

2000-06-26 Thread Bryan White

This statement works:
select date, (select sum(qty * price) from orderdetail d
    where d.orderid = orders.orderid) from orders

But when I try to do something like this:

select date, sum(select sum(qty * price) from orderdetail d
    where d.orderid = orders.orderid) from orders group by date

I get ERROR: parser: parse error at or near "select"

Is there a way to apply an aggregate function to a subselect like this?





Re: [SQL] A subselect in an aggregate

2000-06-26 Thread Bryan White



> Bryan White wrote:
> >
> > This statement works:
> > select date, (select sum(qty * price) from orderdetail d
> >     where d.orderid = orders.orderid) from orders
> >
> > But when I try to do something like this:
> >
> > select date, sum(select sum(qty * price) from orderdetail d
> >     where d.orderid = orders.orderid) from orders group by date
> >
> > I get ERROR: parser: parse error at or near "select"
> >
> > Is there a way to apply an aggregate function to a subselect like this?
>
> Avoiding the question, I'm wondering if this simpler form wouldn't be
> what you're after?
>
> select o.date, sum(d.qty * d.price)
> from orderdetail d, orders o
> where d.orderid = o.orderid
> group by o.date

This is very slow (actually I killed it after about 5 minutes):
select o.date,sum(d.qty * d.price) from orderdetail d,orders o
    where o.date = '6/1/2000' group by o.date;
Explained:
Aggregate  (cost=0.00..41074641.24 rows=120300337 width=20)
  ->  Group  (cost=0.00..38067132.80 rows=1203003374 width=20)
->  Nested Loop  (cost=0.00..35059624.37 rows=1203003374 width=20)
  ->  Index Scan using iorddate3 on orders o
(cost=0.00..6917.34 rows=1808 width=4)
  ->  Seq Scan on orderdetail d  (cost=0.00..12733.78
rows=665378 width=16)

Doing a sequential scan of orderdetail inside a loop seems to be what kills
it.

This is quick (it takes a couple of seconds):
select o.date,(select sum(od.qty * od.price) from orderdetail od
    where od.orderid = o.orderid) from orders o where o.date = '6/1/2000';
Explained:
Index Scan using iorddate3 on orders o  (cost=0.00..6917.34 rows=1808
width=8)
  SubPlan
->  Aggregate  (cost=45.24..45.24 rows=1 width=16)
  ->  Index Scan using iodid on orderdetail od  (cost=0.00..45.21
rows=11 width=16)

I suppose I can do my own grouping on the result data but I would like to
avoid that if I could.  Which brings me back to the original question:  Is
there a way to apply an aggregate function to a subselect?
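One workaround, not from the original thread and only available on versions
that accept a subselect in FROM, is to compute the per-order subtotal in a
derived table and aggregate over that, which keeps the subtotal logic while
letting the outer query group normally:

```sql
-- Sketch: aggregate over a derived table instead of wrapping
-- a subselect directly in sum().
select o.date, sum(sub.total)
from orders o,
     (select orderid, sum(qty * price) as total
      from orderdetail
      group by orderid) sub
where sub.orderid = o.orderid
  and o.date = '6/1/2000'
group by o.date;
```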






Re: [SQL] Backup?

2000-06-27 Thread Bryan White

> Hello,
>
> I had a look for 'backup' on the mailing list archives and strangely
> enough didn't find a message containing the word in admin, general
> and sql mailing lists... I am sure this must have been asked before,
> but still:
>
> Would it be possible to backup a db by just copying the dir struct
> under the PG_DATA or must I absolutely use pg_dump? I am
> using Postgres 7.0. The purpose of the backup is of course to have
> a copy handy if eg the harddrive breaks. As I understand pg_dump
> could be comparatively slower, plus it produces a text format
> dumpfile, which could be kind of large in my db in future.

The pg_dump output will in general be smaller.  There is a fair amount of
overhead in fitting rows into page boundaries, and the indexes are not
backed up.  Here are some data points from my system.  I have a data
directory that is 2.4GB.  A pg_dump from that is 800MB (about 400MB
compressed).  My nightly dump currently takes about 10 minutes, during which
the database is not down.  A reload takes 100 minutes including a vacuum
analyze.  I also sleep better knowing the data is in a readable text format
that I could work with if I had to.
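A nightly dump along those lines can be as simple as the following (the
database name and backup paths are placeholders, not anything from this
thread):

```shell
# Dump while the database stays up, compressing on the fly:
pg_dump mydb | gzip > /backup/mydb-$(date +%Y%m%d).sql.gz

# Restore into a freshly created database, then re-analyze:
gunzip -c /backup/mydb-20000627.sql.gz | psql mydb
psql mydb -c 'VACUUM ANALYZE'
```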