[SQL] can you change an FK constraint from NOT DEFERRABLE to DEFERRABLE

2005-03-30 Thread frank
Hello. Is it possible to change an FK constraint from NOT DEFERRABLE
(the default) to DEFERRABLE without dropping and re-creating it? One idea
that came up was to create a parallel set of constraints which perform
the same checks as the existing ones as DEFERRABLE (and then drop the
old set), but the objection there was that it'd lock the tables during
the initial check.

We're having a fairly serious deadlock issue and the thinking goes that
Tom's suggestion here

http://www.webservertalk.com/archive139-2004-8-364172.html

to defer FK checks until transaction commit might help. Right now
we can't try this because all the FK constraints were created with the default
settings. We'd like to avoid taking the database down to recreate the
foreign keys.

Regards,

Frank
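
A minimal sketch of the drop-and-re-add route, with made-up table, column and
constraint names; at the time of this thread there is no ALTER TABLE ... ALTER
CONSTRAINT, and the ADD CONSTRAINT re-checks every existing row, which is exactly
the locking objection raised above:

BEGIN;
ALTER TABLE orders DROP CONSTRAINT orders_customer_fk;
ALTER TABLE orders ADD CONSTRAINT orders_customer_fk
    FOREIGN KEY (customer_id) REFERENCES customer (id)
    DEFERRABLE INITIALLY IMMEDIATE;
COMMIT;

Once the constraint is DEFERRABLE, each transaction that wants late checking runs:

BEGIN;
SET CONSTRAINTS ALL DEFERRED;
-- application updates go here
COMMIT;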



Re: [SQL] can you change an FK constraint from NOT DEFERRABLE to DEFERRABLE

2005-03-30 Thread frank
On Wed, Mar 30, 2005 at 11:48:31AM +0200, PFC wrote:
> 
>   Checks the docs on ALTER TABLE ... ALTER CONSTRAINT ...

ALTER CONSTRAINT? I did check for that, and it does not appear to
exist?! That's why I asked ...

Rgds, Frank



[SQL] making two columns out of one

2000-09-11 Thread Frank Bax

I've got a table containing some timesheet data.
Fields are emp, earncode, lo_shift, hi_shift.

SELECT emp, sum(hi_shift - lo_shift) as reghrs from timesheet 
where earncode in ('R', 'C', 'X') order by emp

will give me all the regular hours

SELECT emp, sum(hi_shift - lo_shift) as ovrhrs from timesheet 
where earncode not in ('R', 'C', 'X') order by emp

will give me all the overtime hours.

How do I combine these to get one result set with emp, reghrs, ovrhrs on
each row.

Frank








Re: [SQL] making two columns out of one

2000-09-12 Thread Frank Bax

It took quite a bit of trial and error, but I finally got it!

SELECT  emp, 
  SUM (CASE WHEN  earncode IN ('R','C','X')  THEN  (hi_shift-lo_shift)
ELSE  0  END)  AS  reghrs,
  SUM (CASE WHEN  earncode IN ('R','C','X')  THEN  0  ELSE
(hi_shift-lo_shift)  END)  AS  ovrhrs 
  FROM  timesheet
  GROUP BY  emp
  ORDER BY  emp;


At 09:34 AM 9/11/00 -0400, Frank Bax wrote:
>I've got a table containing some timesheet data.
>Fields are emp, earncode, lo_shift, hi_shift.
>
>SELECT emp, sum(hi_shift - lo_shift) as reghrs from timesheet 
>where earncode in ('R', 'C', 'X') order by emp
>
>will give me all the regular hours
>
>SELECT emp, sum(hi_shift - lo_shift) as ovrhrs from timesheet 
>where earncode not in ('R', 'C', 'X') order by emp
>
>will give me all the overtime hours.
>
>How do I combine these to get one result set with emp, reghrs, ovrhrs on
>each row.
>
>Frank



Re: [Fwd: Re: [SQL] no ORDER BY in subselects?]

2000-09-21 Thread Frank Bax

At 11:29 AM 9/21/00 +1000, you wrote:
>The main reason I use them is to find the 'next' or 'previous' record in a
>list (eg. next date, next ID). eg.
>
>  select , (select ID from table where id > this.id 
>  order by id asc limit 1) as next_id ...

Doesn't this give the same result (without order by):

>  select , (select min(ID) from table where id > this.id) as
next_id

Frank
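
A minimal sketch of the min() form on a hypothetical table items(id):

SELECT t.id,
       (SELECT min(t2.id) FROM items t2 WHERE t2.id > t.id) AS next_id
  FROM items t;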




Re: [SQL] COUNT

2000-10-19 Thread Frank Bax


Select count(*) from Table

At 04:58 AM 10/20/00 +, Craig May wrote:
>Hi,
>
>How do I get a row count, like "Select [COUNT] from Table" ??
>
>Regards,
>Craig May
>
>Enth Dimension
>http://www.enthdimension.com.au
>
>



[SQL] How to represent a tree-structure in a relational database

2000-12-13 Thread Frank Joerdens

I am just thinking about the data model for a little content management system that I am
currently planning. Individual articles are sorted under different categories which branch
into subcategories, sub-subcategories etc. up to a depth of about 6 or 7 levels. The
structure should be extensible, i.e. it must be possible to add levels. What I am thinking
now is that you would keep the index in a separate index table (linked with the primary
key in the articles table), which would have 6 or 7 fields initially, and that you'd add
columns with the alter table command, if need be, to make the structure deeper. Is this
the recommended way to go about it? It feels pretty 'right' to me now but since the
problem should be fairly common, there must be other people who have thought and written
about it and there might even be a recognized 'optimal' solution to the problem.

Comments?

- Frank



Re: [SQL] How to represent a tree-structure in a relational database

2000-12-13 Thread Frank Joerdens

On Wed, Dec 13, 2000 at 11:38:18AM -0800, Stuart Statman wrote:
[ . . . ]
> I would suggest, instead, to create a table that represents your hierarchy
> without adding columns. For example :
> 
> create table Category (
> CategoryID   int4  not null  primary key,
> ParentCategoryID int4  not null  REFERENCES Category (CategoryID),
> CategoryName varchar(100)
> );
> 
> Add a CategoryID with an FK reference to this table, and your work is done.
> 
> Then adding, inserting, removing, or moving layers in the hierarchy becomes
> quite simple. This also preserves hierarchical integrity, where subcategory
> a of subcategory b will also remain a subcategory of category c if
> subcategory b is a subcategory of subcategory c, where I'm not sure your
> model will preserve or guarantee that. (Does that sentence deserve a prize?)

Cool. That looks like my solution. I had actually seen it someplace
before, but didn't make the connection with my problem. 

Ta, Frank



Re: [SQL] How to represent a tree-structure in a relational database

2000-12-13 Thread Frank Joerdens

On Wed, Dec 13, 2000 at 11:04:13AM -0800, Josh Berkus wrote:
> Frank,
> 
>   Please look in the list archives.  About 2 months ago this topic came
> up and was discussed extensively (including a creative solution by yours
> truly).

Hm, neither my archives nor a search on the postgresql.org page turned
up the thread you mention. Do you recall which list it was and what the
title of the thread was?

Thanks, Frank



[SQL] Subqueries in Non-SELECT Queries

2000-12-17 Thread Frank Joerdens

I am trying to do something that looks to me exactly like what's described in chapter 8.4
in the Postgresql book:

test=> UPDATE salesorder
test-> SET ship_date = '11/16/96'
test-> WHERE customer_id = (
test(> SELECT customer_id
test(> FROM customer
test(> WHERE name = 'Fleer Gearworks, Inc.'
test(> );
UPDATE 1

Here only one row is updated. This should work with more than one, right? I get an error
which suggests that it doesn't:

mpi=# update index set level = 2 where parentid = ( select id from index where level = 1 );
ERROR:  More than one tuple returned by a subselect used as an expression.
mpi=#

This is a recursive query, on a single table. Is that not possible?

I am using 7.1.

- Frank



Re: [SQL] Subqueries in Non-SELECT Queries

2000-12-18 Thread Frank Joerdens

Christof Glaser wrote:
[ . . . ]
> since the where clause compares just one field for equality. You might
> try IN instead of =, ie
> 
> update index set level = 2 where parentid in ( select id from
> index where level = 1 );

That was it! Thanks, Frank



[SQL] Tree structure table normalization problem (do I need a trigger?)

2000-12-18 Thread Frank Joerdens

In a recent thread (How to represent a tree-structure in a relational
database) I asked how to do a tree structure in SQL, and got lots of
suggestions (thanks!), of which I chose the one below:

create table Category (
CategoryID   int4  not null  primary key,
ParentCategoryID int4  not null  REFERENCES Category (CategoryID),
CategoryName varchar(100)
);

The one described in Joe Celko's article (the one with the worm that
travels along the edge of the tree . . . ) seemed more evolved but
requires fairly complex SQL stuff, I thought, for simple operations that
are straighforward with the above model. However, I have a problem now
which seems non-trivial: I am at some point in the tree, say 3 nodes
down from the root, but I don't know where I am exactly (across which
nodes would I travel along the shortest path to the top?) and would like
to find out. This is, again, not really difficult if I know how deep
into the tree I am, in which case I can simply do (I know that I am 3
nodes from the root and that my current node number is x):

SELECT A1.CategoryID, A2.CategoryID, A3.CategoryID FROM Category AS A1,
Category AS A2, Category AS A3 WHERE A3.CategoryID=x AND
A3.ParentCategoryID=A2.CategoryID AND A2.ParentCategoryID=A1.CategoryID;

(This is probably very expensive if the tree gets really deep, but I
don't expect that to happen in my database anytime soon.)

So I introduced a 'level' column into the above schema, where I store
the information about how deep I am into the tree to have it readily
available when I want to compute the path to the top. Unfortunately,
this is dangerous because the information is already implicit in the
original schema. The problem is that the only way I can see that you
would get at the information is by walking up the tree step-by-step
until you get a zero value (which is assigned to the root). This means
you need a loop control structure which means you have to write a
PL/pgSQL procedure (or some other procedure) that is run by a trigger to
update the level column on insert or update, as in

WHILE (CategoryID is not null) LOOP
Run SQL statement to get the next higher-up node's CategoryID and
increment a counter.
END LOOP;
Return counter and insert value into level column.

This seems to feasible but not really as straightforward as one might
hope. Is there an easier way?

- Frank
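
A rough PL/pgSQL sketch of the loop described above, using the Category schema from
this thread and assuming the root stores 0 as its parent (as mentioned); it could be
called from the insert/update trigger or used directly in an UPDATE:

CREATE FUNCTION category_depth(int4) RETURNS int4 AS '
DECLARE
    cur   int4;
    par   int4;
    depth int4;
BEGIN
    cur := $1;
    depth := 0;
    LOOP
        SELECT ParentCategoryID INTO par FROM Category WHERE CategoryID = cur;
        -- stop at the root (parent 0) or when the node is missing
        EXIT WHEN par IS NULL OR par = 0;
        depth := depth + 1;
        cur := par;
    END LOOP;
    RETURN depth;
END;
' LANGUAGE 'plpgsql';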



Re: [SQL] Tree structure table normalization problem (do I need atrigger?)

2000-12-19 Thread Frank Joerdens

Tulassay Zsolt wrote:
[ . . . ]
> The SQL stuff of that nested set structure is fairly easy, I wrote some
> quick'n'dirty plpgsql functions that will do inserts, updates, deletes
> from the tree, display level number etc.

What scared me about it in particular was one scenario where you try to delete a subtree.
This would normally leave a gap, since the structure is based on the worm's ability to get
from one node to the next with an increment of just 1. Once you had a subtree deleted,
you'd either have to have the worm leap-frog (a leaping frog-worm?!! :)) the gap or
update an entire half of the tree to close it . . . then my brain started to hurt and I
gave up.

> I can send it to you if you like (please allow a few days since I
> have several exams at the university this week).

Sure, I'd like to have a look at it!

Thanks, Frank



Re: [SQL] Tree structure table normalization problem (do I need atrigger?)

2000-12-19 Thread Frank Joerdens

> Michael Ansley wrote:
> 
> Hi, Frank,
> 
> This is exactly why there are alternative solutions for trees.  The mechanism that you
> are using traded input speed for 'queryability', and this is where you start to run into
> problems.  Either you need to store redundant information (i.e.: the level) or you need
> to have a recursive or iterative procedure which steps back up to the top.
> 
> Just for illustration, if you were using a node type to indicate the ID of each node
> (say, something like 1, 1.1, 1.2, 1.2.1, etc.) then input into the table would be an
> expensive operation, but querying this would involve only getting out the ID of the node
> that you were at, and splitting up the ID using the periods for separators.  So if you
> wanted to traverse up the tree from node 1.2.5.3 (you know that it's at level three
> simply by counting the periods), then the order would be:
> 
> 1.2.5.3
> 1.2.5
> 1.2
> 1

It almost looks like my very first attempt to do a tree, except that I had the node number
not in one column but a column for each number:

Instead of 

1.2.5.3 
1.2.5 
1.2 
1 

I had

1|2|5|3 
1|2|5|0
1|2|0|0 
1|0|0|0


> And you only need the original node ID to do that, which is a single query.
> 
> If you are not going to be doing this query often, and/or there are not going to be many
> levels, then you should probably consider the plpgsql solution, as it makes life simple
> for the programmer, and the cost is not really an issue, and you don't have to maintain
> redundant data.  Denormalisation is a major pain when you start having to maintain it.

What exactly is the problem with it (are you referring to the scheme I proposed or to the
1.2.3-type scheme)? Actually, I don't really know now why I left my original idea. I guess
it looked too simple.

I think I will stick with my recursive parentid PL/pgSQL-trigger type schema for now. I'll
just have to get into that PL/pgSQL stuff which I haven't properly looked at so far. Would
this be a denormalized table then? The level information would be redundant but linked
with the other columns via the trigger, hence there's nothing much that could go wrong . . .
or is there a hidden maintenance penalty?



Re: [SQL] Tree structure table normalization problem (do I need a trigger?)

2000-12-19 Thread Frank Joerdens

Josh Berkus wrote:
[ . . . ]
> This is exactly why my model includes a "Level" column.

I looked at your post from a few days ago again; you did indeed explain about the level
column. I missed that somehow and had to reinvent the wheel . . .

> > This means
> > you need a loop control structure which means you have to
> > write a
> > PL/pgSQL procedure (or some other procedure) that is run
> > by a trigger to
> > update the level column on insert or update, as in
> 
> > This seems to feasible but not really as straightforward
> > as one might
> > hope. Is there an easier way?
> 
> Hmmm.  I don't know, Frank.  That strikes me as a really
> good, straightforward workaround to your problem.  I'm not
> sure what you could do that would be simpler.  This is
> practically a textbook example of why triggers are necessary
> to retain relational integrity.

Cool. And I didn't consult a textbook ;). Actually, it's even simpler than I described
above: the function that you run when the trigger fires is plain vanilla SQL with a little
subselect thrown in:

create function update_level(int4)
returns int4
as 'update index set level = (A.level + 1) from index as A
    where A.id = (select parentid from index where id = $1)
      and index.id = $1;
    select 1 as ignore_this;'
LANGUAGE 'sql';

 . . . i.e. you just get the level from the higher-up node's level plus 1, rather than
walking to the top of the tree and counting the steps. This _doesn't_ work though if you
move an entire subtree within the hierarchy to another level. Then you'd need to have a
function that walks through the entire subtree to update the level column for every single
node . . . hmmm. I'll think about it. I don't think I'll need it for the current project
since I'll only allow the moving around of end nodes.

Cheers,
Frank



[SQL] Making a foreign key chain - good idea or bad idea?

2001-01-24 Thread Frank Joerdens

I just did something which seems to work alright and which makes sense
to me now but which I have a funny feeling about. It may be good
standard practice (and I just don't know about it) or dangerously
foolish or just plain silly: I created a foreign key reference on a
column that is also the primary key for this table as in

create table institute (
id  int4 references index ( id ) PRIMARY KEY,
 . . .

and then used that column as a reference for a foreign key constraint in
a couple of other tables:

create table boss (
institute_id   int4 references institute ( id ),
 . . . 

create table staff (
institute_idint4 references institute ( id ),
 . . . 

I am not really sure what happens when I delete or modify the id column
in the table at the top of this "chain". Except for this uncertainty I
don't think this scheme would pose a problem, but I may be wrong. Can
anyone enlighten me?

Ta, Frank 
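
What happens on delete or update is governed by the referential action, and the default
(NO ACTION) simply rejects a DELETE or UPDATE of index.id while institute - and, through
it, boss and staff - still reference the value. If cascading is wanted instead it has to
be spelled out when the constraint is created; a sketch of the top of the chain only,
other columns omitted:

create table institute (
    id int4 primary key references index ( id )
        on update cascade on delete cascade
);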



[SQL] combining

2001-02-12 Thread Frank Morton

I'll really appreciate help on this if anyone will do so. I'm
used to single-table stuff, but not sure the best way to
do things with multiple tables, but here goes:

Given two tables with a common "id" field, first table columns:

id
lastname

Second table columns:

id
type
content

These two queries get me what I want from each table:

select unique id from table1 where lastname='morton';
select unique id from table2 where ((type = "pie") and (content = 'apple));

What is the best way to combine these into one query?

Thanks.

Frank
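
One way is a join (note that PostgreSQL spells it DISTINCT rather than UNIQUE); an
INTERSECT of the two id lists works too. A sketch using the tables above:

select distinct t1.id
  from table1 t1, table2 t2
 where t1.id = t2.id
   and t1.lastname = 'morton'
   and t2.type = 'pie'
   and t2.content = 'apple';

select id from table1 where lastname = 'morton'
intersect
select id from table2 where type = 'pie' and content = 'apple';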





[SQL] displaying constraints

2001-02-14 Thread Frank Joerdens

The only what that I am aware of in which you can display the
constraints that have been created for a table is by dumping out the
schema with the -s option and then trying to reconstruct the foreign key
references from the  triggers which where created by the e.g.

some_column int4 references other_table ( other_column ),

statements in the queries which created the database or table. This is
considerably harder than just looking at the sql which created the
table, as I am just figuring out the hard way. I'll be recreating my
entire database from .sql files in order to have a proper documentation
as to the database structure. I'd rather not, but it seems just plain
too difficult to reconstruct everything that I did via alter table . . .
I am not really certain whether I can exactly reverse-engineer all the
foreign keys from the triggers.

Is there a better solution to my problem?

Cheers, Frank
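
Later releases keep the constraints in the pg_constraint catalog, which lets them be
listed directly instead of reverse-engineered from triggers; a sketch for one table
(mytable is a placeholder):

SELECT conname, pg_get_constraintdef(oid)
  FROM pg_constraint
 WHERE conrelid = 'mytable'::regclass;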



Re: [SQL] two tables - foreign keys referring to each other...

2001-02-21 Thread Frank Joerdens

On Tue, Feb 20, 2001 at 11:34:30PM -0800, Stephan Szabo wrote:
> 
> You have to use ALTER TABLE to add the constraint to one of the tables.

Maybe I am stating the obvious but you should make sure that you include
the ALTER TABLE statements in the *.sql files that you use to create the
tables, rather than running them from the psql prompt. Otherwise you'll
have trouble figuring out what exactly you did when you come back to
the database later, as foreign keys don't show up as 'foreign keys' in
schema dumps, but as a set of <unnamed> constraint triggers. Those are quite hard
to read, or interpret as what they essentially are, i.e. foreign keys
(depending on your philosophical outlook, that is, whether you consider
the essence of your foreign keys to be a set of triggers, or vice versa
;-)).

Regards, Frank



Re: [SQL] logging a psql script

2001-02-21 Thread Frank Joerdens

On Wed, Feb 21, 2001 at 04:51:00PM -0500, Ken Kline wrote:
> Hello,
>I would like my psql script to log everything that it does.
> I set the following
> 
> \set ECHO all
> \o foo.txt
> \qecho
> 
> some sql, some ddl, etc...
> 
> \o
> 
> 
> But foo.txt only contains
> 
> DROP
> DROP
> DROP
> CREATE
> CREATE
> CREATE
> 
> I want it to contain everything that I see on the screen, what am I
> missing?

Dunno how you do it via \o; what I do is run the postmaster with the
-d 2 option and then log everything to syslogd. This will log every query
in detail. It's very convenient while you're developing and testing,
especially if you run a separate window with

tail -f /wherever/you/write/your/postgres.log

Regards, Frank
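
Another route that captures roughly what appears on screen is to run the script
non-interactively and redirect both output streams; -a echoes every input line and -e
every query sent (myscript.sql and mydb are placeholders):

psql -a -e -f myscript.sql mydb > foo.txt 2>&1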



[SQL] Weird NOT IN effect with NULL values

2001-03-01 Thread Frank Joerdens

When doing a subselect with NOT IN, as in

SELECT name
FROM customer
WHERE customer_id NOT IN (
SELECT customer_id
FROM salesorder
);

(from Bruce Momjian's book)

I get no rows if the result column returned by the subselect
contains NULL values. It works as expected if I remove the NULL values
from the result set. Is this behaviour correct and if so, why?

I am using 7.1 beta 4.

Regards, Frank
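
The behaviour is correct: if the subselect's result contains a NULL, then for any
customer_id that matches no non-null element the NOT IN test evaluates to unknown rather
than true, so no row qualifies. A NOT EXISTS form over the same tables sidesteps this:

SELECT name
  FROM customer c
 WHERE NOT EXISTS (
     SELECT 1 FROM salesorder s
      WHERE s.customer_id = c.customer_id
 );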



[SQL] How to drop an trigger

2001-03-08 Thread Frank Joerdens

I've created quite a few foreign key constraints in the database that I
am currently working on, and now that I've altered the structure and
dropped a table that had a foreign key reference to a couple of other
tables, I need to get rid of those foreign keys (they weren't dropped
automagically with the table), as I get errors on trying to update those
tables.

Trouble is that the foreign keys show up in a schema dump as <unnamed>
constraint triggers (AFAIK there is no other way to display foreign key
constraints) which I don't know how to drop. Here's an example:

\connect - frank
--
-- TOC Entry ID 56 (OID 52367)
--
-- Name: "RI_ConstraintTrigger_52366" Type: TRIGGER Owner: frank
--

CREATE CONSTRAINT TRIGGER "" AFTER DELETE ON "index"  NOT
DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE
"RI_FKey_cascade_del" ('', 'legende', 'index', 'UNSPECIFIED',
'platz', 'id');

I tried dropping it with 

=# drop trigger RI_ConstraintTrigger_52366 on index;

which fails with

ERROR:  DropTrigger: there is no trigger ri_constrainttrigger_52366 on
relation index

What to do? And more broadly, what's the recommended way to deal with
this in general? Avoid creating <unnamed> triggers by always creating named
foreign keys with something like

ALTER TABLE distributors ADD CONSTRAINT distfk FOREIGN KEY (address)
REFERENCES addresses(address) MATCH FULL;

(from Bruce's book)?

Regards, Frank
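
The DROP TRIGGER failure above is just identifier case folding: unquoted names are
lowercased, and the trigger was created with a mixed-case name, so it has to be
double-quoted (quoting the table does no harm either):

drop trigger "RI_ConstraintTrigger_52366" on "index";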




[SQL] Re: How to drop an trigger

2001-03-08 Thread Frank Joerdens

On Thu, Mar 08, 2001 at 04:42:46PM +0100, Frank Joerdens wrote:
> I've created quite a few foreign key constraints in the database that I
> am currently working on, and now that I've altered the structure and
> dropped a table that had a foreign key reference to a couple of other
> tables, I need to get rid of those foreign keys (they weren't dropped
> automagically with the table), as I get errors on trying to update those
> tables.

Just an idea: Is it safe to just delete the corresponding row in
pg_trigger?

Regards, Frank




Re: [SQL] Re: How to drop an trigger

2001-03-08 Thread Frank Joerdens

On Thu, Mar 08, 2001 at 05:06:29PM +0100, Frank Joerdens wrote:
> On Thu, Mar 08, 2001 at 04:42:46PM +0100, Frank Joerdens wrote:
> > I've created quite a few foreign key constraints in the database that I
> > am currently working on, and now that I've altered the structure and
> > dropped a table that had a foreign key reference to a couple of other
> > tables, I need to get rid of those foreign keys (they weren't dropped
> > automagically with the table), as I get errors on trying to update those
> > tables.
> 
> Just an idea: Is it safe to just delete the corresponding row in
> pg_trigger?

No, it ain't: After deleting the rows corresponding to the offending
triggers in pg_trigger, I can't vacuum, or dump.

- Frank




Re: [SQL] Max Conncection limit?

2001-03-12 Thread Frank Joerdens

On Tue, Mar 13, 2001 at 11:33:10AM -0500, Najm Hashmi wrote:
> We  start a server and initiate 30 connections(with the libpq C interface)
> which are  stored in a stack to  use and  and to be reused.
> 
> After awhile I try to start another server that will also try to initiate 30
> connections, for each connection I then get a PQErrorMessage like that :
> "Sorry, too many connections"
> 
> if I do a ps -ax at that time I get a lot of /usr/bin/postgres 192.168.0.1
> user name passwd   idle
> 
> what I usually do is restart postgres but I wish I didnt have to do that...
> That is not an elegant solutuion any ways.
> What can I do? Thanks in advance. Regards

Have a look at the runtime-config.html in the admin section of the
manual:

MAX_CONNECTIONS (integer)

Determines how many concurrent connections the database server will
allow. The default is 32. There is also a compiled-in hard upper limit
on this value, which is typically 1024 (both numbers can be altered when
compiling the server). This parameter can only be set at server start. 

Regards, Frank
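
To raise the limit, max_connections is set in postgresql.conf (for example
max_connections = 100) and the server restarted; the current value can be checked
from SQL:

SHOW max_connections;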




Re: [SQL] Order by email address by domain ?

2001-05-10 Thread Frank Bax

Why is everyone else suggesting new functions?  This works (in 6.5.3):

   ORDER BY lower(substring(email from position('@' in email)+1 )),
lower(email)

remove the lower() functions if you don't need them (I had mixed case
addresses).

I am guessing/assuming that it's cheaper to just use entire email address
in second key rather than extract before the '@' character.

Frank
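
On later releases split_part() expresses the same idea a little more readably:

   ORDER BY lower(split_part(email, '@', 2)), lower(email)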

At 08:37 PM 5/10/01 +0200, you wrote:
>Hi,
>
>I just want to order by a listing of email address by domain like :
>
>[EMAIL PROTECTED]
>[EMAIL PROTECTED]
>[EMAIL PROTECTED]
>[EMAIL PROTECTED]
>[EMAIL PROTECTED]
>
>Is it possible and how ?
>
>Thanks !
>-- 
>Hervé Piedvache
>
>Elma Ingenierie Informatique
>6, rue du Faubourg Saint-Honoré
>F-75008 - Paris - France 
>http://www.elma.fr
>Tel: +33-1-44949901
>Fax: +33-1-44949902 
>Email: [EMAIL PROTECTED]
>
>---(end of broadcast)---
>TIP 6: Have you searched our list archives?
>
>http://www.postgresql.org/search.mpl
>




[SQL] Re: START for SERIAL type?

2001-06-05 Thread Frank Contrepois

read the docs, section "data types"!
serial just creates a sequence in a standard way; if you don't like it, you'll
have to create your own sequence.
bye
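
A sketch of the two usual ways to control the starting value (table and sequence names
made up):

CREATE SEQUENCE mytable_id_seq START 1000;
CREATE TABLE mytable (
    id   int4 DEFAULT nextval('mytable_id_seq') PRIMARY KEY,
    name varchar
);

-- or, if the column was declared SERIAL, reposition the sequence it created
-- (it gets the same name by convention):
SELECT setval('mytable_id_seq', 999);   -- next nextval() returns 1000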







[SQL] Re: INSERT value of type table

2001-06-05 Thread Frank Contrepois

> exemple:
> CRETAE TABLE Address (Number integer, Street varchar, city varchar, contry
> varchar);
> CREATE TABLE Person (Name varchar, addr Address);
> INSERT INTO addrerss VALUES ( 5, 'rue du pont',  'Lyon', 'France');
> The question is :
> How we can insert a new person whose address is that inserted in the
> table of the addresses.

CREATE TABLE Address (Number integer PRIMARY KEY, Street varchar, City
varchar, Country varchar);
CREATE TABLE Person (Name varchar, addr integer REFERENCES Address (Number));

this is called a FOREIGN KEY... look at the docs for it. this is the way
relational DBs work
bye
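
To finish the example, the person row just stores the address key, and a join puts the
pieces back together (the person's name is made up):

INSERT INTO Address VALUES (5, 'rue du pont', 'Lyon', 'France');
INSERT INTO Person (Name, addr) VALUES ('Durand', 5);

SELECT p.Name, a.Street, a.City, a.Country
  FROM Person p JOIN Address a ON a.Number = p.addr;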







[SQL] Re: Is it possible to defer triggers?

2001-06-05 Thread Frank Contrepois

read this:
http://www.postgresql.org/idocs/index.php?sql-createtrigger.html

bye







[SQL] Re: Slow inserts/deletes

2001-06-05 Thread Frank Contrepois

If you use indexes on this table it's better to:
1) drop all indexes on the table
2) do the inserts
3) (re)create the indexes









Re: [SQL] VARCHAR vs TEXT

2001-10-16 Thread Frank Bax

Perhaps 'limits' should be part of FAQ, not separate entity?

Also a reference (or link) to 'limits' from other sections such as
mentioned below may be more appropriate than duplicating the information.

Frank

At 08:56 AM 10/16/01 -0400, you wrote:
>> >>>>> "BM" == Bruce Momjian <[EMAIL PROTECTED]> writes:
>> 
>> BM> CHAR() is best when storing strings that are usually the
>> BM> same length.  VARCHAR() is best when storing variable-length
strings,
>> BM> but you want to limit how long a string can be.  TEXT is for
strings
>> BM> of unlimited length, maximum 1 gigabyte.  BYTEA is for storing
>> BM> binary data, particularly values that include NULL bytes.
>> 
>> Could you add the length limitation for TEXT to the reference manual?
>> I searched high and low for that limit, but never found it.  Also,
>> what's the max VARCHAR() or CHAR() I can create?  Is that also 1Gb?
>
>TEXT limit is 1GB, as shown on the 'limits' FAQ item.  Is it worth
>mentioning here?  CHAR()/VARCHAR() also 1GB limit.
>
>-- 
>  Bruce Momjian|  http://candle.pha.pa.us
>  [EMAIL PROTECTED]   |  (610) 853-3000
>  +  If your life is a hard drive, |  830 Blythe Avenue
>  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026
>
>---(end of broadcast)---
>TIP 4: Don't 'kill -9' the postmaster
>




[SQL] Text/Image, JSP tomcat. How can I operate the text and image type field in Postgresql? only in java/jsp

2001-10-16 Thread Frank Zhu

I want to store a long article in PostgreSQL on Linux. How can I put the
content into it and read it back out for display? Urgent, thanks.
I use JSP.
I note that all database systems are very awkward with BLOBs. Why?

Frank Zhu.






Re: [SQL] VARCHAR vs TEXT

2001-10-16 Thread Frank Zhu

I want to store a long article in PostgreSQL on Linux. How can I put the
content into it and get it back out for display? Urgent, thanks.
I use JSP.
I note that all database systems are very awkward with BLOBs, but we need them to
make a good system. At least, the documentation/FAQ should have more on it.
thanks

Frank Zhu.

"Bruce Momjian" <[EMAIL PROTECTED]> дÈëÏûÏ¢ÐÂÎÅ
:[EMAIL PROTECTED]
> > On Wed, 10 Oct 2001, Aasmund Midttun Godal wrote:
> >
> > > I am sure this question has been answered in some form or another
> > > before, but I can't really find anything on exactly this issue.
> > >
> > > Are there any differences between varchar and text other than
> > >
> > > 1. varchar has limited size
> > > 2. varchar is SQL 92 text is not?
> > >
> > > Especially regarding performance.
> > >
> > > Or am I correct to assume that if you need a place to store some text,
> > > and you are not sure how much (like an email address or a name) you
> > > are best off using text?
> >
> > Pretty much yes.  text and varchar are pretty equivalent other than
> > the fact that varchar specifies a maximum size.
>
> I have added the following paragraph to the FAQ:
>
> CHAR() is best when storing strings that are usually the
> same length.  VARCHAR() is best when storing variable-length strings,
> but you want to limit how long a string can be.  TEXT is for strings
> of unlimited length, maximum 1 gigabyte.  BYTEA is for storing
> binary data, particularly values that include NULL bytes.
>
> --
>   Bruce Momjian|  http://candle.pha.pa.us
>   [EMAIL PROTECTED]   |  (610) 853-3000
>   +  If your life is a hard drive, |  830 Blythe Avenue
>   +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026
>
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
>
> http://archives.postgresql.org






[SQL] join question

2002-10-18 Thread Frank Morton



For the SQL gurus, a query where I'm not getting the expected results. Trying to write
it using sql compatible with both postgres and mysql.

There are two tables:

table = profile
    int id
    char name

table = attribute
    int id
    int containerId
    char name
    char value

Multiple attribute rows correspond to a single profile row where
attribute.containerId = profile.id.

These two queries each return one row, which is the expected result:

select name from profile where ((profile.state='1020811'));

select profile.name from profile,attribute where (((attribute.name='description') and
(attribute.value='1020704') and (profile.id=attribute.containerId)));

But this next query, which I thought would just be a simple way to combine the two
queries with an "or" operator and still return one row, actually returns
ALL rows of attribute:

select profile.name from profile,attribute where ((profile.state='1020811') or
((attribute.name='marketsegment') and (attribute.value='1020704') and
(profile.id=attribute.containerId)));

Why doesn't this last query return just one row?
 
TIA
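
The OR makes the join condition optional: whenever profile.state = '1020811' is true,
that profile pairs with every attribute row, hence the flood of results. One portable
way to combine the two working queries is a UNION of them as written:

select name from profile
 where profile.state = '1020811'
union
select profile.name from profile, attribute
 where attribute.name = 'marketsegment'
   and attribute.value = '1020704'
   and profile.id = attribute.containerId;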
 


Re: [SQL] how can I improve the speed of this query

2002-11-19 Thread Frank Bax
Problem could be the SeqScan on visitor (very last line of explain).  This 
appears to be coming from the last join in your query.  Does an index on ID 
exist on Visitor table?

Does changing:
WHERE "VisitorExtra"."ID"="Visitor"."ID" AND
"VisitorExtra"."ID"= 325903;
to
WHERE "VisitorExtra"."ID"="Visitor"."ID" AND
"Visitor"."ID"= 325903;

have any effect?

Why do you use "From VisitorExtra" in first select when no result fields 
are in the table?  Why not replace all references to "VisitorExtra"."ID" to 
"Visitor"."ID"?

Frank



At 03:33 PM 11/15/02, Peter T. Brown wrote:

Hi--

I have this rather long complex query that takes really long to complete
(please see below). It seems like I ought to improve the speed somehow.
I don't understand, for example, what the query planner is doing when it
says "Hash" and why this appears to take so long. And since I have a key
for Visitor.ID, I don't understand why its doing a sequential scan on
that table...

Any advice would be greatly appreciated!


Thanks

Peter




EXPLAIN SELECT
"Visitor"."Created",
"Visitor"."Updated",
"Tidbit"."ID",
"ProgramEvent"."ID",
"Visitor"."Email",
"Interest"."ID",
"VisitorInternetDeviceAssoc"."ID",
"Referral"."ID"

FROM "VisitorExtra"

LEFT OUTER JOIN Tidbit" ON
"VisitorExtra"."ID"="Tidbit"."VisitorID"

LEFT OUTER JOIN "ProgramEvent" ON
"VisitorExtra"."ID"="ProgramEvent"."VisitorID"

LEFT OUTER JOIN "Interest" ON
"VisitorExtra"."ID"="Interest"."VisitorID"

LEFT OUTER JOIN "VisitorInternetDeviceAssoc" ON
"VisitorExtra"."ID"="VisitorInternetDeviceAssoc"."VisitorID"

LEFT OUTER JOIN "Referral" ON
"VisitorExtra"."ID"="Referral"."FromVisitorID","Visitor"

WHERE "VisitorExtra"."ID"="Visitor"."ID" AND
"VisitorExtra"."ID"= 325903;


NOTICE:  QUERY PLAN:

Hash Join  (cost=14584.37..59037.79 rows=57747 width=76)
  ->  Merge Join  (cost=0.00..36732.65 rows=57747 width=44)
        ->  Merge Join  (cost=0.00..29178.16 rows=10681 width=36)
              ->  Nested Loop  (cost=0.00..10505.74 rows=6674 width=28)
                    ->  Nested Loop  (cost=0.00..435.29 rows=177 width=20)
                          ->  Nested Loop  (cost=0.00..15.70 rows=55 width=12)
                                ->  Index Scan using VisitorExtra_pkey on VisitorExtra  (cost=0.00..3.01 rows=1 width=4)
                                ->  Index Scan using Tidbit_VisitorID_key on Tidbit  (cost=0.00..12.67 rows=2 width=8)
                          ->  Index Scan using ProgramEvent_VisitorID_key on ProgramEvent  (cost=0.00..7.57 rows=2 width=8)
                    ->  Index Scan using Interest_VisitorID_key on Interest  (cost=0.00..56.66 rows=19 width=8)
              ->  Index Scan using VisitorInternetDeviceAssoc_Visi on VisitorInternetDeviceAssoc  (cost=0.00..16402.90 rows=174887 width=8)
        ->  Index Scan using Referral_FromVisitorID_key on Referral  (cost=0.00..6323.41 rows=87806 width=8)
  ->  Hash  (cost=6061.79..6061.79 rows=317379 width=32)
        ->  Seq Scan on Visitor  (cost=0.00..6061.79 rows=317379 width=32)






Re: [SQL] Backup to data base how ?

2002-12-11 Thread Frank Bax
At 10:36 PM 12/11/02, ksql wrote:

This is vital for me; please send me an example
of how I can back up my database named WAREHOUSE



pg_dump -o -c warehouse  |  gzip  >  warehouse.gz

http://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/backup.html
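
Assuming the database already exists (the -c in the dump makes it drop and re-create
the objects inside), the matching restore is simply:

gunzip -c warehouse.gz  |  psql warehouse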




Re: [SQL] union query doubt:

2002-12-11 Thread Frank Bax
At 11:21 AM 12/11/02, javier garcia wrote:

I've got a table with three fields: DATE, POINT, FLOW. The POINT field can
have values among 1 and 6. So, for a same date I have six different points
with the correspondings flows.
I would like to make a query to obtain something like:
DATE POINT1 POINT2 POINT3 POINT4 POINT5 POINT6

where for a date I have the flows data of the different points.



SELECT date,
case when point = 1 then flow else 0 end as flow1,
case when point = 2 then flow else 0 end as flow2,
case when point = 3 then flow else 0 end as flow3,
case when point = 4 then flow else 0 end as flow4,
case when point = 5 then flow else 0 end as flow5,
case when point = 6 then flow else 0 end as flow6
from samples


There have been several messages recently about this - search on crosstab 
or pivot - a couple of other options were presented.

Frank
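
If a single row per date is wanted (one column per point), the same CASE expressions
can be wrapped in an aggregate and grouped:

SELECT date,
       sum(case when point = 1 then flow else 0 end) as flow1,
       sum(case when point = 2 then flow else 0 end) as flow2,
       sum(case when point = 3 then flow else 0 end) as flow3,
       sum(case when point = 4 then flow else 0 end) as flow4,
       sum(case when point = 5 then flow else 0 end) as flow5,
       sum(case when point = 6 then flow else 0 end) as flow6
  FROM samples
 GROUP BY date
 ORDER BY date;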



Re: [SQL] convert from an integer to a date

2003-02-17 Thread Frank Bax
At 04:33 PM 2/14/03, [EMAIL PROTECTED] wrote:

When I run select to_date('20030212','YYYYMMDD') the output is 2/12/03
if I run select to_date( to_char(20030212,),'YYYYMMDD'); the
output is 6/23/05

How can I convert from integer into date format correctly?


Same as the first one, except leave out the quotes.
select to_date(20030212,'YYYYMMDD');
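
On versions that refuse an integer argument to to_date(), an explicit cast to text does
the same job:

select to_date(20030212::text, 'YYYYMMDD');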





Re: [SQL] alter column witdh

2003-05-27 Thread Frank Bax
At 10:38 AM 5/27/03, Eric Anderson Vianet SAO wrote:
>how can I alter the column width
http://techdocs.postgresql.org/techdocs/updatingcolumns.php

fbax=# create table tablename (columnname text);
CREATE
fbax=# \d tablename
  Table "tablename"
 Attribute  | Type | Modifier
+--+--
 columnname | text |
fbax=# \q

pg_dump -o -c -s -t tablename fbax | sed "s:\"columnname\" 
text:\"columnname\" varchar(2):" | psql

Frank 
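
On 8.0 and later the dump/sed/restore dance is no longer needed; the type can be changed
in place:

ALTER TABLE tablename ALTER COLUMN columnname TYPE varchar(2);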



Re: [SQL] Creating Views with Column Names based on Distinct

2003-06-09 Thread Frank Bax
At 10:59 AM 6/6/03, Damien Dougan wrote:
I was wondering if it is possible to create a table view based on a table
which is effectively an "attribute list".
For example, suppose I have two tables:

CREATE TABLE user
(
  userid integer,
  username character varying,
  userpassword character varying,
  startdate date
);
CREATE TABLE userdetail
(
  userid integer,
  attributename character varying,
  attributevalue character varying
);
Now I want to make a public view of the user, which would have all of the
defined fields in user, and all of the defined attributes across userdetail.


I'll think you'll find what you're looking for if you search the archives 
of this mailing list for 'crosstab'.  
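
For a fixed, known set of attribute names the crosstab can also be written by hand with
CASE; 'phone' and 'city' below are made-up attribute names, and "user" needs quoting
because it is a reserved word:

CREATE VIEW publicuser AS
SELECT u.userid, u.username, u.userpassword, u.startdate,
       max(CASE WHEN d.attributename = 'phone' THEN d.attributevalue END) AS phone,
       max(CASE WHEN d.attributename = 'city'  THEN d.attributevalue END) AS city
  FROM "user" u
  LEFT JOIN userdetail d ON d.userid = u.userid
 GROUP BY u.userid, u.username, u.userpassword, u.startdate;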



Re: [SQL] Functional Indexes

2003-07-15 Thread Frank Bax
The two functions do not group data the same way, so a common index is not 
possible.
urlhost would put pgsql.org and pgsql.com close together.
urltld would but pgsql.com and xyz.com close together.

Frank
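
So it ends up as two functional indexes after all, which is not as silly as it sounds;
a sketch against a hypothetical urls(url) table (the functions have to be marked
cachable/immutable for this to be accepted):

CREATE INDEX urls_host_idx ON urls (urlhost(url));
CREATE INDEX urls_tld_idx  ON urls (urltld(url));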

At 01:36 PM 7/15/03, David Olbersen wrote:

Hello all,

I have a function (urlhost) which finds the 'host' portion of a URL. In 
the case of http://www.foobar.com/really/long/path/to/a/file it returns 
"www.foobar.com".

I also have a function (urltld) which returns the TLD of a URL. In the 
case of http://www.foobar.com/really/long/path/to/a/file it returns ".com" 
(the leading dot is OK).

urltld uses urlhost to do it's job (how should be apparent).

Now the question: is there a single index I can create that will be used 
when my  WHERE clause contains either urlhost or urltld? I could create 
two functional indexes, but that seems a bit silly to me.

Any thoughts?

--
David Olbersen
iGuard Engineer
St. Bernard Software
11415 West Bernardo Court
San Diego, CA 92127
1-858-676-2277 x2152




[SQL] strange "order by" request

2003-07-23 Thread Frank Bax
Two tables - employee and timesheet - simple enough.  Each table is more
complex than in this example.  timesheet contains multiple rows per day
because the "other" fields differ between rows for the same
day/employee.  My selection criteria actually use some of these other
fields.  lo_shift and hi_shift are always on the same date for a given row.

In the output, I *must* have all rows for an employee together, the 
sequence of employees should be based on lowest value of "lo_shift" for 
each employee.  How do I code such a select?

Here are some commands to create tables & sample data:
CREATE TABLE "employee" (
"emp" character(6) NOT NULL,
"last" character varying(64),
"first" character varying(64)
);
CREATE TABLE "timesheet" (
"emp" character(6) NOT NULL,
"dept" character(2) NOT NULL,
"lo_shift" timestamp,
"hi_shift" timestamp
);
insert into employee(emp,last,first) values('091006','Clarke','Cynthia');
insert into employee(emp,last,first) values('096005','Mullins','Denise');
insert into employee(emp,last,first) values('089068','Johnson','Michelle');
insert into employee(emp,last,first) values('098036','Zandstra','Nicole');
insert into timesheet(emp,dept,lo_shift,hi_shift) 
values('091006','10','2003-07-17 00:00','2003-07-17 07:59');
insert into timesheet(emp,dept,lo_shift,hi_shift) 
values('091006','10','2003-07-17 08:00','2003-07-17 11:59');
insert into timesheet(emp,dept,lo_shift,hi_shift) 
values('096005','10','2003-07-17 07:30','2003-07-17 08:59');
insert into timesheet(emp,dept,lo_shift,hi_shift) 
values('096005','10','2003-07-17 09:00','2003-07-17 17:59');
insert into timesheet(emp,dept,lo_shift,hi_shift) 
values('098036','10','2003-07-17 13:30','2003-07-17 19:29');
insert into timesheet(emp,dept,lo_shift,hi_shift) 
values('098036','10','2003-07-17 19:30','2003-07-17 21:29');
insert into timesheet(emp,dept,lo_shift,hi_shift) 
values('089068','10','2003-07-17 14:00','2003-07-17 17:59');
insert into timesheet(emp,dept,lo_shift,hi_shift) 
values('089068','10','2003-07-17 18:00','2003-07-17 21:59');
insert into timesheet(emp,dept,lo_shift,hi_shift) 
values('32','90','2003-07-18 18:00','2003-07-17 23:59');

SELECT emp.emp, emp.last, emp.first, ts.lo_shift, ts.hi_shift
  FROM timesheet ts, employee emp WHERE ts.emp = emp.emp
   AND ts.dept='10' AND ts.lo_shift::date = '2003-07-17'
 ORDER BY emp.first, emp.last, emp.emp, ts.lo_shift, ts.hi_shift;
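
One way to get that ordering is to join in each employee's earliest shift for the day
and sort by it first, with emp as a tie-breaker so an employee's rows stay together:

SELECT emp.emp, emp.last, emp.first, ts.lo_shift, ts.hi_shift
  FROM timesheet ts
  JOIN employee emp ON ts.emp = emp.emp
  JOIN (SELECT emp, min(lo_shift) AS first_shift
          FROM timesheet
         WHERE dept = '10' AND lo_shift::date = '2003-07-17'
         GROUP BY emp) firsts ON firsts.emp = ts.emp
 WHERE ts.dept = '10' AND ts.lo_shift::date = '2003-07-17'
 ORDER BY firsts.first_shift, emp.emp, ts.lo_shift, ts.hi_shift;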


[SQL] conditional query?

2003-10-31 Thread Frank Morton
I have a table called a "profile" that has company addresses as well
as individual contact information. Simplifying:
id (the primary key)
parentId (any profile can have a parent ie, contact parent is a company)
address (for this example, will just have one piece of the address)
useParentAddress
If "useParentAddress"=="Y", that means that the parent address of this
person should really be used for mailings. If == "N" then the address
with that profile is the right one to use.
Is there any way to do a single select to get a single "address" back
that is the right one depending on the value of "useParentAddress"
field?
Also want to make this sql as portable as possible.

Will appreciate any ideas.
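
A CASE over a self-join keeps it in one portable statement; profiles without a parent
still come through thanks to the LEFT JOIN (column names as given above):

SELECT c.id,
       CASE WHEN c.useParentAddress = 'Y' THEN p.address
            ELSE c.address END AS mailing_address
  FROM profile c
  LEFT JOIN profile p ON p.id = c.parentId;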



Re: [SQL] flat file database to postgres

2004-03-16 Thread Frank Finner
On Tue, 16 Mar 2004 12:33:57 - (GMT) [EMAIL PROTECTED] sat down, thought long and 
then wrote:

> 
> Hi
> 
> Does anybody know how to export data from a flatfile (using perl) database
> to postgres?
> 
> Thanks

If it is some kind of CSV (text with separators) you could use DBD::CSV for reading
and DBD::Pg for writing. Or you read the file with Perl's standard functions, treat
the data to be PostgreSQL-insert-query-compatible (regarding apostrophes, for example)
and then use DBD::Pg (that's what I usually do). If you have some kind of DBase files,
you can use DBD::XBase.

Regards, Frank.



Re: [SQL] working with unix timestamp

2004-03-16 Thread Frank Finner
On Tue, 16 Mar 2004 16:54:18 + Gary Stainburn <[EMAIL PROTECTED]> sat down, thought
long and then wrote:

> Hi folks.
> 
> I've got a last_updated field on my stock records of type timestamp.
> 
> This last_updated field I get using the perl code:
> 
> my $timestamp=(stat "$localcsv/VehicleStock.$data_suffix")[10];
> 
> How can I insert the  integer timestamp in $timestamp into my table?


I usually use somethinge like the following little function for getting an ISO 
timestamp. The
result is suitable for a PostgreSQL timestamp field (without special timezone).

# Subroutine for ISO-Timestamp
sub mydatetime
  {
my ($time)=@_;
my ($sec,$min,$hou,$mday,$mon,$yea,$wday,$jday,$sz)=localtime($time);
if ($sec < 10) {$sec="0".$sec;}
if ($min < 10) {$min="0".$min;}
if ($hou < 10) {$hou="0".$hou;}
if ($mday < 10) {$mday="0".$mday;}
$mon++;
if ($mon < 10) {$mon="0".$mon;}
    $yea=$yea+1900;
my $t=$yea."-".$mon."-".$mday." ".$hou.":".$min.":".$sec;
return $t;
  }

Regards, Frank.
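
If the conversion is to happen in the database instead of in Perl, an epoch value can be
fed straight into a timestamp expression; 1079452458 below is just an arbitrary example
value standing in for $timestamp, and the same expression can be used in the VALUES list
of an INSERT or in an UPDATE:

SELECT TIMESTAMP 'epoch' + 1079452458 * INTERVAL '1 second';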



Re: [SQL] isnumeric() function?

2004-04-30 Thread Frank Bax
At 11:29 AM 4/30/04, Yudie wrote:
Great the function works, but what does it means?
SELECT $1 ~ ''^[0-9]+$''
Yudie

The ~ is a pattern matching operator.
^ matches beginning of string
[0-9] matches any numeric digit 0 thru 9.
+ matches one or more occurrences of what came before (digits in 
this case)
$ matches end of string
The ^ and $ are important - if they were left out, the pattern would match 
a string containing both numeric and non-numeric data.

You can change the + to * if you decide that an empty string should be 
considered numeric.

Frank 
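
Wrapped up as a function, roughly as the original poster presumably had it:

CREATE FUNCTION isnumeric(text) RETURNS boolean AS
    'SELECT $1 ~ ''^[0-9]+$'' '
LANGUAGE 'sql';

SELECT isnumeric('12345');   -- true
SELECT isnumeric('123a5');   -- false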



Re: [SQL] Last day of month

2004-06-08 Thread Frank Bax
At 11:30 PM 2/25/04, Joe Conway wrote:
Greg Sabino Mullane wrote:
How to find the last sunday/mon/sat of any given month.
There is probably a smoother way to do it, but here is a
quick little function to do what you ask. Feed it a date
and a number, where 0 is Sunday, 1 is Monday, etc.
oops...forget my last reply...I was a bit too quick on the draw. Try this 
instead:

regression=# select date_trunc('month', current_date + '1 
month'::interval) - '1 day'::interval;
  ?column?
-
 2004-02-29 00:00:00
(1 row)

Joe

But the original request was for a specific day-of-week.  So use Joe's 
answer above to get last day of month, and use 'dow' to determine the 
day-of-week of that day.  Let's call that dow1.  If the day-of-week being 
asked for is dow2 then:
if dow1 < dow2
return (last-day-of-month - dow1 - 7 + dow2)
else
return (last-day-of-month - dow1 + dow2)
I'm no good at coding pgsql functions, so I'm not going to attempt proper 
syntax.

Frank 
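
For what it's worth, that arithmetic folds into a single modulo; last_dow_of_month and
its day-numbering convention (0 = Sunday ... 6 = Saturday, matching extract(dow)) are
inventions of this sketch:

CREATE FUNCTION last_dow_of_month(date, integer) RETURNS date AS '
    -- d is the last day of the month containing $1; step back to the requested weekday
    SELECT (d - ((extract(dow FROM d)::integer - $2 + 7) % 7) * interval ''1 day'')::date
      FROM (SELECT date_trunc(''month'', $1 + interval ''1 month'') - interval ''1 day'' AS d) AS last_day;
' LANGUAGE 'sql';

SELECT last_dow_of_month('2004-02-01', 0);   -- last Sunday of February 2004: 2004-02-29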



Re: [SQL] Normalising an existing table - how?

2004-06-24 Thread Frank Bax
At 02:51 PM 6/24/04, Graham Leggett wrote:
I have an existing table containing a column in it containing a money 
value. I would like to normalise this column into a separate table, as the 
money value is to be expanded to contain a tax value, etc.

I have been trying to find a SQL query that will do the following:
- Select the money column from the table
- Populate the new normalised table with each row containing the value 
from the original money column
- Write the primary keys of the new rows in the normalised table, back to 
a new column in the original table added for this purpose.

This third step I am struggling with - can anyone suggest a query that 
might achieve the writing back of the primary key to the original table?

Do all three steps in one command:
create table newtable as (select key1, key2, money from oldtable);
Frank  



Re: [SQL] date_format in postresql

2004-07-21 Thread Frank Bax
Check your version with:
select version();
In the meantime, you could try
date_part('epoch', submittime)
which will return a unix timestamp, you could then use functions within 
your scripting language to convert to human readable date formats.

Frank
At 11:53 PM 7/20/04, azah azah wrote:
Thanks Chris and Rod.
I think I have a problem because not many function existing in
postresql that i installed.
How can i get all the build-in functions because the basic function
to_char is not existing in the database?
On Tue, 20 Jul 2004 22:39:38 -0400, Rod Taylor <[EMAIL PROTECTED]> wrote:
> On Tue, 2004-07-20 at 20:55, azah azah wrote:
> > Hi,
> > I want convert from mysql to postresql,
> > in mysql, query to database using the code as below:
> >
> > date_format(submittime, "%W %M %e, %Y - %r")
>
> to_char(submittime, 'format string')
>
> http://www.postgresql.org/docs/7.4/static/functions-formatting.html
>



Re: [SQL] LIKE '%%' does not return NULL

2004-08-15 Thread Frank Finner
Hi,

you should additionally check with IS NULL:

SELECT * FROM mytable where myfield like '%%' OR myfield IS NULL

if you want to get fields containing NULL as well. A field containing "NULL" in
fact contains nothing, not even an empty string, so you cannot catch it with
"%".

BTW, this holds true especially for booleans: They can contain the values "true"
or "false" or no value at all, which means, they contain "NULL" and are in fact
undefined. You won't catch them with something like "SELECT * FROM bla where
blubb <> false", you will only get the fields containing true, not the NULLs.

Regards, Frank.


On Wed, 11 Aug 2004 13:22:00 +1200 "Traci Sumpter" <[EMAIL PROTECTED]>
sat down, thought long and then wrote:

> A team developer has chosen the lazy way of not checking if a variable
> exists on his PHP page and has code which produces the following SQL
> 
> SELECT * FROM mytable where myfield ilike '%%'
> 
> I have noticed that this statement does not return null or empty myfield
> records.
> 
> Is this the way (SQL) to do this??
> 
> Is there a better syntax to the SQL??
> 
> Is the better way to create the statement in PHP is to check if the passed
> value <> '' ??
> 
> Or is this issue being fixed in the new V8 version of postgreSQL.
> 
> 





Re: [SQL] 'show databases' in psql way?

2004-11-01 Thread Frank Bax
At 07:13 AM 11/1/04, Erik Wasser wrote:
how can I list the databases in a postgresish way? I know about the '-l'
switch of 'psql' but is there a DBI/SQL-query way? I don't want to call
an external program only to list the databases. I've googled about this
problem but I only found the '-l'-way to this this.

The -E switch of psql will display the queries behind backslash commands.  So start psql
with that option, then use \l as a command instead of the -l switch.

Frank
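
The underlying catalog can also be queried directly over any connection, which is the
pure DBI/SQL route; drop the WHERE clause to see the template databases too:

SELECT datname FROM pg_database WHERE NOT datistemplate ORDER BY datname;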


Re: [SQL] Cast NULL into Timestamp?

2004-12-11 Thread Frank Bax
At 12:11 AM 12/11/04, Josh Berkus wrote:
Wei,
> insert into table temp (tempname, tempdate)
> select distinct 'tempname', null from some_other_relevant_table;
I don't think you're reporting the error exactly as it happened.  Try cutting
and pasting your actual PSQL session into your e-mail.
Perhaps you are mixing up the column order?

A copy/paste certainly would have been helpful in this case.  There are 
several problems with the above query.
1) the keyword 'table' is not part of insert command.
2) If 'tempname' is a fieldname, it should not have single quotes.
3) NULL doesn't seem to work as expression in select.

If (3) is the real problem here, then either solution proposed by 
Steve/Stephan will work. 
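
If (3) is what bit you, the usual fix is to give the bare NULL an explicit type, since
the SELECT list cannot infer one for it; a sketch against the corrected statement:

insert into temp (tempname, tempdate)
select distinct 'tempname', NULL::timestamp
  from some_other_relevant_table;

(CAST(NULL AS timestamp) is the spelled-out standard form of the same thing.)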



Re: [SQL] Prepared query ?

2005-04-14 Thread Frank Bax


At 10:28 AM 4/14/05, Dinesh Pandey wrote:
>How can I
>- write a prepared query,
>- set values at run time, and
>- execute that query for different values?

Depends on what language you are using - try the interfaces mailing
list.
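
At the SQL level (7.3 and later) the server-side statements look like this; the
statement name, table and column are placeholders:

PREPARE get_order(integer) AS
    SELECT * FROM orders WHERE order_id = $1;

EXECUTE get_order(42);
EXECUTE get_order(43);

DEALLOCATE get_order;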





[SQL] user connection over tcp

2005-04-18 Thread Frank Habermann
hello!

i have some problems understanding the manual with regard to pg_hba.conf.

i use a tcp connection with phppgadmin to connect to the db. in my pg_hba.conf
i set this connection to trust, yet in phppgadmin i still need the right password
for my user to connect, even though the manual says that any password will be
accepted. why doesn't that work for me?

i also tried password in pg_hba.conf for my tcp connections, but after that i can't
connect to the db at all, although the manual says that users with the right
password can connect.

can somebody explain what is wrong with me or my postgres?!

thx

frank habermann
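
For what it's worth: with trust the server never looks at the password at all (whatever
the client sends is accepted), while password and md5 check it against the password
stored for the user, so one has to be set first. A sketch, with a made-up user name and
an 8.0-style pg_hba.conf line (reload the server after editing, e.g. pg_ctl reload):

host    all    all    127.0.0.1/32    md5

ALTER USER someuser WITH PASSWORD 'secret';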



Re: [SQL] user connection over tcp

2005-04-18 Thread Frank Habermann
hello again.

ok, i can connect over tcp, this works! my problem is that i can only connect
with trust. if i use password i can't connect at all, and in trust mode i still need
the right password for the user!

frank

-- Original Message --
From: "Frank Habermann" <[EMAIL PROTECTED]>
To: "pgsql-sql@postgresql.org"
Date: Monday, 18 April 2005 15:31:10
Subject: user connection over tcp

> hello!
> 
> i have some problems to understand the manual with pg_hba.conf.
> 
> i use a tcp connection with phppgadmin to connect to the db. in my 
> pg_hba.conf i say for this connection trust! in phppgadmin i need the for my 
> user the right password to connect. but the manual says that every password 
> will accepted. why isnt that work at me?
> 
> i also try password in pg_hba.conf for my tcpconnections. but after this i 
> cant connect to the db. but the manual says that users with right password 
> can connect. but this doesnt work here.
> 
> can somebody explain me whats wrong with me or my postgre?!
> 
> thx
> 
> frank habermann
> 



Re: [SQL] weird SQL statement question

2005-04-23 Thread Frank Bax
At 01:55 PM 4/23/05, Tadej Kanizar wrote:
Ok, so I installed the latest version of Postresql (8.02) on a WinXP SP2 
machine..

My question is why won't the statement SELECT * FROM table work, whereas 
the statement SELECT * FROM "table" works!

And, to explain a bit more, here's another example:
INSERT INTO TestTable (Name, Surname) VALUES ('name', 'surname');  .. this 
DOESN'T work

INSERT INTO "TestTable" ("Name", "Surname") VALUES ('name', 'surname'); .. 
this DOES work

If you create a table using quotes and mixed-case, you will need quotes 
every time you reference the table.  Don't use quotes when you create the 
table and you won't need them when you access the table with other commands.
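
A short illustration of the folding rule, both snippets made up:

-- created without quotes: the name folds to lowercase and any spelling matches later
CREATE TABLE testtable (name varchar, surname varchar);
INSERT INTO TestTable (Name, Surname) VALUES ('name', 'surname');

-- created with quotes: the mixed-case names are preserved and must always be quoted
CREATE TABLE "TestTable" ("Name" varchar, "Surname" varchar);
INSERT INTO "TestTable" ("Name", "Surname") VALUES ('name', 'surname');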



Re: [SQL] Function or Field?

2005-05-03 Thread Frank Bax
At 07:58 AM 5/3/05, [EMAIL PROTECTED] wrote:
Do I have to create another table to put this data???
But, Isn't it redundancy? :-/
The question is: For example:
 I have a "clients" table and I have a "taxes" table that is a child of 
client.
 Is more efficient put fields into client table that contains:
  -) the count for paid taxes
  -) the count for unpaid taxes
  -) the count for all taxes
  -) the last tax expiration date
 Or is more efficient construct a function that will count this field 
runtime,
as a view for example, or a simple function.
  -) SELECT count(*) from taxes where client=$1 and not nullvalue(dt_pay);
  -) SELECT count(*) from taxes where client=$1 and nullvalue(dt_pay);
  -) SELECT count(*) from taxes where client=$1;
  -) SELECT dt_expiration from taxes where client=$1 order by 
dt_expiration desc
limit 1;

While having few records in "taxes" table, the function (runtime) work 
right and
in good time, but when the "taxes" table grows I think the function will 
run so
slow...
What is correct???
Construct a Function to count runtime? or Create a Trigger to update the
"clients" fields before all action and use those fields in select???

Placing the count fields in client table is redundant and 
expensive.  Creating a function with four selects in it could be slow, but 
you can obtain those four data items in a single select:

SELECT
SUM(CASE WHEN dt_pay IS NOT NULL THEN 1 ELSE 0 END) AS CountPaidTaxes,
SUM(CASE WHEN dt_pay IS NULL THEN 1 ELSE 0 END) AS CountUnPaidTaxes,
COUNT(*) AS CountTaxes,
MAX(dt_expiration) AS LastExpiry
FROM taxes WHERE client = $1;
With an index on client, this should always be quite speedy.  Using "order 
by dt_expiration desc
limit 1;" is a nice trick, but not useful in this case because all rows for 
one client are being retrieved anyway for the other three data items.

Frank  

---(end of broadcast)---
TIP 8: explain analyze is your friend


[SQL] Possible to use a table to tell what table to select from?

2005-07-13 Thread Frank Hagstrom
Hello

I've been thinking on a potential problem I might get in a distant
future, but once I started thinking on it I just as well had to
check...

Is it possible to have say ~ 6 tables, where the first one is a
'reference' to all the others (they all are with the same structure).
The other 5 or so have information stored in importance/access/age,
where one table will be only recent/important things, the other with a
bit lower priority and so on until the last table that can be really
huge and slow and used more like a storage ... Is it possible to write
an SQL query to let the first table figure out what table one will do
the actual select from?

like the pseudo code:
SELECT name,content FROM  (SELECT tablename FROM table1
WHERE item_id="123456")

Or is this a totally pointless thing, as the first table would need to
be so large either way? (or only good for storing large blobs and
such?)

/Frank H

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [SQL] Possible to use a table to tell what table to select from?

2005-07-14 Thread Frank Hagstrom
On 7/14/05, Richard Huxton  wrote:
> Frank Hagstrom wrote:
> > Hello
> >
> > I've been thinking on a potential problem I might get in a distant
> > future, but once I started thinking on it I just as well had to
> > check...
> >
> > Is it possible to have say ~ 6 tables, where the first one is a
> > 'reference' to all the others (they all are with the same structure).
> > The other 5 or so have information stored in importance/access/age,
> > where one table will be only recent/important things, the other with a
> > bit lower priority and so on until the last table that can be really
> > huge and slow and used more like a storage ... Is it possible to write
> > an SQL query to let the first table figure out what table one will do
> > the actual select from?
> 
> If the tables are all holding the same sort of thing, e.g. logs2003,
> logs2004, logs2005 etc. then you might want to look at partitioning.
> This is being looked at for the next version, check the archives of the
> hackers list.
> 
> --
>Richard Huxton
>Archonet Ltd
> 

Ah ok, thanks I'll look into the partitioning then :)

(not really logs, but will work in the same way, as I'm planning to
use it to hold revisions, but it will have the same format in the
tables...)

/Frank H
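
A minimal sketch of the inheritance-based approach that partitioning builds
on (table and column names are made up):

-- the parent holds the common structure; children hold the actual rows
CREATE TABLE revisions (item_id int, name text, content text);
CREATE TABLE revisions_recent  () INHERITS (revisions);
CREATE TABLE revisions_archive () INHERITS (revisions);

-- a query against the parent automatically scans all children
SELECT name, content FROM revisions WHERE item_id = 123456;

-- ONLY restricts the query to the parent itself
SELECT name, content FROM ONLY revisions WHERE item_id = 123456;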

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[SQL] Using subselects as joins in POstgeSQL (possible?, examples)

2005-07-22 Thread frank church


Is it possible to use subselects as joins in PostgreSQL.

eg.

select a.a, a.b, a.c, b.a, b.b, b.c from (first subselect) a (second subselect)
b  where (in table criteria) and a.a = b.a

or

select a.a, a.b, a.c, b.a, b.b, b.c from (first subselect) a join (second
subselect) b on a.a = b.a where (in table criteria)

I have a feeling it is possible but I need the right syntax

//Frank
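
Subselects can indeed be joined like tables, as long as each one gets an
alias; a minimal sketch with made-up tables:

SELECT a.id, a.total, b.total
FROM (SELECT id, sum(amount) AS total FROM orders   GROUP BY id) AS a
JOIN (SELECT id, sum(amount) AS total FROM payments GROUP BY id) AS b
  ON a.id = b.id
WHERE a.total <> b.total;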




This message was sent using IMP, the Internet Messaging Program.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] Caracter é

2005-08-06 Thread Frank Finner
Hi,

what encoding do your webpages have, and what encoding does your database have? 
If both have the same encoding (eg both UTF8 or both Latin1), all should go 
well; otherwise you will have to adapt encodings before putting form data into 
the database (and vice versa), eg with PHP's utf8_encode() / utf8_decode().

We had a similar problem; since we encode both, webpages and database, as 
utf8, there is no longer such a problem. We use php 4.4.x, postgresql 8.0.3 and 
pear.

Regards, Frank.
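
The relevant settings can also be checked and overridden from SQL, e.g.:

SHOW server_encoding;            -- encoding the database was created with
SHOW client_encoding;            -- encoding the server expects from this session
SET client_encoding TO 'UTF8';   -- or 'LATIN1', to match what the pages send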


On Sat, 06 Aug 2005 22:58:40 +0200 Nicolas Cornu <[EMAIL PROTECTED]> thought 
long, then sat down and wrote:

> 
> Hello,
> 
> I compiled Postgresql 8 on my server and installed it
> I am trying to insert a string using php and PEAR::DB
> 
> The string contain: é and this cause
> $db-query to fail with an Unkown error as error message.
> 
> If i insert the data using pgAdminIII the string is inserted into the 
> table.
> 
> I don't think it is Postgres related, do i need to ask to PEAR::DB team ?
> 
> Thanks in adavance,
> 
> CN
> 
> ---(end of broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>subscribe-nomail command to [EMAIL PROTECTED] so that your
>message can get through to the mailing list cleanly


-- 
Frank Finner

Invenius - Lösungen mit Linux
Köpfchenstraße 36
57072 Siegen
Telefon: 0271 231 8606Mail: [EMAIL PROTECTED]
Telefax: 0271 231 8608Web:  http://www.invenius.de
Key fingerprint = 90DF FF40 582E 6D6B BADF  6E6A A74E 67E4 E788 2651



pgpp7ceEQc1si.pgp
Description: PGP signature


Re: [despammed] [SQL] converting varchar to integer

2005-08-17 Thread Frank Bax

At 05:30 AM 8/17/05, Kretschmer Andreas wrote:


[EMAIL PROTECTED] <[EMAIL PROTECTED]> schrieb:

> Hi,
>
>I have a varchar column, and I need to
>
>1) check the value in it is an integer
>2) get the integer value (as integer)

test=# update foo set n = substring(t , '[0-9]')::int;



I think you meant:
update foo set n = substring(t , '[0-9]+')::int;
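
A sketch of the "check first, then cast" variant the original question asked
for (using the foo/t/n names from above):

SELECT t, t ~ '^[0-9]+$' AS is_integer FROM foo;   -- 1) check the value
UPDATE foo SET n = t::int WHERE t ~ '^[0-9]+$';    -- 2) cast only clean values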


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [SQL] Selecting records not present in related tables

2005-10-06 Thread Frank Bax

At 03:43 PM 10/6/05, Hector Rosas wrote:

Hello, I'm trying to select records in a table not present in a related 
table, in example, I've a table with message information (subject, 
message, date, etc) and another (usermessages) with where user(s) has that 
message, its state, etc. Records in this table will be deleted in a 
certain time (just some extra info).
I want to select messages records that aren't present in the other table 
(usermessages), I got the next two queries, maybe someone can suggest a 
better one.


SELECT m.id FROM messages AS m
WHERE (SELECT count(um.*) FROM usermessages AS um WHERE um.idmessage=m.id )=0;

SELECT m.id FROM messages AS m where id NOT IN (select 
um.idmessage FROM usermessages um);



select m.id from messages as m left join usermessages as um on 
m.id=um.idmessage where um.idmessage is null;




---(end of broadcast)---
TIP 6: explain analyze is your friend


[SQL] SEVEN cross joins?!?!?

2005-10-11 Thread Frank Bax
I have a table with only 434 rows in it.  Two important columns are 
"itemid" and "locn".  Each item must be in one of seven locations.  We need 
to create a "combo" by selecting one item from each of seven locations; 
then determine which "combo" is the "best" according to our analysis (see 
below).


A subselect for items in a location looks something like:
(select * from suit_item where locn='Head' AND username='Walter' ORDER BY 
itemid LIMIT 10) as Head


One subselect for each location, cross join them all and the query 
generates 10,000,000 combinations!  Without the "LIMIT 10",  there are  78 
* 37 * 91 * 81 * 99 * 47 * 1 = 98,981,901,018 results returned for 
username='Walter' (the only user at the moment).  The large volume is 
causing a problem for my systems!  The "ORDER BY itemid" was added only so 
that same 10 items were processed on different computer systems I tested 
this query on.  Only one item for 7th locn in the database at the moment.


Every item has three key properties val1, val2 and val3.  For each combo, 
we calculate:

(Head.val1 + Arm.val1 + ... Leg.val1) AS Calc1
(Head.val2 + Arm.val2 + ... Leg.val2) AS Calc2
(Head.val3 + Arm.val3 + ... Leg.val3) AS Calc3
Each calculation has a pseudo "max" value coded so that values above this 
"max" are considered equal:

CASE WHEN calc1 > 70 then 70 else calc1 END as ordcalc1
CASE WHEN calc2 > 15 then 15 else calc2 END as ordcalc2
CASE WHEN calc3 > 60 then 60 else calc3 END as ordcalc3
Then I use:
ORDER BY ordcalc1 DESC, ordcalc2 DESC, ordcalc3 DESC

When I activated a couple of my brain cells, I realised that adding "WHERE 
ordcalc1 >= 70 AND ordcalc2 >= 15 AND ordcalc3 >= 60" after the cross joins 
might help things out a bit.  The 10,000,000 results was reduced 
significantly (8K - 30K with different samples).  Because the "ordcalc" 
cannot be used in a WHERE clause, the entire expression was repeated.


I used php to generate the query from pieces so that I could avoid lots of 
repetition in coding (but still there in final query).  The query itself is 
about 6K when assembled.


After that big introduction, I have a couple of questions:

1) Did I approach the problem incorrectly?  Is there another way to 
approach this query so that fewer combos are analysed?


2) Are there any optimisations that could improve query speed?  Since the 
table is so small, I guessed that indexes wouldn't help.  I created an 
index on (username, itemid), but it doesn't get used.  Output of EXPLAIN 
ANALYSE found here:

http://www.execulink.com/~fbax/JOINS/

3) When run on P2 and P4 systems, I would expect to see huge improvement in 
time taken to process query, but I don't (only 35-40% better)?


i = number of items in LIMIT of subselect
rc = raw record count
rcw = record count with "limits" in WHERE clause
p2 = seconds for query to run on P2-400M pg=7.4.3 ram=32M
p4 = seconds for query to run on P4-2.8G pg=7.3.5 ram=1G

i=10 - rc=1,000,000 rcw=27,086 p2=81  p4=49
i=11 - rc=1,771,561 rcw=41,121 p2=141 p4=86
i=12 - rc=2,985,984 rcw=56,425 p2=216 p4=142
i=13 - rc=4,826,809 rcw=81,527 p2=??? p4=228

On P2 system i=13 query returns empty page with no errors on server.

On P4 system i=15 results in:
PostgreSQL Error: 1 (ERROR: tuplestore: write failed)

I suppose this is a temp file - is it created in $DATA?  OpenBSD has 
several partitions, so I'll need to know which one is too small. 



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [SQL] SEVEN cross joins?!?!?

2005-10-11 Thread Frank Bax

At 08:29 AM 10/11/05, Richard Huxton wrote:


Frank Bax wrote:
I have a table with only 434 rows in it.  Two important columns are 
"itemid" and "locn".  Each item must be in one of seven locations.  We 
need to create a "combo" by selecting one item from each of seven 
locations; then determine which "combo" is the "best" according to our 
analysis (see below).

A subselect for items in a location looks something like:
(select * from suit_item where locn='Head' AND username='Walter' ORDER BY 
itemid LIMIT 10) as Head
One subselect for each location, cross join them all and the query 
generates 10,000,000 combinations!  Without the "LIMIT 10",  there are
78 * 37 * 91 * 81 * 99 * 47 * 1 = 98,981,901,018 results returned for 
username='Walter' (the only user at the moment).  The large volume is 
causing a problem for my systems!  The "ORDER BY itemid" was added only 
so that same 10 items were processed on different computer systems I 
tested this query on.  Only one item for 7th locn in the database at the 
moment.


Frank - it might just be me, but I've read your email twice and despite 
all the information I still don't have any idea what you are trying to do.


Are you saying that you WANT to generate a cross-join, score the millions 
of results and then pick the best 10? It's doing what you want, but you'd 
like it to be faster.


Or are you saying that you'd like to avoid the explosion in rows altogether?

In either case - I don't suppose you could provide a real example of the 
query, so we can see exactly what you're trying to do.



There is no "best 10".  I currently limit each subselect to 10 items so 
that query will actually run without crashing.  I would like to remove the 
"ORDER BY itemid LIMIT 10" mentioned above.  At the end of the query I have 
a "LIMIT 100" clause which will stay and produces a list of "best 100" combos.


Either of your solutions would be acceptable; since avoiding the 
"explosion" would also make the query faster.  Current calculations 
indicate that running the query without "LIMIT 10" in subselect would take 
years to process.


The query is filled with expressions.  I'm not sure I can shorten it 
without making typos or deleting something important, so I'll make it 
available on web here:

http://www.execulink.com/~fbax/JOINS/
Results of "explain analyse" is also there. 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] SEVEN cross joins?!?!?

2005-10-12 Thread Frank Bax

At 09:00 AM 10/12/05, Daryl Richter wrote:


Richard Huxton wrote:

Frank Bax wrote:

Are you saying that you WANT to generate a cross-join, score the 
millions of results and then pick the best 10? It's doing what you 
want, but you'd like it to be faster.


Or are you saying that you'd like to avoid the explosion in rows 
altogether?


In either case - I don't suppose you could provide a real example of 
the query, so we can see exactly what you're trying to do.




There is no "best 10".  I currently limit each subselect to 10 items so 
that query will actually run without crashing.  I would like to remove 
the "ORDER BY itemid LIMIT 10" mentioned above.  At the end of the query 
I have a "LIMIT 100" clause which will stay and produces a list of "best 
100" combos.


Either of your solutions would be acceptable; since avoiding the 
"explosion" would also make the query faster.  Current calculations 
indicate that running the query without "LIMIT 10" in subselect would 
take years to process.


OK - so at the heart of the problem is the fact that you want to search a 
space with 100 billion possible states. There are three main options
1. Brute force and patience - simple and is guaranteed to produce the 
"best" answers. You can use cursors/multiple queries to manage the 
workload. The disadvantage is that it's probably slower than you'd like.
2. Smarter algorithms - possibly something genetic to work towards local 
maxima. Or, if you'd like to keep it simple, just split your 7 locations 
into 2,2,3 and solve for each separately.
3. Statistical - examine a subset of possible states and accept you'll 
only be able to say "almost best" to 99% confidence or similar.
I'd be tempted by #2 - there are probably some combinations you can rule 
out, which combined with a split/recombine should reduce the number of 
states to query.


I'll second this recommendation.  The OP is trying to drive a nail with a 
screwdriver.  He needs a program, not a query.



Richard, you've summed it up nicely.

Splitting locations into subsets (like 2,2,3) doesn't work because it is 
possible that low values in one location can be offset by high values in 
another location, and still result in an excellent combo.


The good news is these suggestions got me thinking outside the box.  I 
think I can program a modified brute-force that bypasses large numbers of 
combos early.  It might still be too large/slow, so I'd be interested in 
finding more info about these "smarter algorithms" in option 2.  Where do I 
look?


Greg: my son's the gamer; I'm just trying to help him out.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] SEVEN cross joins?!?!?

2005-10-19 Thread Frank Bax

At 09:04 AM 10/13/05, Daryl Richter wrote:


Frank Bax wrote:

[snip]


Richard, you've summed it up nicely.
Splitting locations into subsets (like 2,2,3) doesn't work because it is 
possible that low values in one location can be offset by high values in 
another location, and still result in an excellent combo.
The good news is these suggestions got me thinking outside the box.  I 
think I can program a modified brute-force that bypasses large numbers of 
combos early.  It might still be too large/slow, so I'd be interested in 
finding more info about these "smarter algorithms" in option 2.

Where do I look?


If you're mathematically inclined, I would first look at using
Lagrangian Relexation, it may be appropriate for your problem:

http://www.2112fx.com/lagrange.html



Thanks, but that's a little too complex for me to turn into code!

I did rewrite my code from a simple cross join SQL in PHP to custom 
searching in perl.  I sucked subselects into arrays and then looked at all 
possible combinations.


For those that forgot/missed the background, my table has 514 rows.  Using 
subselect, this table is split into 7 subtables.  These seven subtables are 
cross joined with each other to produce 770 billion rows that need to be 
searched (to assemble a 'made-to-order' suit of armour).


By using more intelligent code (and not simple brute-force), I was able to 
analyse a complete set of 770 billion states in just under 70 hours on a 
P4-2.8Ghz system, which is fast enough for today.  A faster cpu will help, 
since process does no i/o except at beginning and end of script. I realised 
that if I am ever able to figure out coding for multi-processor or systems 
(even remote like [EMAIL PROTECTED]), I can exploit either/both of these for this 
problem by splitting the problem on items in the first subtable into 50-60 subtasks, 
then merging results from each of those subtasks.  This might become a 
requirement if the underlying table grows to be quite large.


Thanks for pointing me in the right direction, it's been an interesting week.

Frank 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] combining records from a single table and presenting

2005-10-27 Thread Frank Bax

At 02:00 PM 10/27/05, Abhishek wrote:

I have a table "TABLE1"  which has

Callguid  |   digits |  type

   123   'a'
   345   'b'
   678   'c'

type can have only 'a', 'b' or 'c' as its value.
I am trying to write a query which returns me a record like this

---
CallGuid  |   a type digits   | b type digits | c 
type digits

---
123 345 
   678



1) You didn't say if 'a','b','c' records always exist for every callguid.
2) You didn't say if there is more than one record for a given callguid/type.

If (1) is 'yes' and (2) is 'no'
select a.callguid, a.digits as a_digits, b.digits as b_digits, c.digits as 
c_digits

from
(select callguid,digits from table1 where type='a') as a
join
(select callguid,digits from table1 where type='b') as b on 
a.callguid=b.callguid

join
(select callguid,digits from table1 where type='c') as c on 
a.callguid=c.callguid;


If (1) is 'no' and (2) is 'no'

select coalesce(a.callguid,b.callguid,c.callguid) as callguid,
a.digits as a_digits, b.digits as b_digits, c.digits as c_digits
from
(select callguid,digits from table1 where type='a') as a
full outer join
(select callguid,digits from table1 where type='b') as b on 
a.callguid=b.callguid

full outer join
(select callguid,digits from table1 where type='c') as c on 
a.callguid=c.callguid;


If (2) is 'yes', you're on your own. You can also try searching for 
"crosstab" and/or "pivot table" for more info. 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[SQL] Extract table columns in tabular form

2005-11-05 Thread frank church

How do you extract postgresql column names, types and comments in tabular form,
using an SQL command? I know they are stored in one of the system tables, but
I don't know which.


R Church
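
A sketch using the system catalogs (replace mytable with the table of
interest; psql's \d+ shows much the same information):

SELECT a.attname                              AS column_name,
       format_type(a.atttypid, a.atttypmod)   AS data_type,
       col_description(a.attrelid, a.attnum)  AS comment
FROM   pg_attribute a
WHERE  a.attrelid = 'mytable'::regclass
  AND  a.attnum > 0
  AND  NOT a.attisdropped
ORDER  BY a.attnum;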


This message was sent using IMP, the Internet Messaging Program.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[SQL] cli in sql?

2005-11-11 Thread Frank Bax
If my database has column containing a filename, can I use sql to present 
this filename and datemodified (as output from 'ls -l' or from mtime() 
function) or *must* it be done after the query in an interface such as php or perl?



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] cli in sql?

2005-11-11 Thread Frank Bax

At 03:03 PM 11/11/05, Greg Sabino Mullane wrote:

At 08:57 AM 11/11/05, Frank Bax wrote:
> If my database has column containing a filename, can I use sql to present
> this filename and datemodified (as output from 'ls -l' or from mtime()
> fuction) or *must* it be done after the query in interface such as php 
or perl?


Neither. You can do it inside the db with a "pl" language such as plperlu:

CREATE OR REPLACE FUNCTION filemodtime(TEXT) RETURNS TEXT LANGUAGE plperlu AS
$$
my $filename = shift;
-e $filename or elog(ERROR, qq{The file "$filename" does not exist\n});
return localtime($^T - (60*60*24* -M _));
$$;

SELECT filemodtime('/var/log/messages');
SELECT filemodtime('/dark/matter');



This looks interesting!  But I'm not sure how to use it?
wbax=> select version();
   version
-
 PostgreSQL 7.4.3 on i386-unknown-openbsd3.6, compiled by GCC 2.95.3
(1 row)

wbax=> CREATE OR REPLACE FUNCTION filemodtime(TEXT) RETURNS TEXT LANGUAGE 
plperlu AS

wbax-> $$
wbax-> my $filename = shift;
ERROR:  syntax error at or near "$" at character 80
wbax=> -e $filename or elog(ERROR, qq{The file "$filename" does not exist\n});
Invalid command \n});. Try \? for help.
wbax(> return localtime($^T - (60*60*24* -M _));
wbax(> $$;
wbax(>


My system does have
/usr/local/lib/postgresql/plperl.so

And I tried changing plperlu to plperl, but get the same error.  I found:
http://www.netcraft.com.au/geoffrey/postgresql/plperl.html
to add the language to my system, but that didn't help either.
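
The likely culprit is the $$ quoting: dollar-quoted strings only arrived in
8.0, so on 7.4 the function body has to be an ordinary quoted string (and the
language created first with "createlang plperlu dbname").  A sketch of the
same function written that way:

CREATE OR REPLACE FUNCTION filemodtime(TEXT) RETURNS TEXT LANGUAGE plperlu AS '
my $filename = shift;
-e $filename or elog(ERROR, qq{The file "$filename" does not exist\n});
return localtime($^T - (60*60*24* -M _));
';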


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[SQL] How to change database owner in PostgreSQL 7.4?

2005-11-26 Thread frank church
Hi guys,

What is the command change database owner in PostgreSQL 7.4?




This message was sent using IMP, the Internet Messaging Program.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [SQL] Anual Query

2005-11-28 Thread Frank Bax

At 11:20 AM 11/28/05, Mauricio Fernandez A. wrote:

Can any of you help me with this query?,  I need to retrieve the
sum(units) and sum(cost) for each client in every month in the same
row, something like this:

client|UnJan|CostJan|UnFeb|CostFeb|UnMar|CostMar ...|UnDec|CostDec
- 
100058|580  |47384  |746  |87438  |536  |21653  ... |857 
|754875 (wrong

data, just to show)

Now I get this result using several queries (one per month, and worse,
per client too) but this, of course, is too expensive.

Here is a table sample.

 orderId  | client  | date| product | Un | cost
- --
 1001299  | 100058  | 20050103  | 759936  | 1  | 2375
 1001299  | 100058  | 20050103  | 759928  | 1  | 2375
 1001300  | 100078  | 20050103  | 759936  | 1  | 2375
 1001301  | 100017  | 20050103  | 759928  | 1  | 2375
 1001302  | 19  | 20050103  | 759936  | 2  | 4750
 1001302  | 19  | 20050203  | 756304  | 1  | 6669
 1001303  | 19  | 20050203  | 759936  | 2  | 4750
 1001304  | 15  | 20050203  | 759936  | 20  | 47500
 1001306  | 100013  | 20050203  | 759936  | 2  | 4750
 1001306  | 100013  | 20050203  | 759928  | 2  | 4750
 1001307  | 100013  | 20050203  | 759936  | 4  | 9500
 1001308  | 100013  | 20050203  | 759936  | 2  | 4750
 1001309  | 100050  | 20050303  | 759936  | 2  | 4750
 1001310  | 100050  | 20050303  | 759936  | 4  | 9500



Search google and/or archives of this and/or other pgsql lists for 
"crosstab" and/or "pivot table".  I think there might also be something in 
contrib/ that could help. 
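
If a contrib-style crosstab is overkill, the pivot can also be spelled out by
hand; a sketch assuming the table is called orders and "date" is a date
column (if it is stored as text like '20050103', substr("date",5,2)::int
gives the month instead):

SELECT client,
       SUM(CASE WHEN EXTRACT(month FROM "date") = 1 THEN un   ELSE 0 END) AS UnJan,
       SUM(CASE WHEN EXTRACT(month FROM "date") = 1 THEN cost ELSE 0 END) AS CostJan,
       SUM(CASE WHEN EXTRACT(month FROM "date") = 2 THEN un   ELSE 0 END) AS UnFeb,
       SUM(CASE WHEN EXTRACT(month FROM "date") = 2 THEN cost ELSE 0 END) AS CostFeb
       -- ... repeat through December ...
FROM   orders
GROUP  BY client
ORDER  BY client;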



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] How to change database owner in PostgreSQL 7.4?

2005-11-29 Thread frank church
Quoting Andreas Kretschmer <[EMAIL PROTECTED]>:

> frank church <[EMAIL PROTECTED]> schrieb:
>
> > Hi guys,
> >
> > What is the command change database owner in PostgreSQL 7.4?
>
> ALTER DATABASE foo OWNER TO blob;

I tried that and it didn't work. That command appears to be available only in
8.0. Is there  a way of updating the system tables to change it?
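
An untested sketch of the direct catalog update, which is roughly what
ALTER DATABASE ... OWNER TO does on newer versions (run as a superuser,
after taking a backup; 'foo' is the database and 'blob' the new owner, as in
the earlier example):

UPDATE pg_database
   SET datdba = (SELECT usesysid FROM pg_shadow WHERE usename = 'blob')
 WHERE datname = 'foo';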


> In general: start psql and type  '\h alter database', and, more general,
> '\h' and '\?'
>
>
> HTH, Andreas
> --
> Really, I'm not out to destroy Microsoft. That will just be a completely
> unintentional side effect. (Linus Torvalds)
> Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°
>
> ---(end of broadcast)---
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>choose an index scan if your joining column's datatypes do not
>match
>





This message was sent using IMP, the Internet Messaging Program.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] select count of distinct rows

2005-12-10 Thread Frank Bax

At 07:53 PM 12/10/05, Havasvölgyi Ottó wrote:

I would like to select the count of distinct rows in a table.
SELECT COUNT(DISTINCT *) FROM mytable;
This does not work. How can I do it with Postgres?



select count(*) from (select distinct * from mytable) as x;


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[SQL] Does VACUUM reorder tables on clustered indices

2005-12-18 Thread frank church


Does VACUUMing reorder tables on clustered indices or is it only the CLUSTER
command that can do that?


/ r church





This message was sent using IMP, the Internet Messaging Program.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] Help with simple query

2005-12-28 Thread Frank Bax

At 06:58 PM 12/28/05, Collin Peters wrote:

The following query will return me all the latest dates, but I can't
return the note_id or subject with it.
SELECT n.user_id, max(n.modified_date)
FROM notes n
GROUP by n.user_id
ORDER BY n.user_id

Is this simpler than I am making it?



No, it's not "simple".  You need to join the results of above sql back to 
the original table (and relocate the order by clause):


SELECT notes.note_id, notes.user_id, maxx.max_date, notes.note
FROM
(SELECT n.user_id, max(n.modified_date) AS max_date FROM notes n GROUP by 
n.user_id)

AS maxx
JOIN notes on notes.user_id = maxx.user_id AND notes.modified_date = 
maxx.max_date
ORDER BY notes.user_id; 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[SQL] Loading lots of data in a SQL command

2006-01-03 Thread frank church

I am loading lots of data via SQL into a database, and wrapping it into transactions
speeds it up.

However this fails a number of times. The query results are logged, so it is
easy for me to find problem records.

However a single failure causes the whole transaction to fail.

Is there a setting or feature which allows the same performance as
transactions, without causing the whole process to fail - like a delayed update
or write mechanism of some sort?

It is something I would like to set for that particular data load.


Frank
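
One option (if the server is 8.0 or newer) is to keep one big transaction but
wrap each risky statement in a savepoint, so a single failure only discards
that statement; a sketch with a made-up table:

BEGIN;
INSERT INTO mytable VALUES (1, 'good row');
SAVEPOINT sp;
INSERT INTO mytable VALUES (2, 'bad row');   -- suppose this one fails
ROLLBACK TO SAVEPOINT sp;                    -- discard only the failed insert
INSERT INTO mytable VALUES (3, 'good row');
COMMIT;                                      -- rows 1 and 3 are kept

Splitting the load into several smaller transactions is the other common
workaround.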




This message was sent using IMP, the Internet Messaging Program.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [SQL] how to transform list to table and evaluate an

2006-01-07 Thread Frank Bax
Today's your lucky day (I think), because I was looking for (and used) the 
aggregate function mentioned below just before reading your question.



At 11:03 AM 1/7/06, Tomas Vondra wrote:

1) How to create a "table" in the form

   documtent_id  | word_1 | word_2 | ... | word_n
  -
1345 |   11   |   12   | ... |   2
1202 |1   |0   | ... |  12
  .   ..  .  .
  .   ..  .  .
1129 |   20   |1   | ... |   0

   from the query

   SELECT docId, word, score FROM Index WHERE word IN (word_1,..,word_n)




From section 31.10 of the 8.0 docs came this function...
 CREATE AGGREGATE array_accum (
 sfunc = array_append,
 basetype = anyelement,
 stype = anyarray,
 initcond = '{}'
 );

This query will return one row per docId.  It's not exactly the format you 
asked for, but perhaps it's a start ...

select docId,array_accum(word),array_accum(score) from index where word in 
('apples','orange') group by docId;

Then you could write two functions (beyond my capabilities):
minarray( int[] ) so you could select minarray( array_accum(score) ) /* 
apples AND orange */
maxarray( int[] ) so you could select maxarray( array_accum(score) ) /* 
apples OR orange */
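
An untested sketch of such helpers, assuming integer scores and 8.0's
generate_series():

CREATE OR REPLACE FUNCTION minarray(int[]) RETURNS int AS '
    SELECT min(($1)[i])
    FROM generate_series(array_lower($1,1), array_upper($1,1)) AS i
' LANGUAGE SQL IMMUTABLE;

CREATE OR REPLACE FUNCTION maxarray(int[]) RETURNS int AS '
    SELECT max(($1)[i])
    FROM generate_series(array_lower($1,1), array_upper($1,1)) AS i
' LANGUAGE SQL IMMUTABLE;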





2) How to evaluate the function derived from the 'search string' on this
   table, but this probably will work as an EXECUTE statement or
   something like that.




Not sure what you're asking here


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[SQL] REPOST:Memory Allocation error using pg_dump on 7.4

2006-01-27 Thread frank church

I repeatedly get this error whenever I try to backup a database

The command used is:

pg_dump -Fc -O -U username tablename > tablename.20060122

pg_dump: ERROR:  invalid memory alloc request size 4294967290
pg_dump: SQL command to dump the contents of table "cc_ratecard" failed:
PQendcopy() failed.
pg_dump: Error message from server: ERROR:  invalid memory alloc request size
4294967290
pg_dump: The command was: COPY public.tablename(id, ...

Is there a bug somewhere in there?

Is there something which needs doing in regard to my memory allocation settings?

The table involved is one of the larger tables, but at only 400,000 records it
shouldn't be a problem



This message was sent using IMP, the Internet Messaging Program.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] Non Matching Records in Two Tables

2006-02-08 Thread Frank Bax

At 04:10 PM 2/8/06, Ken Hill wrote:
I need some help with a bit of SQL. I have two tables. I want to find 
records in one table that don't match records in another table based on a 
common column in the two tables. Both tables have a column named 'key100'. 
I was trying something like:


SELECT count(*)
FROM table1, table2
WHERE (table1.key100 != table2.key100);

But the query is very slow and I finally just cancel it. Any help is very 
much appreciated.



vacuum analyse table1;

vacuum analyse table2;

select count(*) from table1 full outer join table2 on 
table1.key100=table2.key100 where table1.key100 is null or table2.key100 is 
null;


If this is also slow, post output of "EXPLAIN ANALYSE SELECT " 



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] Field length ??

2006-04-20 Thread Frank Bax

At 05:16 AM 4/20/06, Markus Schaber wrote:


Hi, Louise,

Louise Catherine wrote:

> Could anyone explain, why the field length must be add by 4 :
> result 1 : 10 + 4 =14
> result 2 : 5 + 4 = 9

I guess that it is because all variable length datatypes (and text types
are such) internally contain a 4 bytes length field.



Except that the original fields were neither variable length datatypes, nor 
type text.

create table test(
satu char(10),
dua char(5)
);



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] Finding multiple events of the same kind

2006-06-11 Thread Frank Bax

At 08:53 AM 6/11/06, Leif B. Kristensen wrote:


I've got two tables:

CREATE TABLE events (
event_idINTEGER PRIMARY KEY,
tag_fk  INTEGER NOT NULL REFERENCES tags (tag_id),
place_fkINTEGER NOT NULL REFERENCES places (place_id),
event_date  CHAR(18) NOT NULL DEFAULT '31',
sort_date   DATE NOT NULL DEFAULT '40041024BC',
event_note  TEXT NOT NULL DEFAULT ''
);

CREATE TABLE participants (
person_fk   INTEGER NOT NULL REFERENCES persons (person_id),
event_fkINTEGER NOT NULL REFERENCES events (event_id) ON DELETE
CASCADE,
is_principal BOOLEAN NOT NULL DEFAULT false,
PRIMARY KEY (person_fk, event_fk)
);

The table "participants" is of course a many-to-many relation
between "events" and "persons". My problem is that it's entirely
possible to insert eg. multiple birth events for one person, and I'd
like to be able to spot these.



Something like this should get a list of person_fk values that have more 
than one birth date:


SELECT participants.person_fk, count(participants.person_fk) FROM events, 
participants

   WHERE events.event_id = participants.event_fk
AND events.tag_fk in (2,62,1035)
   GROUP BY participants.person_fk HAVING count(participants.person_fk) > 1


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [SQL] Displaying first, last, count columns

2006-06-21 Thread Frank Bax

At 10:55 AM 6/21/06, Worky Workerson wrote:


I'm having a bit of a brain freeze and can't seem to come up with
decent SQL for the following problem:

I have a table "t" of the form "time_occurred TIMESTAMP, prog_data
VARCHAR" and would like to create a query that outputs something of
the form "first_seen, last_seen, count, prog_data".

I have the current query which gets the first_seen and last_seen via
subqueries, ala

SELECT t1.time_occurred AS first_seen, t2.time_occurred AS last_seen,
t3.count, t1.prog_data
FROM t AS t1, t AS t2
WHERE t1.prog_data = t2.prog_data
   AND t1.time_occurred IN (SELECT min(time_occurred) FROM t WHERE
prog_data = t1.prog_data)
   AND t2.time_occurred IN (SELECT max(time_occurred) FROM t WHERE
prog_data = t1.prog_data)

but I can't seem to work out how to get the count of all the records
that have.  I figure that this is probably a relatively common idiom
... can anyone suggest ways to go about doing this.  Also, the
performance of this is pretty horrible, but I figure that creating a
column on t.prog_data should speed things up noticably, right?



Is this what you're looking for?

SELECT min(time_occurred) AS first_seen, max(time_occurred) AS last_seen, 
count(*), prog_data from t group by prog_data;


Since this query has no WHERE or HAVING clause, this query will read the 
entire table.  There is nothing you can do to speed it up.  If you have 
enough RAM to hold the entire table (and appropriate setting to utilize 
it), then a second (and subsequent) run of the query will be faster than 
the first, but that's as good as it gets. 



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] How to get a result in one row

2006-06-21 Thread Frank Bax

At 11:06 AM 6/21/06, [EMAIL PROTECTED] wrote:

returns:
   id | nick
--+--
22192 | A
22192 | T
(2 rows)

I'd like to get the result in only one row:
   id | nick
--+--
22192 | A,T



This question is in the archives (probably more than once).  The answer is...

Read the online docs about aggregate functions.  There is an example that 
does (almost) exactly what you are asking.
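
In short, with the array_accum aggregate from that page already created, a
sketch of the query would be:

SELECT id, array_to_string(array_accum(nick), ',') AS nicks
FROM mytable            -- whatever table produced the two rows above
GROUP BY id;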



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] How to get a result in one row

2006-06-21 Thread Frank Bax

At 02:24 PM 6/21/06, Richard Broersma Jr wrote:


> >I'd like to get the result in only one row:
> >id | nick
> >--+--
> >22192 | A,T
> This question is in the archives (probably more than once).  The answer 
is...>

> Read the online docs about aggregate functions.  There is an example that
> does (almost) exactly what you are asking.

Where you referring to the tread regarding the LTREE contrib module for 
postgresql?

http://archives.postgresql.org/pgsql-general/2006-06/msg00745.php

I know I've seen this done using cursors in PL-PGSQL, but I would be 
interested if there was a

solution with pre-existing aggregates.



I was referring to threads like:
http://archives.postgresql.org/pgsql-sql/2004-10/msg00124.php
and threads on 9.Feb.2006 and 11.Mar.2006, which are on my system, but not 
on the above archive site.  The various threads point to this page:

http://www.postgresql.org/docs/8.1/interactive/xaggr.html
Specifically the "array_accum" function on that page.





---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] How to find entries missing in 2nd table?

2006-07-11 Thread Frank Bax

At 10:19 AM 7/11/06, [EMAIL PROTECTED] wrote:

control:

controller_id   pk;

datapack:

controller_id   fk;

I need to get all entries from the table control that are not listed in
datapack.



select controller.controller_id from controller
left join datapack on controller.controller_id = datapack.controller_id
where datapack.controller_id is null;


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [SQL] De-duplicating rows

2009-07-17 Thread Frank Bax

Richard Huxton wrote:

Christophe wrote:
Now, since this database has been production since 7.2 days, cruft has 
crept in: in particular, there are duplicate email addresses, some 
with mismatched attributes.  The policy decision by the client is that 
the correct row is the one with the earliest timestamp.


Something like (untested):

CREATE TEMPORARY TABLE earliest_duplicates AS
SELECT
  email AS tgt_email,
  min(create_date) AS tgt_date
FROM mytable
GROUP BY email
HAVING count(*) > 1;

DELETE FROM mytable USING earliest_duplicates
WHERE email=tgt_email AND create_date > tgt_date;




If it is possible that two rows exist for the same email/date; then you 
will likely need to deal with these manually.


If you rerun the above SELECT after running the delete you should 
identify these rows.


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] how to tell if column set on update

2009-07-22 Thread Frank Bax

chester c young wrote:

Le 20/07/09 15:19, chester c young a écrit :

within a trigger need to know if the UPDATE statement

set a column.  the column might be set to the old value
or a different value.

(want to make sure the app is sending all necessary

values)

thanks


If the column to test is known -- e.g. column MyCol --,
NEW.MyCol and
OLD.MyCol -- respectively value of MyCol after UPDATE and
value of MyCol
before UPDATE -- can be compared.


for example,
create table t1( c1 int, c2 int );
insert into t1 values( 1, 2 );

1) update t1 set c1=4 where c1=1;
2) update t1 set c1=4, c2=2 where c1=1;

each update results in the same row, but in the second update c2 was actually 
set.
a trigger on the update - how do we know if c2 has been actually set or not?




You cannot do that.  You can only detect that a value has changed.
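
A sketch of the comparison that is possible, using the t1/c2 example above
(an UPDATE that sets c2 to its old value is indistinguishable from one that
never mentioned c2):

CREATE OR REPLACE FUNCTION t1_check_c2() RETURNS trigger AS $$
BEGIN
    IF NEW.c2 IS DISTINCT FROM OLD.c2 THEN
        RAISE NOTICE 'c2 changed from % to %', OLD.c2, NEW.c2;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t1_check_c2 BEFORE UPDATE ON t1
    FOR EACH ROW EXECUTE PROCEDURE t1_check_c2();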

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Substring

2009-09-04 Thread Frank Bax

Raj Mathur wrote:

On Saturday 05 Sep 2009, bilal ghayyad wrote:

I have an sql script function that take one text parameter "funct
(text)", what I need to do is the following:

If the parameter name is string and its value was for example "abcd"
then I need to do a query based on ab and then based on the abc, how?

Example:

SELECT * from voipdb where prefix like string

But I need the string to be ab and then to be abc? How I can assign
the string to the first character and then to the first and second?
In other words, how can I can take part of the string to do query on
it?


From your example the following brute-force method should work (not 
tested):


select * from voipdb where prefix like substring(string from 1 for 2) || 
'%' or prefix like substring(string from 1 for 3) || '%';


However, I don't understand why you'd want to search for both 'ab' and 
'abc' in the same query, since the first condition is a superset of the 
second one.




Given that tablename is "voipdb"; I wonder if OP really wants to write a 
query that finds the row where argument to function matches the most 
number of leading characters in "prefix".


If voipdb table contains:  ab, abc, def, defg; then calling function 
with "abc" or "abcd" returns "abc" and calling function with "defh" 
returns "def".


If this is the real problem to be solved; then brute force is one 
solution; but I'm left wondering if a single query might return desired 
result (a single row).
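
One way to express "longest matching prefix" in a single query (an untested
sketch, with 'abcd' standing in for the function's argument):

SELECT *
FROM voipdb
WHERE 'abcd' LIKE prefix || '%'
ORDER BY length(prefix) DESC
LIMIT 1;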


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] extracting from epoch values in pgsql

2009-09-17 Thread Frank Bax

Gavin McCullagh wrote:
SELECT time, to_timestamp(time) AS ts, EXTRACT('months',to_timestamp(time)) 
FROM mdl_log;

ERROR:  syntax error at or near ","
LINE 1: ...t time, to_timestamp(time) AS ts, extract('months',to_times...




Try replacing extract('month',value) with extract('months' from value)



--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] extracting from epoch values in pgsql

2009-09-17 Thread Frank Bax

Gavin McCullagh wrote:

On Thu, 17 Sep 2009, Frank Bax wrote:


Gavin McCullagh wrote:
SELECT time, to_timestamp(time) AS ts, 
EXTRACT('months',to_timestamp(time)) FROM mdl_log;

ERROR:  syntax error at or near ","
LINE 1: ...t time, to_timestamp(time) AS ts, extract('months',to_times...

Try replacing extract('month',value) with extract('months' from value)


Makes no difference whether month or months:



Sorry; my typo.  You used a comma in "extract" instead of "from".
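
For reference, the original query with the comma replaced by FROM:

SELECT time,
       to_timestamp(time)                     AS ts,
       EXTRACT(month FROM to_timestamp(time)) AS month
FROM mdl_log;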

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


[SQL] How do I remove selected words from text field?

2010-06-25 Thread Frank Bax
I'm not quite sure how to ask for the query I want, so let's start with 
data:


create table t1 (i int, val varchar);
insert into t1 values(1,'A B C D');
insert into t1 values(2,'B D E F');
insert into t1 values(3,'G H I J');
create table t2 (q varchar, z varchar);
insert into t2 values('A','vowel');
insert into t2 values('B','consonant');
insert into t2 values('E','vowel');
insert into t2 values('K','consonant');

t1.val will contain "words" separated by blanks.  It might be better if 
each "word" were a separate row in another table; but that's not how the 
legacy database was built.  I understand this can be simulated by:


select i,a[s] from (select i, generate_subscripts(string_to_array(val,' 
'),1) as s, string_to_array(val,' ') as a from t1) foo;


In my "real life" situation, the "words" are not single letters.

I'd like to write a function that removes selected "words" from t1.val 
based on select on t2.

In the above example; let's exclude all vowels, so I end up with:
1 'B C D'
2 'B D F'
3 'G H I J'

For some "words" in val; there may not be a row when joining to t2.q; 
these words must be included in final result.  In the above example; 
there is no row in t2 where q="I"; so it is included in result.


How do I write such a function?  Can it be done with SQL only?

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] How do I remove selected words from text field?

2010-06-26 Thread Frank Bax

Osvaldo Kussama wrote:

2010/6/25 Frank Bax :

I'm not quite sure how to ask for the query I want, so let's start with
data:

create table t1 (i int, val varchar);
insert into t1 values(1,'A B C D');
insert into t1 values(2,'B D E F');
insert into t1 values(3,'G H I J');
create table t2 (q varchar, z varchar);
insert into t2 values('A','vowel');
insert into t2 values('B','consonant');
insert into t2 values('E','vowel');
insert into t2 values('K','consonant');

t1.val will contain "words" separated by blanks.  It might be better if each
"word" were a separate row in another table; but that's not how the legacy
database was built.  I understand this can be simulated by:

select i,a[s] from (select i, generate_subscripts(string_to_array(val,'
'),1) as s, string_to_array(val,' ') as a from t1) foo;

In my "real life" situation, the "words" are not single letters.

I'd like to write a function that removes selected "words" from t1.val based
on select on t2.
In the above example; let's exclude all vowels, so I end up with:
1 'B C D'
2 'B D F'
3 'G H I J'

For some "words" in val; there may not be a row when joining to t2.q; these
words must be included in final result.  In the above example; there is no
row in t2 where q="I"; so it is included in result.

How do I write such a function?  Can it be done with SQL only?




Try:

SELECT i, array_to_string(array_agg(foo), ' ') "Val"
FROM (SELECT * FROM (SELECT i, regexp_split_to_table(val, E'\\s+') AS
foo FROM t1) bar
LEFT OUTER JOIN t2 ON (bar.foo = t2.q) WHERE z IS DISTINCT FROM 'vowel') foobar
GROUP BY i;
 i |   Val
---+-
 1 | B C D
 3 | G H I J
 2 | B D F
(3 linhas)

Osvaldo





Excellent!  Thanks!  I've never seen "is distinct from" before.  Looks 
like that was the missing piece to my puzzle.


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] How do I remove selected words from text field?

2010-06-26 Thread Frank Bax

Osvaldo Kussama wrote:

2010/6/25 Frank Bax :

I'm not quite sure how to ask for the query I want, so let's start with
data:

create table t1 (i int, val varchar);
insert into t1 values(1,'A B C D');
insert into t1 values(2,'B D E F');
insert into t1 values(3,'G H I J');
create table t2 (q varchar, z varchar);
insert into t2 values('A','vowel');
insert into t2 values('B','consonant');
insert into t2 values('E','vowel');
insert into t2 values('K','consonant');

t1.val will contain "words" separated by blanks.  It might be better if each
"word" were a separate row in another table; but that's not how the legacy
database was built.  I understand this can be simulated by:

select i,a[s] from (select i, generate_subscripts(string_to_array(val,'
'),1) as s, string_to_array(val,' ') as a from t1) foo;

In my "real life" situation, the "words" are not single letters.

I'd like to write a function that removes selected "words" from t1.val based
on select on t2.
In the above example; let's exclude all vowels, so I end up with:
1 'B C D'
2 'B D F'
3 'G H I J'

For some "words" in val; there may not be a row when joining to t2.q; these
words must be included in final result.  In the above example; there is no
row in t2 where q="I"; so it is included in result.

How do I write such a function?  Can it be done with SQL only?




Try:

SELECT i, array_to_string(array_agg(foo), ' ') "Val"
FROM (SELECT * FROM (SELECT i, regexp_split_to_table(val, E'\\s+') AS
foo FROM t1) bar
LEFT OUTER JOIN t2 ON (bar.foo = t2.q) WHERE z IS DISTINCT FROM 'vowel') foobar
GROUP BY i;
 i |   Val
---+-
 1 | B C D
 3 | G H I J
 2 | B D F
(3 linhas)

Osvaldo





Excellent!  Thanks!  I've never seen "is distinct from" before.  Looks
like that was the missing piece to my puzzle.

When I write this as a function...

CREATE OR REPLACE FUNCTION novowel(text) RETURNS text AS $$
  SELECT array_to_string(array_agg(word),' ') FROM
  (SELECT * FROM (SELECT regexp_split_to_table($1, E'\\s+') AS word) bar
  LEFT OUTER JOIN t2 ON (bar.word=t2.q)
  WHERE z IS DISTINCT FROM 'vowel') foo
  GROUP BY word
$$ LANGUAGE SQL

It only works when t1.val is a single word/vowel...

insert into t1 values (4,'E');

select *,novowel(val) from t1;
 i |val|  novowel
---+---+---
 1 | A B C D   | A B C D
 2 | B C D E F | B C D E F
 3 | G H I J K | G H I J K
 4 | E |
(4 rows)



--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] How do I remove selected words from text field?

2010-07-01 Thread Frank Bax


Create some tables; then add some data:

create table t1 (i int, v varchar);
insert into t1 values(1,'A B C D');
insert into t1 values(2,'B D E F');
insert into t1 values(3,'G H I J');
insert into t1 values(4,'E');
create table t2 (q varchar, z varchar);
insert into t2 values('A','vowel');
insert into t2 values('B','consonant');
insert into t2 values('E','vowel');
insert into t2 values('K','consonant');

I am trying to write a function which will:
* split the argument into "words" (separated by blanks);
* remove words that meet a certain condition in another table
  (in this example 'vowel');
* reassemble "words" into a string;
* return the result
This query does that job (Thanks Osvaldo):

SELECT i, array_to_string(array_agg(word), ' ') "new-v" FROM
(SELECT * FROM (SELECT i, regexp_split_to_table(v, E'\\s+') AS word FROM 
t1) bar

LEFT OUTER JOIN t2 ON (bar.word=t2.q)
WHERE z IS DISTINCT FROM 'vowel') foo
GROUP BY i;

 i |  new-v
---+-
 1 | B C D
 3 | G H I J
 2 | B D F
(3 rows)


When I try to create a function to do the same thing; it only works for 
(4,'E') and not the other tuples.


CREATE OR REPLACE FUNCTION notvowel(text) RETURNS text AS $$
  SELECT array_to_string(array_agg(word),' ') FROM
  (SELECT * FROM (SELECT regexp_split_to_table($1, E'\s+') AS word) bar
  LEFT OUTER JOIN t2 ON (bar.word=t2.q)
  WHERE z IS DISTINCT FROM 'vowel') foo
  GROUP BY word
$$ LANGUAGE SQL;

select *,notvowel(v::text) from t1;
 i |v| notvowel
---+-+--
 1 | A B C D | A B C D
 2 | B D E F | B D E F
 3 | G H I J | G H I J
 4 | E   |
(4 rows)


I wonder if it has something to do with pattern passed to 
regexp_split_to_table() since inside the function, E'\\s+' results in:

 i |v| notvowel
---+-+--
 1 | A B C D | C
 2 | B D E F | B
 3 | G H I J | G
 4 | E   |
(4 rows)


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] How do I remove selected words from text field?

2010-07-01 Thread Frank Bax

Osvaldo Kussama wrote:

2010/7/1 Frank Bax :

Create some tables; then add some data:

create table t1 (i int, v varchar);
insert into t1 values(1,'A B C D');
insert into t1 values(2,'B D E F');
insert into t1 values(3,'G H I J');
insert into t1 values(4,'E');
create table t2 (q varchar, z varchar);
insert into t2 values('A','vowel');
insert into t2 values('B','consonant');
insert into t2 values('E','vowel');
insert into t2 values('K','consonant');

I am trying to write a function which will:
* split the argument into "words" (separated by blanks);
* remove words that meet a certain condition in another table
 (in this example 'vowel');
* reassemble "words" into a string;
* return the result
This query does that job (Thanks Osvaldo):

SELECT i, array_to_string(array_agg(word), ' ') "new-v" FROM
(SELECT * FROM (SELECT i, regexp_split_to_table(v, E'\\s+') AS word FROM t1)
bar
LEFT OUTER JOIN t2 ON (bar.word=t2.q)
WHERE z IS DISTINCT FROM 'vowel') foo
GROUP BY i;

 i |  new-v
---+-
 1 | B C D
 3 | G H I J
 2 | B D F
(3 rows)


When I try to create a function to do the same thing; it only works for
(4,'E') and not the other tuples.

CREATE OR REPLACE FUNCTION notvowel(text) RETURNS text AS $$
 SELECT array_to_string(array_agg(word),' ') FROM
 (SELECT * FROM (SELECT regexp_split_to_table($1, E'\s+') AS word) bar


Use E'\\s+' or E'[[:space:]]+' in regexp_split_to_table function.



Both of these produce incorrect results...

  i |v| notvowel
---+-+--
  1 | A B C D | C
  2 | B D E F | B
  3 | G H I J | G
  4 | E   |
 (4 rows)
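
An untested guess at the remaining problem: with E'\\s+' the splitting is
fine, but the GROUP BY word makes the function build one row per word and
then return only the first of them.  Dropping the GROUP BY (the query should
aggregate all words for the single argument) gives:

CREATE OR REPLACE FUNCTION notvowel(text) RETURNS text AS $$
  SELECT array_to_string(array_agg(word),' ') FROM
  (SELECT * FROM (SELECT regexp_split_to_table($1, E'\\s+') AS word) bar
  LEFT OUTER JOIN t2 ON (bar.word=t2.q)
  WHERE z IS DISTINCT FROM 'vowel') foo
$$ LANGUAGE SQL;

(Word order after the join is not strictly guaranteed, though in practice it
is preserved.)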


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] What does PostgreSQL do when time goes backward?

2010-08-04 Thread Frank Bax

John Hasler wrote:

How does PostgreSQL react to time being stepped at bootup?  My Chrony
NTP package might cause it to do so on rare occasions when the hardware
clock is way off.  This would only happen during bootup.



My ntp client changes clock (by small amount) at any time:

Jul 25 05:29:38 bax ntpd[10269]: adjusting local clock by 0.098724s
Jul 25 05:31:43 bax ntpd[10269]: adjusting local clock by 0.038991s
Jul 25 06:13:38 bax ntpd[10269]: adjusting local clock by -0.037131s
Jul 25 15:01:52 bax ntpd[10269]: adjusting local clock by -0.112429s

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] What does PostgreSQL do when time goes backward?

2010-08-04 Thread Frank Bax

John Hasler wrote:

Frank writes:

My ntp client changes clock (by small amount) at any time:
Jul 25 05:29:38 bax ntpd[10269]: adjusting local clock by 0.098724s
Jul 25 05:31:43 bax ntpd[10269]: adjusting local clock by 0.038991s
Jul 25 06:13:38 bax ntpd[10269]: adjusting local clock by -0.037131s
Jul 25 15:01:52 bax ntpd[10269]: adjusting local clock by -0.112429s


Ken writes:

These do seem to be larger values than you might expect from a clock
conditioned with ntpd.  Is it a VM or is there something going on that
would stop or suspend your system?


There is certainly something wrong there.



System is not a VM; it runs 24/7 hosting for a few simple domains.  It 
runs OpenBSD, not Linux.  I believe OpenBSD has its own ntpd 
implementation.  I read once that the amount of time mentioned is how 
much time clock needs to be changed, not how much the clock will 
actually be changed.  I know nothing about ntpd internals; thanks for 
letting me hijack this thread.  The log messages may be out-of-context, 
since I excluded frequency changes.  Still something wrong?


Jul 25 00:13:25 bax ntpd[10269]: adjusting local clock by -0.038453s
Jul 25 00:42:25 bax ntpd[10269]: adjusting clock frequency by -10.296223 
to -20.774345ppm

Jul 25 02:38:31 bax ntpd[10269]: adjusting local clock by 0.039182s
Jul 25 04:56:29 bax ntpd[10269]: adjusting clock frequency by 3.332808 
to -17.441537ppm

Jul 25 05:29:38 bax ntpd[10269]: adjusting local clock by 0.098724s
Jul 25 05:31:43 bax ntpd[10269]: adjusting local clock by 0.038991s
Jul 25 06:13:38 bax ntpd[10269]: adjusting local clock by -0.037131s
Jul 25 08:26:12 bax ntpd[10269]: adjusting clock frequency by -5.876526 
to -23.318063ppm
Jul 25 11:07:08 bax ntpd[10269]: adjusting clock frequency by 16.235236 
to -7.082827ppm

Jul 25 15:01:52 bax ntpd[10269]: adjusting local clock by -0.112429s
Jul 25 15:38:06 bax ntpd[10269]: adjusting clock frequency by -9.112840 
to -16.195667ppm
Jul 25 20:42:03 bax ntpd[10269]: adjusting clock frequency by 1.838351 
to -14.357316ppm
Jul 26 05:57:41 bax ntpd[10269]: adjusting clock frequency by -0.485347 
to -14.842663ppm
Jul 26 15:48:01 bax ntpd[10269]: adjusting clock frequency by -0.130635 
to -14.973298ppm
Jul 26 23:51:48 bax ntpd[10269]: adjusting clock frequency by -0.632588 
to -15.605885ppm
Jul 27 09:14:44 bax ntpd[10269]: adjusting clock frequency by 0.761208 
to -14.844677ppm

Jul 27 11:33:29 bax ntpd[5857]: adjusting local clock by 0.332560s
Jul 27 11:57:09 bax ntpd[5857]: adjusting clock frequency by -0.180636 
to -15.025316ppm
Jul 27 12:16:16 bax ntpd[5857]: adjusting clock frequency by 0.153771 to 
-14.871545ppm
Jul 27 12:38:36 bax ntpd[5857]: adjusting clock frequency by -0.314905 
to -15.186450ppm
Jul 27 22:30:47 bax ntpd[5857]: adjusting clock frequency by -0.058928 
to -15.245378ppm
Jul 28 22:00:57 bax ntpd[5857]: adjusting clock frequency by 0.508728 to 
-14.750045ppm
Jul 29 07:09:05 bax ntpd[5857]: adjusting clock frequency by 0.056698 to 
-14.693347ppm
Jul 29 21:25:01 bax ntpd[5857]: adjusting clock frequency by -0.233626 
to -14.926974ppm
Jul 30 06:28:33 bax ntpd[5857]: adjusting clock frequency by -0.255362 
to -15.182336ppm
Jul 30 15:45:17 bax ntpd[5857]: adjusting clock frequency by 0.291147 to 
-14.891189ppm
Jul 31 11:57:51 bax ntpd[5857]: adjusting clock frequency by 0.198529 to 
-14.725782ppm
Aug  1 08:47:12 bax ntpd[5857]: adjusting clock frequency by -0.063213 
to -14.806622ppm
Aug  2 12:19:10 bax ntpd[5857]: adjusting clock frequency by 0.096080 to 
-14.759072ppm
Aug  3 01:30:19 bax ntpd[5857]: adjusting clock frequency by -0.353416 
to -15.112489ppm
Aug  3 08:10:34 bax ntpd[5857]: adjusting clock frequency by 0.251130 to 
-14.861359ppm
Aug  3 20:29:11 bax ntpd[5857]: adjusting clock frequency by -0.105335 
to -14.966694ppm

Aug  4 07:22:22 bax ntpd[27189]: adjusting local clock by -0.097095s
Aug  4 07:49:26 bax ntpd[27189]: adjusting clock frequency by -2.970136 
to -17.898726ppm
Aug  4 08:04:04 bax ntpd[27189]: adjusting clock frequency by 2.495076 
to -15.403651ppm


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Question regarding indices

2010-09-14 Thread Frank Bax

Steve wrote:

 Original-Nachricht 

Datum: Sat, 11 Sep 2010 11:08:00 -0400
Von: Lew 
An: pgsql-sql@postgresql.org
Betreff: Re: [SQL] Question regarding indices



On 09/11/2010 08:29 AM, Steve wrote:

I have a small question about the order of values in a query.
Assume I have a table with the following fields:
   uid INT,
   data BIGINT,
   hits INT
And an unique index on (uid, data). I use libpq C API to query
data from the table. The query is something like this:
SELECT uid,data,hits FROM mytable WHERE uid=2
AND data IN (2033,2499,590,19,201,659)

Would the speed of the query be influenced if I would sort the data?

What do you mean by "sort the data"?  Which data?


I mean sorting the values in the brackets. Instead of:
SELECT uid,data,hits FROM mytable WHERE uid=2 AND data IN 
(2033,2499,590,19,201,659)

I would then send this here:
SELECT uid,data,hits FROM mytable WHERE uid=2 AND data IN 
(19,201,590,659,2033,2499)

Of course this is a small dataset but the query usually has thousands of 
elements and not only the above 6 elements.



If there will be thousands; why not create a temp table containing these 
values then join to table - might that be faster?
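
A sketch of that approach, using the column names from the example (how the
temp table gets populated - COPY or batched INSERTs - is up to the client):

CREATE TEMP TABLE wanted (data BIGINT);
INSERT INTO wanted VALUES (19), (201), (590), (659), (2033), (2499);

SELECT m.uid, m.data, m.hits
FROM mytable m
JOIN wanted w ON w.data = m.data
WHERE m.uid = 2;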


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

