Re: [GENERAL] Problem merging two rows into same primary key

2005-05-24 Thread Patrik Kudo

Hi and thanks for your reply!

Martijn van Oosterhout wrote:

Now to the problem. We want to merge rows with id = 2 and id = 4 into id
= 1 in the asdf table, with the qwert table being updated to reflect the
change. The desired result would yield:



Why doesn't:

update qwert set data = 1 where data = 2;
update qwert set data = 1 where data = 4;
delete from asdf where id in (2,4);

work?

>
> I thought update cascade only took effect when the primary key changed,
> it updated referencing tables, not the other way round.

Sure it will work, but it's quite a bit of work, since there are a LOT of
tables that need to be updated. We were hoping there was an easier way.
Before actually looking at how things work, we were hoping it would be
possible to somehow take advantage of the "on update cascade" on the
foreign keys by first dropping the uniqueness from the primary key index.
But the more I think about it, the more impossible it seems. :(


Oh, well... I guess we'll go with the massive update route.
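
For what it's worth, the per-table updates don't have to be written by hand: the system catalogs know every foreign key that points at asdf, so the statements can be generated. A rough sketch, assuming single-column foreign keys only (conkey[1]); the generated statements still have to be run by hand afterwards:

```sql
-- Generate one UPDATE per table/column referencing asdf(id),
-- rewriting the old ids 2 and 4 to the surviving id 1.
SELECT 'UPDATE ' || c.conrelid::regclass
    || ' SET '   || a.attname || ' = 1'
    || ' WHERE ' || a.attname || ' IN (2, 4);'
FROM pg_constraint c
JOIN pg_attribute a ON a.attrelid = c.conrelid
                   AND a.attnum  = c.conkey[1]
WHERE c.contype = 'f'
  AND c.confrelid = 'asdf'::regclass;
```

After running the generated updates in the same transaction, the `delete from asdf where id in (2,4);` step finishes the merge.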

Thanks,
Patrik

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


[GENERAL] Problem merging two rows into same primary key

2005-05-23 Thread Patrik Kudo

Hi!

I've got a problem I can't seem to find an answer to. The problem is
simplified by this example:

1. We have two tables:

create table asdf (id serial primary key,
                   data text);
create table qwert (id serial,
                    data integer references asdf
                        on delete cascade on update cascade);

2. We populate both tables with the following result:

keytest=# select * from asdf;
 id | data
----+------
  1 | asdf
  2 | asd2
  3 | asd3
  4 | asd4
(4 rows)

keytest=# select * from qwert;
 id | data
----+------
  1 |    2
  2 |    4
(2 rows)


Now to the problem. We want to merge rows with id = 2 and id = 4 into id
= 1 in the asdf table, with the qwert table being updated to reflect the
change. The desired result would yield:

keytest=# select * from asdf;
 id | data
----+------
  1 | asdf
  3 | asd3
(2 rows)

keytest=# select * from qwert;
 id | data
----+------
  1 |    1
  2 |    1
(2 rows)


I can find no way to do this, because the primary/foreign keys that would
make it easy actually make it impossible. Is there any smart way to do
this, or do I need to drop the primary key (and hence also drop the
foreign keys, since the drop will cascade), update the data manually and
then recreate the constraints? I hope there's an easier way, because in
the real scenario we're dealing with nearly 100 tables depending on that
single one with the primary key...

Thanks in advance,
Patrik Kudo



Re: [GENERAL] Case sensitivity issue

2001-09-14 Thread Patrik Kudo

select * from Apples where lower(color) like '%red%';
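
PostgreSQL also has a case-insensitive variant of LIKE, spelled ILIKE, which gives the same result without the lower() call:

```sql
select * from Apples where color ilike '%red%';
```

(With a pattern that starts with '%', neither form can use an ordinary index, so pick whichever reads better.)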

ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol
Does it seem unclear? Just ask!

On 9 Sep 2001, Michael Gay wrote:

> If I am doing a command such as
>
> select * from Apples where color like '%red%';
>
> how do I make it case insensitive?
>
> Thanks.
>
> mike
>





Re: [GENERAL] get certain # of recs

2001-09-14 Thread Patrik Kudo

On Thu, 13 Sep 2001, Mike S. Nowostawsky wrote:

> How can one select only a certain number of records in Postgres?
>
> In other SQL langs we can use, say, for the first 20 recs:
>
> select  * from tablename where rownum < 21;

You could use

select * from tablename limit 20

If you need to sort in some way you could use something like

select col1, col2, ... coln from tablename order by col1 limit 20
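
If you also need to skip rows, say for paging through results, LIMIT combines with OFFSET. A small sketch using the same hypothetical table:

```sql
-- rows 21 through 40, e.g. the second page of a listing
select col1, col2 from tablename order by col1 limit 20 offset 20;
```

Without an ORDER BY, which 20 rows you get is arbitrary, so an explicit sort is a good idea whenever OFFSET is involved.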

Regards,
Patrik Kudo





Re: [GENERAL] Yet another Performance Question

2001-04-18 Thread Patrik Kudo

Hi!

I'm not 100% sure, but I think it would be much faster to use COPY
instead of INSERT when reading a lot of data in from a file.
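
A minimal sketch; the table name and file path here are only placeholders. COPY reads a file on the server side, while psql's \copy streams the same data from the client:

```sql
-- server-side load: the file must be readable by the backend process
COPY tablename FROM '/tmp/data.txt';

-- from psql, the client-side equivalent:
-- \copy tablename from 'data.txt'
```

If COPY isn't an option, wrapping all the INSERTs in a single transaction already avoids a per-row commit and helps a lot.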

Regards,
Patrik Kudo

--
ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol
Does it seem unclear? Just ask!

On Wed, 18 Apr 2001, Konstantinos Agouros wrote:

> Frankly, what matters is the time it takes altogether. I have a script that
> first does a few million inserts, and then queries on this data... these queries
> don't terminate in a reasonable (<10 days) time if I don't use indices.
>
> So I will add some drop/create index-commands to the script...
>
> Konstantin





Re: [GENERAL] problem with subselect: NOT IN

2001-04-02 Thread Patrik Kudo

Hi

To start with, I think your queries will be faster if you don't use IN
but instead use regular joins or EXISTS whenever possible.

On Mon, 2 Apr 2001, Kevin L wrote:

> The following works fine: (get all employees who have sold
> something)
>
> SELECT emp_id FROM employee WHERE emp_id IN (SELECT emp_id FROM
> salesorder);

The first one will probably be faster rewritten as a plain join:

SELECT DISTINCT e.emp_id FROM employee e, salesorder s WHERE e.emp_id =
s.emp_id;

Or, probably a bit slower, as an EXISTS:

SELECT e.emp_id FROM employee e WHERE EXISTS (SELECT 1 FROM salesorder s
 WHERE e.emp_id = s.emp_id);

> However, getting employees who have NOT sold something always
> returns zero rows:
>
> SELECT emp_id FROM employee WHERE emp_id NOT IN (SELECT emp_id
> FROM workorder);

Hmm... That should work, but I notice that the first query uses
"salesorder" while the second uses "workorder". Could that be where
the fault is?

You might also want to try the following:

SELECT e.emp_id FROM employee e WHERE NOT EXISTS (SELECT 1 FROM salesorder s
  WHERE e.emp_id = s.emp_id)
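
One more equivalent that often plans well is an outer-join anti-join. It is also worth checking whether the subquery's emp_id column can be NULL: `x NOT IN (..., NULL)` can never be true, so a single NULL in the subquery makes NOT IN return zero rows, exactly the symptom described; NOT EXISTS and the join below don't have that problem.

```sql
-- employees with no matching salesorder row
SELECT e.emp_id
FROM employee e LEFT JOIN salesorder s ON e.emp_id = s.emp_id
WHERE s.emp_id IS NULL;
```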


Regards,
Patrik Kudo

> Has anyone encountered this before? I know the second query
> should return something because the data is in the table.
>
> thanks!
>
> -Kevin





[GENERAL] Thought on OIDs

2001-03-01 Thread Patrik Kudo

Hi!

A thought just hit me and I got a bit worried... If OIDs are "globally"
unique and I have a very high data throughput on my database, i.e. I do a
lot of inserts and deletes, is it then possible to "run out" of OIDs? If
that can happen, will it cause any problems?

Need I worry? =)

Regards,
Patrik Kudo

(I'm sorry if this becomes a repeated post. I tried to mail last night,
but my subscription to the list wasn't registered at that time, so the
mail didn't seem to get through)