[SQL] SQL query --- big question???

2004-03-12 Thread Andrei Bintintan



Hi to all,

I have a big question.

Can I write a query/function/aggregate function (whatever) so that I can
return the result of a query in a single line? I don't know whether this is
possible; if it is not, please say so.

I'd better give an example:

table (id, name)
1 xxx
2 yyy
3 zzz

Now I want to have a query like:

SELECT id FROM table

and the result to be something like: 1,2,3

and not 1, 2, 3 on different lines.

I just want to concatenate the results into a single value, so the query
result will have 1 row and 1 column. This looks like an aggregate function,
but I don't know how to write one.

I hope you will understand what I want to do.

Please advise.

Thanks.
 
Andy.
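Not part of the original message, but a sketch of one way this has been done on older PostgreSQL releases: a custom aggregate built on the built-in textcat() function. The aggregate and table names are illustrative:

```sql
-- accumulate values by string concatenation
CREATE AGGREGATE comma_concat (
    BASETYPE = text,
    SFUNC    = textcat,
    STYPE    = text,
    INITCOND = ''
);

-- builds '1,2,3,' and trims the trailing separator;
-- row order is not guaranteed without further work
SELECT rtrim(comma_concat(id::text || ','), ',') FROM mytable;
```

On modern PostgreSQL the built-in string_agg(id::text, ',') does the same job directly.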


Re: [SQL] Exceptions when 0 rows affected.

2004-05-10 Thread Andrei Bintintan
Hi Stef,

Your problem depends on what interface/programming language you're using.
In a BEGIN ... END transaction you have to verify whether the returned data
is acceptable, and based on that decide whether to run "COMMIT" or
"ROLLBACK" at the end.

In PHP (and wherever the same pattern applies) I suggest this:

//this is PHP code:

$error = false;
dbExec("BEGIN");

// A switch statement lets us jump out with a break at any time.
switch (1) {
case 1:
    ...
    $sql = "INSERT ...";
    if (!dbExec($sql)) { $error = true; break; } // error? jump out of the switch
    ...
    $sql = "SELECT ...";
    if (!dbExec($sql)) { $error = true; break; }
    ...
    $sql = "UPDATE ...";
    if (!dbExec($sql)) { $error = true; break; }
    ...
} //switch

if ($error) dbExec("ROLLBACK"); // an error occurred
else        dbExec("COMMIT");   // there was no error

Hope this helps.

Best regards,
Andy.

- Original Message -
From: "Stef" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, May 10, 2004 4:08 PM
Subject: [SQL] Exceptions when 0 rows affected.

Hi all,

I've been looking on Google, the archives and the documentation,
but cannot find what I'm looking for. Maybe I've read the
answer, but it's still evading me.

I'm working with postgres 7.3.4 using the psql client.

I want to know if it's possible to raise an error in a
transactional BEGIN->END block when e.g. an update
or delete statement affects zero rows.

I want the entire transaction block to rollback when
this happens. I can do it in perl and java , but would
in this instance like to load a file with SQL statements
using psql -f . Is it possible?

Kind Regards
Stefan




---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] not really SQL but I need info on BLOBs

2004-05-05 Thread Andrei Bintintan
I suggest storing the images on disk and writing only the path into the db.
I am using the same solution. The only problem is that (in my case) I have
to make the backups separately. It also makes accessing the images easier:
you don't have to hit the database for each image, you just read the file
path.

Think also of the dump size of the database with blobs: if you have 100 GB
of photos, the dump will be that big as well.

I never tried to put the database on 2 disks; I read somewhere that it is
possible, but I never tried it. In my situation I don't need this, because
the size of the DB is not that big. But for the photos we will use more
disks (we are not using 2 yet), and in this case it makes no difference how
many disks you use, because in the DB you store only the path.

This way seems simpler to me; maybe others have other opinions.

Best regards,
Andy.

- Original Message -
From: "Theodore Petrosky" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, May 05, 2004 6:10 PM
Subject: [SQL] not really SQL but I need info on BLOBs


> Hi,
>
> I am starting a new project where I need to store a
> lot of tiff images. So the question beckons... which
> is better (not correct)?
> store a blob in the database and let the db manage the
> data or
> store the image in a directory and store the path to
> the data in the db.
>
> What are your opinions? Is one method better than the
> other or are there simply different trade offs?
>
> If I store the tiff on the computer and only store the
> path in the database it would be easier to add more
> disk space to the computer without any changes to the
> db. Also, there would be 'virtually' unlimited storage
> space (only limited by the OS)
>
> However, it looks like it would be easier to create a
> connection to the database to access the blobs.
>
> I would love to hear not just what others have done
> but why they did it?
>
> Thanks for listening...
>
> BTW, I really like the zeroconf (rendezvous) support
> in postgresql.
>
> Ted
>
>
>
>
> __
> Do you Yahoo!?
> Win a $20,000 Career Makeover at Yahoo! HotJobs
> http://hotjobs.sweepstakes.yahoo.com/careermakeover
>





Re: [SQL] not really SQL but I need info on BLOBs

2004-05-06 Thread Andrei Bintintan
Of course it depends on the application you are building where you put the
files. In my situation it is not so complicated, because we don't modify
the images, so from this point of view they are "read-only". Our clients
who ask for photos have web access to a folder where we put the photos and
we just give them the right link, or they receive the photos in PDF
documents.

I have the images on the same server as the database, so accessing them is
not a problem. Even if you have to mount them from another system, I see no
problem with that. Only if you have to write the image file back do you
have to grant write access to it. In our situation the image files are
"web-accessed", so there is NO way we give clients write access to that
folder.

Honestly, the size of the DB is what worries me once you put the images in
the database: you cannot make simple dumps any more. I have at the moment
12 GB of photos, and the backup script for them already takes quite some
time.

There is another risk when you put the photos on disk: you may have the
link in the DB but not the photo, or the file may be corrupted. This can
happen.

It is up to you to decide. I don't know whether there are performance
losses, but I would expect reading an image from the DB to take longer than
reading it from a file.

I know a lot of people who store the images on disk and not in the
database. If you store the images in the DB, handling becomes more
complicated: you always have to read the image and create a temporary file
in order to work with it, whereas with plain files you just copy them. I
think storage on disk is more flexible.

But... search the forums, or wait and hope that somebody else answers as
well.

Best regards,
Andy.


- Original Message -
From: "Theodore Petrosky" <[EMAIL PROTECTED]>
To: "Andrei Bintintan" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, May 06, 2004 2:46 PM
Subject: Re: [SQL] not really SQL but I need info on BLOBs


> Thanks for the reply. Are there (in your opinion)
> reasons why you would choose to store the images in
> the db? My feelings are that it comes down to access.
>
> If I store the images in the db, my only access to
> these images is to retrieve the image do some work on
> it and reupload it. If I store the image as a file and
> link to it, I can edit the image to my heart's content
> by mounting the volume and opening it in an image
> editor and saving. there is nothing more to do.
>
> archiving and backups are easier also. my tape system
> talks to my server and backs up the images. I believe
> it would be easier to recover from a problem.
>
> in your solution, how do you retrieve your image? do
> you need to mount the volume where the images live? I
> guess I am asking "If your user/client queries the db
> for an image, they may want to view it? How do you
> send them the data?"
>
> I am working on a project where the IT department is
> pushing really hard to have all the images in the db.
> I don't know what the agenda is. I am hopeful to come
> up with reasons either why this is good or not good.
>
> Thanks,
>
> Ted
>
> --- Andrei Bintintan <[EMAIL PROTECTED]> wrote:
> > I suggest to store the images on the disc and write
> > in the db only the path.
> > I am using the same solution. The only problem is
> > that I have to make the
> > backups separately(in my case). Also this helps more
> > in accessing the
> > images, you don't have to access for each image the
> > database, you just have
> > to get the file path.
> >
> > Think also at the dump size of the database (with
> > blobs). (if you have 100GB
> > of photos..the dump will be also that big)
> >
> > I never tied to put the database on 2 disks, I read
> > somewhere that is
> > possible but I never tried it. In my situation I
> > don't need this, because
> > the size of the DB is not quite that big. But for
> > the photos we will use
> > more discs (we are not using 2 yet) and there is no
> > difference in this case
> > how many discs you use, because in the DB you store
> > only the path.
> >
> > I see it this way more simple, maybe other have
> > other opinions.
> >
> > Best regards,
> > Andy.
> >
> > - Original Message -
> > From: "Theodore Petrosky" <[EMAIL PROTECTED]>
> > To: <[EMAIL PROTECTED]>
> > Sent: Wednesday, May 05, 2004 6:10 PM
> > Subject: [SQL] not really SQL but I need info on
> > BLOBs
> >
> >
> > > Hi,
> > >

Re: [SQL] start

2004-05-04 Thread Andrei Bintintan
You say that you're running SuSE 8.

You can also start the process with YaST:
Start YaST2 -> System -> Runlevel Editor -> then select the runlevels in
which postgres should start. I start it at runlevel 3.
If you installed postgres manually you have a little work to do; you will
probably have to write your own start script.
In my situation (also a manual installation) I just copied the script that
came with SuSE and edited it for my configuration... it worked for me.

Good luck!


- Original Message - 
From: "H.J. Sanders" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, May 03, 2004 10:36 PM
Subject: [SQL] start


> 
> Hello list.
> 
> 
> I have difficulties starting the postmaster automatically at boot time
> (everything I tried is done by 'root').
> 
> Can someone give me an example for LINUX (SUSE 8).
> 
> Many thanks.
> 
> 
> Henk Sanders
> 
> 





Re: [SQL] Database structure

2004-05-04 Thread Andrei Bintintan
I would go for the second one. I don't think the size of the table is a
problem; you will just have to create the right indexes for easy joins.

Note: "For one assessment, I'll store 60 rows with only two useful
integers in it"... why? Better to make a "lab_test" table that holds the
lab tests, and write into results(#assessment_nr, labtest_nr, p, d) only
the data you actually have. For example, if for assessment no. 3000 you
only have the results for lab tests 10-40, why write lab tests 40-70 into
the DB as well if you don't have them??? (If I misunderstood this, sorry
for the observation.)

The second option is also better if you ever change the lab-test list
(worth thinking about if the database is meant to last at least 10 years).
With the first solution you would always have to add a new column, and
that is not the "best" option. With the second you just add a new ID to
the lab-test list and you're done. No problems.

If you go for the first option and you have to change something in the
results table... it won't be easy.

The ALTER TABLE is not as tragic as it seems... use constraints... don't
ever erase from the DB.

So... my final answer: the second option.
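A sketch of the second option with the indexes mentioned above (table and index names are illustrative):

```sql
CREATE TABLE results (
    assessment_nr int4 NOT NULL,
    labtest_nr    int4 NOT NULL,
    p             int4,
    d             int4,
    -- also serves lookups/joins by assessment
    PRIMARY KEY (assessment_nr, labtest_nr)
);

-- for statistics filtered by lab test across assessments
CREATE INDEX results_labtest_idx ON results (labtest_nr);
```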

Best regards,
Andy.

- Original Message -
From: "Alain Reymond" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, April 30, 2004 6:06 PM
Subject: [SQL] Database structure


> Hello,
>
> I would like an advise on the following problem :
>
> I have a table of patients.
> Each patient can make different biological assessments.
> Each assessment is always decomposed into different laboratory tests.
> A laboratory test is made of a test number and two values coming from
analysers.
>
> The schema is :
> Patients(#patient_nr,name,etc...)
> Assessment(#assessment_nr, #patient_nr, assessment_type, other usefull
values).
> Assessment_types(assessment_type, labtest_nr)
> An assessment is composed of different tests, let's say assessment type 1
is
> composed of lab test 1,2,3,5 and assessment type 2 of lab test number 10
to 70.
>
> I have an assessment with 60 different lab tests (always the same). I have
two ways
> for storing the values :
>
> 1 - a table with 120 columns for the two values.
> results(#assessment_nr, p10,d10, p11,d11, .,p70,d70).
> where 10 to 70 represents the lab test number.
>
> 2 - a table with 60 rows for one assessment :
> results(#assessment_nr, labtest_nr, p, d) where p and d are my two
results.
>
> Here comes my question. Which of the two would you choose?
>
> The first solution has the advantage of returning one single row for one
complete
> assessment. If I have to make statistics, it is easy. But, if I have to
modify the
> composition of an assessment (which occurs very rarely), I shall have to
use an alter
> table instruction. As I have 4 different assessment types, I have to
create five
> different tables, one per assessment type.
>
> The second solution is normalized and more elegant. But I am preoccupied
by the
> size of the table. For one assessment, I'll store 60 rows with only two
useful integers
> in it. And you must add the size of the index. With 25.000 assessments a
year, it
> makes 1.500.000 rows with only 4 columns, among them 2 only for the
results and 2
> for identification. I would like to store 10 years online, so 15.000.000
rows. What
> about the size of index ?
>
> Any advise ? I thank you in advance.
>
>
> Alain Reymond
>
> (I hope that it is clear enough with my bad English).
>
>





Re: [SQL] where to start with this procedure ?

2004-05-20 Thread Andrei Bintintan
Hi,

Use OFFSET and LIMIT in the SQL query.

http://developer.postgresql.org/docs/postgres/queries-limit.html

For example:

SELECT select_list
FROM table_expression
WHERE condition
Let's suppose that this query normally returns 30000 elements.

SELECT select_list
FROM table_expression
WHERE condition
LIMIT 50
OFFSET 10000

This query will return 50 elements starting with the 10001st element... so
the elements from 10001 to 10050.

Hope that helps.

Best regards,
Andy.

- Original Message -
From: "Andreas" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, May 20, 2004 3:33 AM
Subject: [SQL] where to start with this procedure ?


> Hi,
> I know a bit of SQL but not exactly a lot so I ran into this problem.
>
> I have tables on the server that get joined in a view.
> That's OK.
>
> Now I need just a couple of records, say 10-100 out of 30 000, which could
> easily be filtered by an integer key.
> As of now I have to pull the whole lot into Access and let it do the
> filtering. That doesn't play nice on our network.
>
> Could someone kick me into the right direction where to learn stuff like:
>
> function grabem(x integer) recordset
> (
>grabem = select * from my_view where key = x
> )
>
>
>
>





[SQL] UNIQUE columnt depdening on other column???

2004-06-02 Thread Andrei Bintintan



Hi,

I have a problem.

Let's say I have the following table:

CREATE TABLE rekl_element(
    id serial PRIMARY KEY,
    active boolean NOT NULL DEFAULT 'y',
    num int4 NOT NULL,
    text varchar(10)
);

Now I want the "num" column to be "unique", but only for those rows that
have active='y'. For the rows that have active='f' I don't care whether
num is unique or not. I'm asking this because num will sometimes be
duplicated.

How can I write a constraint, trigger, function... etc. to check this?
Thanks in advance.
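A sketch of one approach (the same pattern appears in a later thread in this archive): a partial unique index, i.e. a unique index with a WHERE clause, enforces uniqueness only over the rows matching the predicate. The index name is illustrative:

```sql
-- uniqueness of num is enforced only among rows with active = true
CREATE UNIQUE INDEX rekl_element_num_active_idx
    ON rekl_element (num)
    WHERE active;
```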

Andy.


[SQL] Reference with condition on other table column?

2004-06-03 Thread Andrei Bintintan



Hi to all, 

 
I have the following tables:

CREATE TABLE t1(
    id serial PRIMARY KEY,
    active boolean NOT NULL DEFAULT 'y',
    num int4 NOT NULL
);
CREATE UNIQUE INDEX t1_uniqueidx ON t1(num) WHERE active;

CREATE TABLE t2(
    id serial PRIMARY KEY,
    active boolean NOT NULL DEFAULT 'y',
    num int4 NOT NULL
);
CREATE UNIQUE INDEX t2_uniqueidx ON t2(num) WHERE active;

CREATE TABLE relations(
    id serial PRIMARY KEY,
    id_t1 int4 NOT NULL REFERENCES t1(num),
    id_t2 int4 NOT NULL REFERENCES t2(num)
);

On tables t1 and t2 the "num" columns have unique values for all rows that
have active='y' (true). How can I write a constraint on t1 and t2 so that
if the "num" from t1 or the "num" from t2 is referenced from table
"relations", then I cannot update the "active" field to false? My goal is
that there is never a reference from the "relations" table to a row of t1
or t2 whose active field is 'n' (false). In other words: if a row from
t1/t2 is referenced from table "relations", I don't want to be able to set
active='n'.

I hope I was as clear as possible.

Thanks in advance for helping.
Andy.
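Not from the original thread, but a sketch of one possible answer: a BEFORE UPDATE trigger that rejects the change while the row is still referenced. It assumes PL/pgSQL is installed; names are illustrative, and t2 would need the analogous function and trigger:

```sql
CREATE FUNCTION t1_active_guard() RETURNS trigger AS '
BEGIN
    -- refuse to deactivate a row that relations still points at
    IF OLD.active AND NOT NEW.active
       AND EXISTS (SELECT 1 FROM relations WHERE id_t1 = OLD.num) THEN
        RAISE EXCEPTION ''t1.num % is still referenced from relations'', OLD.num;
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER t1_active_guard_trg
    BEFORE UPDATE ON t1
    FOR EACH ROW EXECUTE PROCEDURE t1_active_guard();
```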


Re: [SQL] Last insert id

2004-06-15 Thread Andrei Bintintan
"Is it safe to use "select max(table1_id) from table1" after the insert?"

Yes it is safe, but ONLY if you use it inside a transaction.(BEGIN/COMMIT).
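For reference, a sequence-based sketch that avoids the max() scan entirely: currval() is local to the session, so it returns the value just generated by this session's INSERT even with concurrent writers. The sequence name below assumes the default naming for a serial column table1_id on table1:

```sql
INSERT INTO table1 (...) VALUES (...);

-- default sequence name for serial column table1_id on table1
SELECT currval('table1_table1_id_seq');
```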

BR.


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of mixo
Sent: Wednesday, June 09, 2004 9:24 AM
To: [EMAIL PROTECTED]
Subject: [SQL] Last insert id

I have three tables which are related by a serial field, table1_id, in one
of the tables. Updating the tables is done through a transaction. My
problem is, once I have inserted a row in the first table with table1_id, I
need it for the other two tables. How can I get it? Is it safe to use
"select max(table1_id) from table1" after the insert?






[SQL] How to rename a constraint/trigger??

2004-09-06 Thread Andrei Bintintan



Hi to all,

I have a database in whose tables there are around 100 constraints (links
to other tables) that either have no name ("<unnamed>") or have names like
"$1", "$2". Now I have a module based on the same structure, but I get
some query errors from an "<unnamed>" constraint. I really don't know
exactly which constraint generates the error; in that table I have 12 such
constraints.

Does anyone have a script, or does a command exist, so that I can rename
all the "<unnamed>" constraints in this database?

I'm using postgres 7.3.2.

Thanks,
Andy.
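Not a rename, but a sketch for at least locating the offending constraints via the system catalogs (pg_constraint exists as of 7.3; the query is illustrative):

```sql
-- list constraints that are unnamed or auto-named like $1, $2, ...
SELECT conname, conrelid::regclass AS tablename, contype
FROM pg_constraint
WHERE conname = '<unnamed>' OR conname ~ '^\\$[0-9]+$';
```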
 
 


Re: [SQL] sql

2004-10-25 Thread Andrei Bintintan

> 2) if two or more records have the same maximum value of "ora",
> select only one of them

At this point you really have to decide which criterion you use for this
choice.

I strongly suggest using a unique ID for each row in the table (in your
case it is important):
CREATE TABLE "temp50" (
id serial primary key,
  "gc" character varying(36),
  "co" character varying(7),
  "data" date,
  "ora" smallint
 );

Your selection can be made in many ways, but you have to decide the
secondary criterion.
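As a sketch, one common way to express "one row per gc with the maximum ora" (assuming "highest id wins" as the tie-breaker; DISTINCT ON is a PostgreSQL extension):

```sql
-- one row per gc: the one with the highest ora, ties broken by id
SELECT DISTINCT ON (gc) gc, co, data, ora
FROM temp50
ORDER BY gc, ora DESC, id DESC;
```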

One more thing. Your table's data is:
1 >> 5003707G9G9419 22012BB 10-14-2004 82
2 >> 5003707G9G9419 22012BC 10-14-2004 44
3 >> 5003707G9G9419 22022BB 10-14-2004 82
4 >> 5003707G9G9420 22022BC 10-18-2004 49

Rows 1 and 3 share the same "gc", "data" and "ora" values. Do you really
want this redundancy???
Best regards,
Andy.

- Original Message -
From: "cristivoinicaru" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, October 25, 2004 11:13 AM
Subject: [SQL] sql


> I have a postgres table like this:
>
> CREATE TABLE "temp50" (
>  "gc" character varying(36),
>  "co" character varying(7),
>  "data" date,
>  "ora" smallint
>
> );
>
> It contains the following records:
>
> 5003707G9G9419 22012BB 10-14-2004 82
> 5003707G9G9419 22012BC 10-14-2004 44
> 5003707G9G9419 22022BB 10-14-2004 82
> 5003707G9G9420 22022BC 10-18-2004 49
>
> I'd like the result of the SQL query to be like this:
>
> 5003707G9G9419 22012BB 10-14-2004 82
> 5003707G9G9420 22022BC 10-18-2004 49
>
>
> Explanations:
> I want the query to select only one record from each "gc" group
> (records that have the same code "gc"; that means the "gc" field will be
> a unique key), with the following two conditions:
> 1) from each "gc" group of records, select the record that has the
> maximum value of the "ora" field.
> 2) if two or more records have the same maximum value of "ora",
> select only one of them
>
> Thanks!
>
>





[SQL] A transaction in transaction? Possible?

2004-11-09 Thread Andrei Bintintan




Is it possible to have another transaction in a transaction??? In the
following example the last ROLLBACK (transaction 1) is totally ignored.

// connect to database
$database = dbConnect($dbhost, $dbuser, $dbpass, $dbname);

dbExec($database, "BEGIN"); // transaction 1
//*
    dbExec($database, "BEGIN"); // transaction 2
    $sql = "UPDATE orders SET technikernotiz='51' WHERE id=16143";
    dbExec($database, $sql);
    dbExec($database, "COMMIT"); // transaction 2
/**/
    $sql = "UPDATE orders SET reklamationsdetail='51' WHERE id=16143";
    dbExec($database, $sql);
dbExec($database, "ROLLBACK"); // transaction 1

dbClose($database);


[SQL] upper/lower for german characters

2004-11-09 Thread Andrei Bintintan



Hi to all,

I have the problem that:
select lower('MöBÜEL') or select upper('MöBÜEL')
are not working well.
I read on some forums that there is some locale setting that needs to be
done here, but I could not fix this.
I am using the ASCII encoding.

Please advise.
Thanks.
Andy.


Re: [SQL] A transaction in transaction? Possible?

2004-11-10 Thread Andrei Bintintan
Okay, I see you're speaking about pgsql 8.0

What about 7.4?

Andy.

- Original Message -
From: "Achilleus Mantzios" <[EMAIL PROTECTED]>
To: "Michael Fuhr" <[EMAIL PROTECTED]>
Cc: "Riccardo G. Facchini" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>; "Theodore Petrosky" <[EMAIL PROTECTED]>; "Andrei
Bintintan" <[EMAIL PROTECTED]>; "sad" <[EMAIL PROTECTED]>
Sent: Wednesday, November 10, 2004 12:58 PM
Subject: Re: [SQL] A transaction in transaction? Possible?


> O Michael Fuhr έγραψε στις Nov 10, 2004 :
>
> > On Wed, Nov 10, 2004 at 12:45:19AM -0800, Riccardo G. Facchini wrote:
> >
> > > Sorry, but I understand that your example is not really about nested
> > > transactions, but about sequential transactions.
> >
> > Here's a more elaborate example.  If this doesn't demonstrate the
> > capability you're looking for, then please provide an example of
> > what you'd like to do and describe the desired behavior.
> >
> > CREATE TABLE person (id SERIAL PRIMARY KEY, name TEXT NOT NULL);
> >
> > BEGIN;
> > INSERT INTO person (name) VALUES ('Alice');
> >
> > SAVEPOINT s1;
> > INSERT INTO person (name) VALUES ('Bob');
> >
> > SAVEPOINT s2;
> > INSERT INTO person (name) VALUES ('Charles');
> >
> > SAVEPOINT s3;
> > INSERT INTO person (name) VALUES ('David');
> > ROLLBACK TO s3;
> >
> > INSERT INTO person (name) VALUES ('Edward');
> > ROLLBACK TO s2;
> >
> > INSERT INTO person (name) VALUES ('Frank');
> > RELEASE s1;
> >
> > INSERT INTO person (name) VALUES ('George');
> > COMMIT;
>
> Just a very naive thought
> Wouldn't it make more sense to allow nested begin/commit/rollback blocks?
>
> >
> > SELECT * FROM person;
> >  id |  name
> > ----+--------
> >   1 | Alice
> >   2 | Bob
> >   6 | Frank
> >   7 | George
> >
> > If you change "ROLLBACK TO s2" to "RELEASE s2" then you get this:
> >
> >  id |  name
> > ----+---------
> >   1 | Alice
> >   2 | Bob
> >   3 | Charles
> >   5 | Edward
> >   6 | Frank
> >   7 | George
> >
> > If you change "RELEASE s1" to "ROLLBACK TO s1" then you get this:
> >
> >  id |  name
> > ----+--------
> >   1 | Alice
> >   7 | George
> >
> >
>
> --
> -Achilleus
>
>




[SQL] SQL timestamp to date cast

2005-01-20 Thread Andrei Bintintan



Hi,

I have the following query:

SELECT DISTINCT(nummer) FROM user_action
WHERE datetime::date='11/11/2004' AND id_action IN (5,6,9)

Now, datetime has the type timestamp. How can I create an index or rewrite
this query so that it runs faster? It really takes some time, usually
about 3-4 secs. user_action has about 300k rows, increasing by ~5-10k a
day.

explain analyze
SELECT DISTINCT(nummer) FROM user_action
WHERE datetime::date='11/11/2004' AND id_action IN (5,6,9)

Unique  (cost=18141.71..18143.72 rows=45 width=4) (actual time=418.122..418.340 rows=85 loops=1)
  ->  Sort  (cost=18141.71..18142.72 rows=402 width=4) (actual time=418.119..418.194 rows=192 loops=1)
        Sort Key: nummer
        ->  Seq Scan on user_action  (cost=0.00..18124.33 rows=402 width=4) (actual time=366.240..417.890 rows=192 loops=1)
              Filter: (((datetime)::date = '2004-11-11'::date) AND ((id_action = 5) OR (id_action = 6) OR (id_action = 9)))
Total runtime: 418.419 ms
 
Best regards.
Andy.


Re: [SQL] SQL timestamp to date cast

2005-01-20 Thread Andrei Bintintan
> Another possibility would be to create a functional index on datetime:
> CREATE INDEX user_action_date_idx ON user_action (date(datetime));

GREAT!!! I thought it was possible but I didn't know how to create such
indexes. :))

Thank you!!!
Andy.
- Original Message - 
From: "Michael Fuhr" <[EMAIL PROTECTED]>
To: "Andrei Bintintan" <[EMAIL PROTECTED]>
Cc: 
Sent: Thursday, January 20, 2005 11:33 AM
Subject: Re: [SQL] SQL timestamp to date cast


On Thu, Jan 20, 2005 at 10:52:50AM +0200, Andrei Bintintan wrote:
SELECT DISTINCT(nummer)
FROM user_action
WHERE datetime::date='11/11/2004' AND id_action IN (5,6,9)
Now, datetime has the type timestamp. How can I make an index or write
different this query so that it runs faster?
You could create an index on datetime and rewrite your queries:
CREATE INDEX user_action_datetime_idx ON user_action (datetime);
SELECT DISTINCT(nummer)
FROM user_action
WHERE datetime >= '11/11/2004' AND datetime < '11/12/2004'
 AND id_action IN (5,6,9);
Another possibility would be to create a functional index on datetime:
CREATE INDEX user_action_date_idx ON user_action (date(datetime));
SELECT DISTINCT(nummer)
FROM user_action
WHERE date(datetime) = '11/11/2004' AND id_action IN (5,6,9);
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/



[SQL] OFFSET impact on Performance???

2005-01-20 Thread Andrei Bintintan




Hi to all,

I have the following 2 examples. Now, regarding the offset: if it is small
(10) or big (>50000), what is the impact on the performance of the query??
I noticed that if I return more data (columns), or if I make more joins,
then the query runs even slower as the OFFSET gets bigger. How can I
somehow improve the performance of this?

Best regards, Andy.

explain analyze
SELECT o.id
FROM report r
INNER JOIN orders o ON o.id=r.id_order AND o.id_status=6
ORDER BY 1 LIMIT 10 OFFSET 10

Limit  (cost=44.37..88.75 rows=10 width=4) (actual time=0.160..0.275 rows=10 loops=1)
  ->  Merge Join  (cost=0.00..182150.17 rows=41049 width=4) (actual time=0.041..0.260 rows=20 loops=1)
        Merge Cond: ("outer".id_order = "inner".id)
        ->  Index Scan using report_id_order_idx on report r  (cost=0.00..157550.90 rows=42862 width=4) (actual time=0.018..0.075 rows=20 loops=1)
        ->  Index Scan using orders_pkey on orders o  (cost=0.00..24127.04 rows=42501 width=4) (actual time=0.013..0.078 rows=20 loops=1)
              Filter: (id_status = 6)
Total runtime: 0.373 ms

explain analyze
SELECT o.id
FROM report r
INNER JOIN orders o ON o.id=r.id_order AND o.id_status=6
ORDER BY 1 LIMIT 10 OFFSET 100000

Limit  (cost=31216.85..31216.85 rows=1 width=4) (actual time=1168.152..1168.152 rows=0 loops=1)
  ->  Sort  (cost=31114.23..31216.85 rows=41049 width=4) (actual time=1121.769..1152.246 rows=42693 loops=1)
        Sort Key: o.id
        ->  Hash Join  (cost=2329.99..27684.03 rows=41049 width=4) (actual time=441.879..925.498 rows=42693 loops=1)
              Hash Cond: ("outer".id_order = "inner".id)
              ->  Seq Scan on report r  (cost=0.00..23860.62 rows=42862 width=4) (actual time=38.634..366.035 rows=42864 loops=1)
              ->  Hash  (cost=2077.74..2077.74 rows=42501 width=4) (actual time=140.200..140.200 rows=0 loops=1)
                    ->  Seq Scan on orders o  (cost=0.00..2077.74 rows=42501 width=4) (actual time=0.059..96.890 rows=42693 loops=1)
                          Filter: (id_status = 6)
Total runtime: 1170.586 ms


Re: [SQL] OFFSET impact on Performance???

2005-01-20 Thread Andrei Bintintan
> If you're using this to provide "pages" of results, could you use a
> cursor?

What do you mean by that? A cursor?
Yes, I'm using this to provide "pages", but if I jump to the last pages it
goes very slow.

Andy.
- Original Message - 
From: "Richard Huxton" 
To: "Andrei Bintintan" <[EMAIL PROTECTED]>
Cc: ; 
Sent: Thursday, January 20, 2005 2:10 PM
Subject: Re: [SQL] OFFSET impact on Performance???


Andrei Bintintan wrote:
Hi to all,
I have the following 2 examples. Now, regarding the offset: if it
is small (10) or big (>50000), what is the impact on the performance of
the query?? I noticed that if I return more data (columns) or if I
make more joins, then the query runs even slower as the OFFSET gets
bigger. How can I somehow improve the performance of this?
There's really only one way to do an offset of 1000 and that's to fetch 
1000 rows and then some and discard the first 1000.

If you're using this to provide "pages" of results, could you use a 
cursor?

--
  Richard Huxton
  Archonet Ltd
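A minimal sketch of the cursor approach suggested above (inside a transaction; cursor, table and column names are illustrative):

```sql
BEGIN;

DECLARE pagecur CURSOR FOR
    SELECT select_list FROM big_table ORDER BY sort_col;

MOVE 1000 IN pagecur;   -- skip the first 1000 rows
FETCH 50 FROM pagecur;  -- returns rows 1001..1050

CLOSE pagecur;
COMMIT;
```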



Re: [PERFORM] [SQL] OFFSET impact on Performance???

2005-01-21 Thread Andrei Bintintan
Now I have read all the posts and I have some answers.
Yes, I have a web application.
I HAVE to know exactly how many pages there are, and I have to allow the
user to jump to a specific page (this is where I used LIMIT and OFFSET).
We have this feature and I cannot take it out.

> SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50
Now this solution looks very fast, but I cannot implement it, because I
cannot jump from page 1 to an arbitrary page, only to page 2, because with
this approach I only know where page 1 ended. And we have some really
complicated WHERE clauses, and about 10 tables are involved in the SQL
query.

About CURSORs, I have to read more about them, because this is the first
time I have heard of them.
I don't know whether temporary tables are a solution; really I don't think
so, as there are a lot of users working at the same time on the same page.

So... still DIGGING for solutions.
Andy.
- Original Message - 
From: "Ragnar Hafstað" <[EMAIL PROTECTED]>
To: 
Cc: "Andrei Bintintan" <[EMAIL PROTECTED]>; 
Sent: Thursday, January 20, 2005 9:23 PM
Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???


On Thu, 2005-01-20 at 19:12 +, Ragnar Hafstað wrote:
On Thu, 2005-01-20 at 11:59 -0500, Greg Stark wrote:
> The best way to do pages is not to use offset or cursors but to use an
> index. This only works if you can enumerate all the sort orders the
> application might be using and can have an index on each of them.
>
> To do this the query would look something like:
>
> SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50
>
> Then you take note of the last value used on a given page and if the 
> user
> selects "next" you pass that as the starting point for the next page.

this will only work unchanged if the index is unique. Imagine, for
example, that you have more than 50 rows with the same value of col.
One way to fix this is to use ORDER BY col,oid
(and a slightly more complex WHERE clause as well, of course).
gnari
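A sketch of Ragnar's tie-breaker variant (table and parameter names are illustrative, not from the thread):

```sql
-- First page: plain keyset query, with a unique column (oid here)
-- as a tie-breaker so the sort order is total.
SELECT * FROM tab
ORDER BY col, oid
LIMIT 50;

-- Next page: resume strictly after the last (col, oid) pair seen,
-- which stays correct even when many rows share the same col value.
SELECT * FROM tab
WHERE (col, oid) > (:last_col, :last_oid)
ORDER BY col, oid
LIMIT 50;
```

The row-value comparison `(col, oid) > (...)` is standard SQL and is accepted by PostgreSQL; it compares the pairs lexicographically, which matches the ORDER BY.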


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [PERFORM] [SQL] OFFSET impact on Performance???

2005-01-26 Thread Andrei Bintintan
The problem still stays open.
The thing is that I have about 20 - 30 clients that are using that SQL query 
where the offset and limit are involved. So I cannot create a temp table, 
because that means I would have to make a temp table for each session... 
which is a very bad idea. Cursors have much the same problem. In my application the 
WHERE conditions can be very different for each user (session).

The only solution that I see at the moment is to work on the query, or to 
write a more complex WHERE clause to limit the results output. So there is no 
replacement for OFFSET/LIMIT.

Best regards,
Andy.
- Original Message - 
From: "Greg Stark" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: "Richard Huxton" ; "Andrei Bintintan" 
<[EMAIL PROTECTED]>; ; 

Sent: Tuesday, January 25, 2005 8:28 PM
Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???


Alex Turner <[EMAIL PROTECTED]> writes:
I am also very interested in this very question. Is there any way to
declare a persistent cursor that remains open between pg sessions?
This would be better than a temp table because you would not have to
do the initial select and insert into a fresh table and incur those IO
costs, which are often very heavy, and the reason why one would want
to use a cursor.
TANSTAAFL. How would such a persistent cursor be implemented if not by
building a temporary table somewhere behind the scenes?
There could be some advantage if the data were stored in a temporary table
marked as not having to be WAL logged. Instead it could be automatically
cleared on every database start.
--
greg
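As an aside: Greg's idea of a holding table that is not WAL-logged later landed in PostgreSQL as UNLOGGED tables (9.1+). A sketch of caching a heavy result set that way (table and column names are illustrative, not from the thread):

```sql
-- UNLOGGED skips WAL: much cheaper writes, at the cost that the
-- table is truncated after a crash -- acceptable for a disposable
-- result cache that can simply be rebuilt.
CREATE UNLOGGED TABLE results_cache AS
    SELECT row_number() OVER (ORDER BY t.col) AS rn, t.*
    FROM tab t
    WHERE t.some_condition;

-- Paging then becomes a cheap indexed range scan:
CREATE INDEX ON results_cache (rn);
SELECT * FROM results_cache WHERE rn BETWEEN 101 AND 150;
```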




[SQL] Constraint on 2 column possible?

2005-01-27 Thread Andrei Bintintan



Hi, 
 
I have a table:

CREATE TABLE werke1(
    id SERIAL,
    id_hr int4 NOT NULL,
    id_wk int4 NOT NULL
);

CREATE TABLE contact(
    id SERIAL,
    type varchar(20)
);
 
 
Now id_hr and id_wk both reference the same 
table, contact(id). In the contact table I have another column called type. 

How can I write a constraint that checks that id_hr 
references contact(id) and that the referenced contact row has type='t1', 
and that id_wk references contact(id) and that the referenced row has 
type='t2'? 
 
More explicitly: id_hr points to an id in 
contact, and that row in contact must have type='t1'. The same for 
id_wk, just with the other type.
 
I can write:

CREATE TABLE werke1(
    id SERIAL,
    id_hr int4 NOT NULL references contact(id),
    id_wk int4 NOT NULL references contact(id)
);

but how do I also check the type column?
 
Best regards,
Andy.
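One common answer to this kind of question (a sketch, not a reply from the thread): add a redundant UNIQUE(id, type) constraint on contact, then use composite foreign keys together with CHECK constraints that pin each type column to a constant:

```sql
-- Make (id, type) referenceable by a composite foreign key.
ALTER TABLE contact ADD CONSTRAINT contact_id_type_key UNIQUE (id, type);

CREATE TABLE werke1(
    id      SERIAL,
    id_hr   int4 NOT NULL,
    id_wk   int4 NOT NULL,
    -- The type columns are forced to constants, so each composite
    -- FK can only ever match a contact row of the required type.
    hr_type varchar(20) NOT NULL DEFAULT 't1' CHECK (hr_type = 't1'),
    wk_type varchar(20) NOT NULL DEFAULT 't2' CHECK (wk_type = 't2'),
    FOREIGN KEY (id_hr, hr_type) REFERENCES contact (id, type),
    FOREIGN KEY (id_wk, wk_type) REFERENCES contact (id, type)
);
```

With the defaults in place, inserts into werke1 need not mention hr_type or wk_type at all; the database rejects any id_hr that does not point at a type='t1' contact, and likewise for id_wk and 't2'.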


[SQL] Create trigger for auto update function

2005-07-18 Thread Andrei Bintintan



Hi to all, 
 
I have a table: 

create table hoy(
    id serial,
    pass varchar(40),
    pass_md5 varchar(40)
);
 
Now, I want to write a trigger function that 
automatically updates pass_md5 with the md5 of pass. 

 
I tried this:
 
CREATE FUNCTION update_pass(integer) RETURNS integer AS $$
    UPDATE hoy SET pass_md5=md5(pass) WHERE id=$1;
    SELECT 1;
$$ LANGUAGE SQL;
 
and 
 
CREATE TRIGGER triger_users_pass_md5
    AFTER INSERT OR UPDATE ON hoy
    EXECUTE PROCEDURE update_pass(integer);
 
 
But it does not work. 
When I create the trigger, it says that the function does not exist. 
 
I also tried with:
 
CREATE OR REPLACE FUNCTION user2(integer) RETURNS TRIGGER AS '
BEGIN
    UPDATE users SET pass_md5=md5(pass) WHERE id=$1;
    RETURN NULL;
END
' LANGUAGE plpgsql;

...with the same result.
 
 
I need some help.
 
Andy.
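For reference, a working version (a sketch, not a reply from the thread): in PostgreSQL a trigger function is declared with no arguments and must return type trigger, and a BEFORE ROW trigger can set the column on NEW directly instead of issuing a recursive UPDATE:

```sql
-- Trigger functions take no declared arguments; row data arrives
-- via the implicit NEW/OLD records.
CREATE OR REPLACE FUNCTION set_pass_md5() RETURNS trigger AS '
BEGIN
    NEW.pass_md5 := md5(NEW.pass);
    RETURN NEW;  -- the modified row is what gets stored
END
' LANGUAGE plpgsql;

CREATE TRIGGER trigger_hoy_pass_md5
    BEFORE INSERT OR UPDATE ON hoy
    FOR EACH ROW EXECUTE PROCEDURE set_pass_md5();
```

Because the assignment happens in a BEFORE trigger, no separate UPDATE statement is needed, which also avoids the trigger re-firing on its own update.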