Well, you could add a serial column. I'll tell you how, but I haven't tested the code, so be sure to check it! And using BEGIN and COMMIT/ROLLBACK to delimit transactions would not be a bad idea at all ;-)
To add a serial column, just write:
--create new serial field
ALTER TABLE md_customer ADD COLUMN id SERIAL;
If you have a row every 15 seconds, the answer is quite easy:
SELECT
A1.date
FROM
activity A1
LEFT JOIN activity A2 ON (A2.date=A1.date-'15 secs'::interval)
WHERE
A1.state<>A2.state OR A2.state IS NULL
ORDER BY 1
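A runnable sketch of the same self-join idea, using Python's sqlite3 with an invented `activity` table; SQLite's `datetime()` modifier stands in for PostgreSQL's `'15 secs'::interval` arithmetic:

```python
import sqlite3

# Hypothetical "activity" table with one row every 15 seconds; the names
# mirror the query above but the data is made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE activity (date TEXT, state TEXT)")
conn.executemany("INSERT INTO activity VALUES (?, ?)", [
    ("2004-01-01 00:00:00", "up"),
    ("2004-01-01 00:00:15", "up"),
    ("2004-01-01 00:00:30", "down"),   # state change here
    ("2004-01-01 00:00:45", "down"),
])

# Same idea as the PostgreSQL query: join each row to the row 15 seconds
# earlier, then keep rows whose state differs (or that have no predecessor).
rows = conn.execute("""
    SELECT A1.date
    FROM activity A1
    LEFT JOIN activity A2 ON A2.date = datetime(A1.date, '-15 seconds')
    WHERE A1.state <> A2.state OR A2.state IS NULL
    ORDER BY 1
""").fetchall()
print(rows)
```

The first row always comes back (no predecessor), plus one row per state change.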
Now if you don't have a row every 15 seconds, the answer is a bit more involved;
maybe something like this:
CREATE OR REPLACE FUNCTION calendar (DATE, DATE) RETURNS SETOF DATE
LANGUAGE 'plpgsql' AS '
DECLARE
    v_from ALIAS FOR $1;
    v_to ALIAS FOR $2;
    v_current DATE DEFAULT v_from;
BEGIN
    WHILE (v_current <= v_to) LOOP
        RETURN NEXT v_current;
        v_current := v_current + 1;
    END LOOP;
    RETURN;
END;
';
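The set-returning function above is just a date generator; the same loop can be sketched as a Python generator (pure stdlib, no database needed):

```python
from datetime import date, timedelta

def calendar(v_from, v_to):
    """Yield every date from v_from to v_to inclusive, mirroring the
    plpgsql calendar() set-returning function above."""
    v_current = v_from
    while v_current <= v_to:
        yield v_current
        v_current += timedelta(days=1)

# Example range chosen arbitrarily; spans a month boundary.
days = list(calendar(date(2004, 1, 30), date(2004, 2, 2)))
```

Joining against such a generated calendar is what fills the gaps when the activity table does not have a row for every interval.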
Maybe you should tweak the cpu_index_tuple_cost parameter instead of
disabling sequential scans. The default value is 0.001; you should
change it to a lower value (0.0005 or something).
Joel Fradkin wrote:
I tried the SET ENABLE_SEQSCAN=FALSE;
And the result took 29 secs instead of 117.
Aft
I've tested in a relation of mine, with about 20 attributes, and here
are the results:
test=# select count(*) from gestionestareas;
 count
--------
 447681
(1 row)
test=# explain analyze select * from gestionestareas where agrupable;
QU
Updating 40,000 records should take no longer than a couple of minutes.
I think you should optimise your query before going any further.
You have an inner SELECT statement that executes before anything else. It
joins EVERY row in your table (1,000,000+) with at most 3 other rows in
the same table, so
I understand data_sys is the average value for the 3 days, from the day before to the day after.
This should do what you want, in one pass. Check the average function in
the subselect. If what you want is to divide by 3 no matter how many
records were found, enable the commented line.
…modification problem if you could keep the transaction open, but I don't know if it's applicable in your case.
Andras Kutrovics wrote:
Franco Bruno Borghesi wrote:
Hi!
Sorry for being late with the answer, I was busy at one of our customers ;)
Wouldn't it be easier using OFFSET & LIMIT? You always select from the table with an itemkey, location ORDER BY clause. You save the current offset between requests, and for every request you re-execute the query with a different offset.
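A minimal sketch of that OFFSET/LIMIT paging idea, using Python's sqlite3; the `items` table and its columns are invented stand-ins for the itemkey/location table in the post:

```python
import sqlite3

# Invented table standing in for the poster's itemkey/location table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (itemkey INTEGER, location TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, "loc%d" % (i % 3)) for i in range(10)])

def fetch_page(offset, page_size=4):
    # Re-execute the same ordered query on every request, varying only
    # the offset that was saved between requests.
    return conn.execute(
        "SELECT itemkey, location FROM items "
        "ORDER BY itemkey, location LIMIT ? OFFSET ?",
        (page_size, offset)).fetchall()

page1 = fetch_page(0)
page2 = fetch_page(4)
```

The stable ORDER BY is what makes the saved offset meaningful between requests; without it the pages could overlap.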
If you still want to go with what you already have, you
If I understand well, you want the highest cmup for each partno, that is max(cmup) grouped by partno (only).
You can achieve this with a subselect, and then you join the results with the query you already have:
SELECT T.partno, T.status, TMP.max_cmup_for_partno, max(T.cmup) AS max_cmup, sum
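The max-per-group-then-join pattern can be sketched with Python's sqlite3; the `parts` table and its data are invented for illustration:

```python
import sqlite3

# Hypothetical "parts" table standing in for the poster's table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (partno TEXT, status TEXT, cmup REAL)")
conn.executemany("INSERT INTO parts VALUES (?, ?, ?)", [
    ("A", "ok",  1.0),
    ("A", "old", 3.5),
    ("B", "ok",  2.0),
])

# Subselect computes max(cmup) per partno; the outer query joins it back
# so every row carries the maximum for its part.
rows = conn.execute("""
    SELECT T.partno, T.status, TMP.max_cmup_for_partno
    FROM parts T
    INNER JOIN (
        SELECT partno, max(cmup) AS max_cmup_for_partno
        FROM parts GROUP BY partno
    ) TMP ON TMP.partno = T.partno
    ORDER BY T.partno, T.status
""").fetchall()
```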
I asked if derived tables use primary key indexes generated in the base tables that they inherit from.
Franco Bruno Borghesi replied:
> [the derived table] is not inheriting any indexes, nor the
> primary key constraint.
OK, thanks! I had come to the conclusion that it was not using
Table T is not inheriting any indexes, nor the primary key constraint. That means that the search is going to use an index scan on table B and a sequential scan on table T (unless of course you add a primary key constraint or an index on table T).
You can check these things doing:
->SET en
I remember reading somewhere that it was possible to execute system commands from plperl... don't know if it's the best way, but you could do an exec of pg_dump from inside a plperl procedure.
On Fri, 2004-06-11 at 01:36, William Anthony Lim wrote:
is it possible to dump within procedural language?
I see that attribute project is defined as integer in library, and as varchar(8) in clone. I suspect that's what's causing the problem and forcing a seq scan on library.
On Thu, 2004-03-04 at 14:56, Charles Hauser wrote:
All,
I have the following query which is running quite slow on our server
... WHERE field1 LIKE '%' || field2 || '%'
or
... WHERE position(field2 in field1)>0
On Wed, 2003-11-12 at 11:07, Guy Korland wrote:
Hi,
How can I compare two fields if one is a substring of the other?
Something like:
... WHERE StringA like '%' + StringB + '%';
Thanks,
Guy Korland
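The `'%' || field2 || '%'` concatenation works in SQLite too, so the pattern can be checked with Python's sqlite3; table and data are invented:

```python
import sqlite3

# Minimal check of the "field1 LIKE '%' || field2 || '%'" pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field1 TEXT, field2 TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    ("hello world", "world"),   # field2 IS a substring of field1
    ("hello world", "mars"),    # field2 is NOT
])

# || builds the pattern per row, so each row is matched against its own field2.
rows = conn.execute(
    "SELECT field2 FROM t WHERE field1 LIKE '%' || field2 || '%'"
).fetchall()
```

The `+` from the original question is string concatenation in some databases but not in standard SQL; `||` is the portable spelling.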
--
Dropping the whole database just for a column change?
On Tue, 2003-10-28 at 10:00, Theodore Petrosky wrote:
why not just
pg_dump dbname > olddb.out
pico olddb.out
edit the section that defines the table
save and exit
dropdb dbname
createdb dbname
psql dbname < olddb.out
no fuss no muss..
SELECT * FROM pg_stat_activity;
On Fri, 2003-10-10 at 09:48, Chris Faulkner wrote:
Hello
Can anyone tell me - is there a system table or view that I can query to
show all current sessions ?
Thanks
Chris
would
SELECT
groupid, activity_date,
sum(TMP.Attended) AS Attended,
sum(TMP.Unexcused) AS Unexcused,
sum(TMP.Absent) AS Absent,
sum(TMP.Called) AS Called
FROM (
SELECT groupid, activity_date,
count(CASE activity_subcode WHEN 100 THEN 1 ELSE NULL END) AS Attended,
count(CASE activity
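The `count(CASE ...)` trick above pivots one subcode column into several count columns: each CASE yields 1 for the matching subcode and NULL otherwise, and `count()` skips NULLs. A sketch with Python's sqlite3 and an invented activity table (the subcode 200 for "Unexcused" is an assumption):

```python
import sqlite3

# Invented data; subcode 100 = Attended, 200 = Unexcused (assumed codes).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE activity (groupid INTEGER, activity_subcode INTEGER)")
conn.executemany("INSERT INTO activity VALUES (?, ?)", [
    (1, 100), (1, 100), (1, 200), (2, 100),
])

rows = conn.execute("""
    SELECT groupid,
        count(CASE activity_subcode WHEN 100 THEN 1 ELSE NULL END) AS Attended,
        count(CASE activity_subcode WHEN 200 THEN 1 ELSE NULL END) AS Unexcused
    FROM activity
    GROUP BY groupid
    ORDER BY groupid
""").fetchall()
```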
These kinds of conditions are left or right joins, depending on which side of the equal sign you have the (+).
Something like this:
select ...
from
    auswahlkatalog k
    INNER JOIN anspruchkorrektur a ON (k.awk_id = a.awk_id)
    LEFT JOIN beteiligter b ON (b.bet_id = a.bet_idemp)
    RIGHT JOIN ...
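The point of the translation is that Oracle's `b.bet_id (+) = a.bet_idemp` keeps every row of `a` even when `b` has no match, exactly what an ANSI LEFT JOIN expresses. A sketch with Python's sqlite3, using shortened stand-in tables:

```python
import sqlite3

# Stand-ins for the a (anspruchkorrektur) and b (beteiligter) tables above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (awk_id INTEGER, bet_idemp INTEGER)")
conn.execute("CREATE TABLE b (bet_id INTEGER, name TEXT)")
conn.execute("INSERT INTO a VALUES (1, 10)")
conn.execute("INSERT INTO a VALUES (2, 99)")   # no matching b row
conn.execute("INSERT INTO b VALUES (10, 'x')")

# LEFT JOIN: row (2, 99) survives with NULL for the missing b columns,
# which is what the Oracle (+) on the b side means.
rows = conn.execute("""
    SELECT a.awk_id, b.name
    FROM a LEFT JOIN b ON b.bet_id = a.bet_idemp
    ORDER BY a.awk_id
""").fetchall()
```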
I *guess* this query does the same as yours (please verify).
SELECT
L.*
FROM
lead L
INNER JOIN purchase P ON (L.id=P.lead_id)
INNER JOIN affiliate_lockout A ON
(L.affiliate_id=A.affiliate_locked_id)
INNER JOIN member_exclusion M ON
(P.member_id=M.member_id_t
If I don't get it wrong, you wanna know how many threads you have for each forum, and how many posts you have for each thread... don't you?
maybe something like
SELECT
F.id AS forumId,
( SELECT count(id) FROM thread WHERE forumId=F.id ) AS threadCount,
T.id AS threadId,
( SELECT count(id) FROM post WHERE threadId=T.id ) AS postCount
FROM forum F
INNER JOIN thread T ON (T.forumId=F.id);
>21101) AND
(M.member_id IS NULL) AND
(A.member_id IS NULL)
On Tue, 2003-08-12 at 22:29, Stephan Szabo wrote:
On Tue, 12 Aug 2003, Franco Bruno Borghesi wrote:
> I *guess* this query does the same as yours (please verify).
It does not in general unfortunately. :( I see two possi
try this
DELETE FROM aap WHERE id NOT IN (
SELECT max(id)
FROM aap
GROUP BY keyword
);
>
>
> Hi,
>
> I have a table with duplicates and trouble with my SQL.
> I'd like to keep a single record and remove older duplicates.
> For example below of the 6 records I'd like to keep records
> 4 a
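The DELETE above can be run end to end with Python's sqlite3; the `aap` table and data are invented, keeping for each keyword only the row with the highest id:

```python
import sqlite3

# Invented duplicates: "cat" twice, "dog" three times, "bird" once.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE aap (id INTEGER, keyword TEXT)")
conn.executemany("INSERT INTO aap VALUES (?, ?)", [
    (1, "cat"), (2, "cat"), (3, "dog"), (4, "dog"), (5, "dog"), (6, "bird"),
])

# Keep max(id) per keyword, delete everything else.
conn.execute("""
    DELETE FROM aap WHERE id NOT IN (
        SELECT max(id) FROM aap GROUP BY keyword
    )
""")
survivors = conn.execute("SELECT id, keyword FROM aap ORDER BY id").fetchall()
```

Using `min(id)` instead would keep the oldest duplicate rather than the newest.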
If your concern is speed, the thing here is that you will have as many records
as there are in "mytable", most of them (I think) with NULLs for alias1, alias2,
alias3 and alias4. This way, there is no condition to filter any record, so
PostgreSQL will do a sequential scan over the whole table.
If
if you're allowed to change the resultset structure, you could do:
SELECT
event,
level,
count(*)
FROM
baz
GROUP BY
event,
level;
 event | level | count
-------+-------+-------
 x     |     1 |     1
 x     |     2 |     1
 x     |     3 |     1
 y     |     2 |     1
 y     |
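That GROUP BY can be reproduced with Python's sqlite3 on an invented `baz` table with one row per (event, level) pair:

```python
import sqlite3

# One row per (event, level) pair, matching the result table shown above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE baz (event TEXT, level INTEGER)")
conn.executemany("INSERT INTO baz VALUES (?, ?)",
                 [("x", 1), ("x", 2), ("x", 3), ("y", 2), ("y", 3)])

# ORDER BY added for a deterministic result; GROUP BY alone does not
# guarantee output order.
rows = conn.execute("""
    SELECT event, level, count(*)
    FROM baz
    GROUP BY event, level
    ORDER BY event, level
""").fetchall()
```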
As far as I know, you always work with a ResultSet.
If you know your stored procedures will always return an Integer and you don't
wanna deal with the executeQuery and stuff every time, you could create a
class with methods explicitly for accessing your stored procedures, for
example:
assuming
Yes, I was trying to do something like that, but it always breaks in the same
place. First I thought it was because of the way I was assigning values
to the fields of my row, but now I'm beginning to think that the reason is
the way I pass the row to f2.
Here is the error:
f
Here is a full example of a java program showing the data from a set returning
function:
-
--IN YOUR DATABASE
CREATE TABLE people (name TEXT);
INSERT INTO people VALUES ('john');
INSERT INTO people VALUES ('peter');
INSERT INTO people VALUES ('joe');
CREATE FUNCTION getPeople() RETURNS SETOF people AS 'SELECT * FROM people;' LANGUAGE 'sql';
Thanks Tom, I applied the patch and it works perfectly now.
Thanks to you all.
On Thursday 13 March 2003 14:02, Tom Lane wrote:
> Manfred Koizar <[EMAIL PROTECTED]> writes:
> > ISTM you have found a Postgres 7.3 bug.
>
> Yeah. Actually, the planner bug has been there a long time, but it was
> only
…don't want the user to, let's say, type his ordercode and by mistake type the last char, say 1 less than his own, and access someone else's form and be completely lost. With random that can still happen, but it is so much less likely that it will do.
Thank you
Quoting:
> Bernardo de Barros Franco wr
Thanks in advance for any help.
Quoting:
> On Sun, Apr 15, 2001 at 10:58:40PM -0300, Bernardo de Barros Franco wrote:
>
> > I wanted to index a table by a random key. Exemplifying, when an insert
> > is made, the id value is automatically filled with a random number between
> &g
have
2 different indexes, the table index and the sale code that would be used
for all queries.
But my only question would be, in the example quoted would id be really the
table index and is it unique?
Thank you
Quoting:
>Bernardo de Barros Franco writes:
>
> > I wanted to index
Hello, I was wondering: if no one can help me, maybe someone could at least
give me some directions where to look for info or where to ask:
I wanted to index a table by a random key. Exemplifying, when an insert is
made, the id value is automatically filled with a random number between
1 and 9