Re: [GENERAL] Terms advice.

2010-11-26 Thread Craig Ringer

On 11/26/2010 09:37 PM, Dmitriy Igrishin wrote:

Hey all,

I am working on a C++ library for working with PostgreSQL.


Are you aware of libpqxx?

Is your intent to implement the protocol from scratch in C++ rather than 
wrap libpq? If so, why?


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help on explain analyze

2010-11-26 Thread Marc Mamin
Hello,

Did you also try defining partial indexes?

e.g.
CREATE INDEX xx ON task_definitions (ctrlid) WHERE (name::text = 'UseWSData');
CREATE INDEX yy ON ctrl_definitions (ctrlid) WHERE (name::text = 'IrrPeriodStart');
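
Whether the planner actually picks up a partial index can be checked directly with EXPLAIN; the query's WHERE clause must imply the index predicate. A sketch using the index and column names from the suggestion above (not the full view):

```sql
CREATE INDEX xx ON task_definitions (ctrlid) WHERE (name::text = 'UseWSData');

-- The planner can only use xx when the query's WHERE clause implies
-- the index predicate name = 'UseWSData':
EXPLAIN SELECT *
  FROM task_definitions
 WHERE name = 'UseWSData' AND ctrlid = 401;
-- An "Index Scan using xx" node (rather than a Seq Scan) confirms it is used.
```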

HTH,

Marc Mamin

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Leif Jensen
Sent: Freitag, 26. November 2010 06:04
To: pgsql-general
Subject: [GENERAL] Help on explain analyze

   Hi guys,

   I have a rather complex view that sometimes takes an awfully long time to 
execute. I have tried to do an 'explain analyze' on it. My intention was to 
optimize the tables involved by creating some indexes to help the lookup. I 
looked for the "Seq Scan"s and created what I thought were appropriate indexes. 
However, in most cases the search got even slower. I have "expanded" the view 
as follows:

cims=# explain analyze select * from (SELECT t.id AS oid, d.id AS devid, 
d.description AS devname, cd.value AS period, upper(dt.typename::text) AS 
devtype, (date_part('epoch'::text, timezone('GMT'::text, t.firstrun))::bigint - 
(z.timezone::integer - 
CASE
WHEN z.daylightsaving <> 'Y'::bpchar THEN 0
ELSE 
CASE
WHEN cy.dl_start < now() AND now() < cy.dl_finish THEN 1
ELSE 0
END
END) * 3600) % 86400::bigint AS firstrun, t."interval", t.id AS tid, 
ti.id AS tiid, t.runstatus, t.last, tt.handler, td.value AS ctrlid, td.string 
AS alarm, z.timezone AS real_timezone, cy.dl_start < now() AND now() < 
cy.dl_finish AS daylight, z.timezone::integer - 
CASE
WHEN z.daylightsaving <> 'Y'::bpchar THEN 0
ELSE 
CASE
WHEN cy.dl_start < now() AND now() < cy.dl_finish THEN 1
ELSE 0
END
END AS timezone
   FROM device d
   LEFT JOIN task_info ti ON ti.ctrlid = d.id
   LEFT JOIN task t ON t.id = ti.taskid
   LEFT JOIN ctrl_definitions cd ON d.id = cd.ctrlid AND cd.name::text = 
'IrrPeriodStart'::text, task_type tt, task_definitions td, devtype dt, 
ctrl_definitions cd2, zip z, county cy
  WHERE td.name = 'UseWSData'::text AND ti.id = td.taskinfoid AND d.devtypeid = 
dt.id AND tt.id = t.tasktypeid AND (tt.handler = 'modthcswi.so'::text OR 
tt.handler = 'modthcswb.so'::text) AND btrim(cd2.string) = z.zip::text AND 
cd2.ctrlid = td.value AND cd2.name::text = 'ZIP'::text AND z.countyfips = 
cy.countyfips AND z.state = cy.state AND date_part('year'::text, now()) = 
date_part('year'::text, cy.dl_start)) AS wstaskdist
  WHERE wstaskdist.ctrlid = 401 AND CAST( alarm AS boolean ) = 't';

  The view is actually the sub-SELECT, which I have named 'wstaskdist', and my 
search criterion is the bottom WHERE. The result of the EXPLAIN ANALYZE is:


QUERY PLAN  
  
--
 Nested Loop  (cost=284.88..9767.82 rows=1 width=109) (actual 
time=2515.318..40073.432 rows=10 loops=1)
   ->  Nested Loop  (cost=284.88..9745.05 rows=70 width=102) (actual 
time=2515.184..40071.697 rows=10 loops=1)
 ->  Nested Loop  (cost=229.56..5692.38 rows=1 width=88) (actual 
time=2512.044..39401.729 rows=10 loops=1)
   ->  Nested Loop  (cost=229.56..5692.07 rows=1 width=80) (actual 
time=2511.999..39401.291 rows=10 loops=1)
 ->  Nested Loop  (cost=229.56..5691.76 rows=1 width=77) 
(actual time=2511.943..39400.680 rows=10 loops=1)
   Join Filter: (ti.id = td.taskinfoid)
   ->  Seq Scan on task_definitions td  
(cost=0.00..13.68 rows=1 width=22) (actual time=0.204..0.322 rows=10 loops=1)
 Filter: ((name = 'UseWSData'::text) AND (value 
= 401) AND (string)::boolean)
   ->  Hash Left Join  (cost=229.56..5672.72 rows=429 
width=59) (actual time=7.159..3939.536 rows=429 loops=10)
 Hash Cond: (d.id = cd.ctrlid)
 ->  Nested Loop  (cost=24.66..5442.80 rows=429 
width=55) (actual time=6.797..3937.349 rows=429 loops=10)
   ->  Hash Join  (cost=16.65..282.84 
rows=429 width=38) (actual time=0.078..6.587 rows=429 loops=10)
 Hash Cond: (t.id = ti.taskid)
 ->  Seq Scan on task t  
(cost=0.00..260.29 rows=429 width=30) (actual time=0.022..5.089 rows=429 
loops=10)
 ->  Hash  (cost=11.29..11.29 
rows=429 width=12) (actual time=0.514..0.514 rows=429 loops=1)
   ->  Seq Scan on task_info ti 
 (cost=0.00..11.29 rows=429 

Re: [GENERAL] number of not null arguments

2010-11-26 Thread Pavel Stehule
Hello

This function doesn't exist, but you can write one yourself (PostgreSQL 8.4 or later):

 create or replace function notnull_count(variadic anyarray) returns
int as $$select count(x)::int from unnest($1) g(x)$$ language sql;

It works only for scalar types:

pavel=# SELECT notnull_count(1, 1, NULL, NULL); notnull_count
───
 2
(1 row)

It doesn't work for arrays, but you can modify the query a little:

pavel=#  SELECT notnull_count(array_upper(ARRAY[1,2,3],1),
array_upper(ARRAY[10,20,30],1), NULL, array_upper(ARRAY[NULL],1));
 notnull_count
───
 3
(1 row)

The next, more general, solution is a custom function in C; it can be very simple.
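
As an aside for readers on newer releases: PostgreSQL 9.6 added a built-in num_nonnulls() that handles both cases directly, since each array argument counts as a single (non-null) argument:

```sql
SELECT num_nonnulls(1, 1, NULL, NULL);
-- 2

SELECT num_nonnulls(ARRAY[1,2,3], ARRAY[10,20,30], NULL, ARRAY[NULL]);
-- 3 (a non-null array counts as one argument, even if its elements are NULL)
```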

Regards

Pavel Stehule


2010/11/26 Murat Kabilov :
> Hello,
> Is there a function which returns number of not null arguments?
> SELECT notnull_count(1, 1, NULL, NULL)
>  notnull_count
> ---
>              2
> SELECT notnull_count(ARRAY[1,2,3], ARRAY[10,20,30], NULL, ARRAY[NULL])
>  notnull_count
> ---
>              3
> Thanks
> --
> Murat Kabilov



[GENERAL] Help on explain analyze

2010-11-26 Thread Leif Jensen
   Hi guys,

   I have a rather complex view that sometimes takes an awfully long time to 
execute. I have tried to do an 'explain analyze' on it. My intention was to 
optimize the tables involved by creating some indexes to help the lookup. I 
looked for the "Seq Scan"s and created what I thought were appropriate indexes. 
However, in most cases the search got even slower. I have "expanded" the view 
as follows:

cims=# explain analyze select * from (SELECT t.id AS oid, d.id AS devid, 
d.description AS devname, cd.value AS period, upper(dt.typename::text) AS 
devtype, (date_part('epoch'::text, timezone('GMT'::text, t.firstrun))::bigint - 
(z.timezone::integer - 
CASE
WHEN z.daylightsaving <> 'Y'::bpchar THEN 0
ELSE 
CASE
WHEN cy.dl_start < now() AND now() < cy.dl_finish THEN 1
ELSE 0
END
END) * 3600) % 86400::bigint AS firstrun, t."interval", t.id AS tid, 
ti.id AS tiid, t.runstatus, t.last, tt.handler, td.value AS ctrlid, td.string 
AS alarm, z.timezone AS real_timezone, cy.dl_start < now() AND now() < 
cy.dl_finish AS daylight, z.timezone::integer - 
CASE
WHEN z.daylightsaving <> 'Y'::bpchar THEN 0
ELSE 
CASE
WHEN cy.dl_start < now() AND now() < cy.dl_finish THEN 1
ELSE 0
END
END AS timezone
   FROM device d
   LEFT JOIN task_info ti ON ti.ctrlid = d.id
   LEFT JOIN task t ON t.id = ti.taskid
   LEFT JOIN ctrl_definitions cd ON d.id = cd.ctrlid AND cd.name::text = 
'IrrPeriodStart'::text, task_type tt, task_definitions td, devtype dt, 
ctrl_definitions cd2, zip z, county cy
  WHERE td.name = 'UseWSData'::text AND ti.id = td.taskinfoid AND d.devtypeid = 
dt.id AND tt.id = t.tasktypeid AND (tt.handler = 'modthcswi.so'::text OR 
tt.handler = 'modthcswb.so'::text) AND btrim(cd2.string) = z.zip::text AND 
cd2.ctrlid = td.value AND cd2.name::text = 'ZIP'::text AND z.countyfips = 
cy.countyfips AND z.state = cy.state AND date_part('year'::text, now()) = 
date_part('year'::text, cy.dl_start)) AS wstaskdist
  WHERE wstaskdist.ctrlid = 401 AND CAST( alarm AS boolean ) = 't';

  The view is actually the sub-SELECT, which I have named 'wstaskdist', and my 
search criterion is the bottom WHERE. The result of the EXPLAIN ANALYZE is:


QUERY PLAN  
  
--
 Nested Loop  (cost=284.88..9767.82 rows=1 width=109) (actual 
time=2515.318..40073.432 rows=10 loops=1)
   ->  Nested Loop  (cost=284.88..9745.05 rows=70 width=102) (actual 
time=2515.184..40071.697 rows=10 loops=1)
 ->  Nested Loop  (cost=229.56..5692.38 rows=1 width=88) (actual 
time=2512.044..39401.729 rows=10 loops=1)
   ->  Nested Loop  (cost=229.56..5692.07 rows=1 width=80) (actual 
time=2511.999..39401.291 rows=10 loops=1)
 ->  Nested Loop  (cost=229.56..5691.76 rows=1 width=77) 
(actual time=2511.943..39400.680 rows=10 loops=1)
   Join Filter: (ti.id = td.taskinfoid)
   ->  Seq Scan on task_definitions td  
(cost=0.00..13.68 rows=1 width=22) (actual time=0.204..0.322 rows=10 loops=1)
 Filter: ((name = 'UseWSData'::text) AND (value 
= 401) AND (string)::boolean)
   ->  Hash Left Join  (cost=229.56..5672.72 rows=429 
width=59) (actual time=7.159..3939.536 rows=429 loops=10)
 Hash Cond: (d.id = cd.ctrlid)
 ->  Nested Loop  (cost=24.66..5442.80 rows=429 
width=55) (actual time=6.797..3937.349 rows=429 loops=10)
   ->  Hash Join  (cost=16.65..282.84 
rows=429 width=38) (actual time=0.078..6.587 rows=429 loops=10)
 Hash Cond: (t.id = ti.taskid)
 ->  Seq Scan on task t  
(cost=0.00..260.29 rows=429 width=30) (actual time=0.022..5.089 rows=429 
loops=10)
 ->  Hash  (cost=11.29..11.29 
rows=429 width=12) (actual time=0.514..0.514 rows=429 loops=1)
   ->  Seq Scan on task_info ti 
 (cost=0.00..11.29 rows=429 width=12) (actual time=0.020..0.302 rows=429 
loops=1)
   ->  Bitmap Heap Scan on device d  
(cost=8.01..12.02 rows=1 width=21) (actual time=9.145..9.146 rows=1 loops=4290)
 Recheck Cond: (d.id = ti.ctrlid)
 ->  Bitmap Index Scan on pk_device 
 (cost=0.00..8.01 rows=1 width=0) (actual time=0.463..0.463 rows=1569 
loops=4290)

[GENERAL] number of not null arguments

2010-11-26 Thread Murat Kabilov
Hello,

Is there a function which returns number of not null arguments?

SELECT notnull_count(1, 1, NULL, NULL)
 notnull_count
---
 2

SELECT notnull_count(ARRAY[1,2,3], ARRAY[10,20,30], NULL, ARRAY[NULL])
 notnull_count
---
 3

Thanks

--
Murat Kabilov


Re: [GENERAL] plpyhton

2010-11-26 Thread Martin Gainty

It depends on the configuration implemented to enable caching capability for 
each type. *Usually* the fastest access can be achieved by implementing a 
procedure which loads into the procedure cache, allowing subsequent accesses 
to the procedure 'in memory' (vs. disk I/O).


Martin Gainty 
__ 
Disclaimer and confidentiality note

This message is confidential. If you are not the intended recipient, we kindly 
ask you to notify the sender. Any unauthorized forwarding or copying is 
prohibited. This message serves only the exchange of information and has no 
legally binding effect. Because e-mail can easily be manipulated, we cannot 
accept any liability for its content.



 

> Date: Fri, 26 Nov 2010 09:04:42 -0700
> From: eggyk...@gmail.com
> To: shreeseva.learn...@gmail.com
> CC: pgsql-ad...@postgresql.org; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] plpyhton
> 
> On Fri, Nov 26, 2010 at 05:28:52PM +0530, c k wrote:
> > Thanks for your reply.
> > But if a database has 100+ connections, then doesn't loading such an
> > interpreter for each one consume more memory and require more CPU? Do all
> > PL languages behave in the same fashion?
> 
> If there are lots of connections, and each calls a plpython function (for
> example), then each will load a python interpreter, and certainly that could
> add up to serious memory usage. I can't speak for *every* PL; C functions
> don't load any special interpreter, for instance, and I don't think there's
> anything special you have to load to run SQL functions, beyond what gets
> loaded anyway.
> 
> If you have problems with hundreds of connections using too much memory when
> each loads an interpreter, you ought to consider getting more memory, using a
> connection pooler, changing how you do things, or some combination of the
> above.
> 
> --
> Joshua Tolley / eggyknap
> End Point Corporation
> http://www.endpoint.com
  

Re: [GENERAL] plpyhton

2010-11-26 Thread Joshua Tolley
On Fri, Nov 26, 2010 at 05:28:52PM +0530, c k wrote:
> Thanks for your reply.
> But if a database has 100+ connections, then doesn't loading such an
> interpreter for each one consume more memory and require more CPU? Do all PL
> languages behave in the same fashion?

If there are lots of connections, and each calls a plpython function (for
example), then each will load a python interpreter, and certainly that could
add up to serious memory usage. I can't speak for *every* PL; C functions
don't load any special interpreter, for instance, and I don't think there's
anything special you have to load to run SQL functions, beyond what gets
loaded anyway.

If you have problems with hundreds of connections using too much memory when
each loads an interpreter, you ought to consider getting more memory, using a
connection pooler, changing how you do things, or some combination of the
above.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




[GENERAL] Terms advice.

2010-11-26 Thread Dmitriy Igrishin
Hey all,

I am working on a C++ library for working with PostgreSQL.

I am trying to use strictly correct terminology.
One of my current dilemmas is how to distinguish the results
of commands. E.g., is it correct to call commands which return
tuples "queries", but commands which do not return tuples
(e.g. "BEGIN") "commands"?

Any advice is welcome!

-- 
// Dmitriy.


Re: [GENERAL] plpyhton

2010-11-26 Thread c k
Thanks for your reply.
But if a database has 100+ connections, then doesn't loading such an
interpreter for each one consume more memory and require more CPU? Do all PL
languages behave in the same fashion?

Regards,
CPK

On Thu, Nov 25, 2010 at 11:12 AM, Joshua Tolley  wrote:

> On Wed, Nov 24, 2010 at 11:56:16AM +0530, c k wrote:
> > Hello,
> > Does calling a pl/python function from each database connection load the
> > python interpreter each time? What are the effects of using pl/python
> > functions in an environment where the number of concurrent connections is
> > large and each user calls a pl/python function?
> >
> > Please give the details about how pl/python functions are executed.
> > Thanks and regards,
> >
> > CPK
>
> I don't know plpython terribly well, but for most PLs, calling them once in a
> session loads any interpreter they require. That interpreter remains loaded
> for the duration of the session. So each individual connection will load its
> own interpreter, once, at the time of the first function call requiring that
> interpreter. Most widely used languages also cache various bits of important
> information about the functions you run, the first time you run them in a
> session, to avoid needing to look up or calculate that information again when
> you run the function next time.
>
> --
> Joshua Tolley / eggyknap
> End Point Corporation
> http://www.endpoint.com
>


Re: [GENERAL] Question about catching exception

2010-11-26 Thread tv
> Hello
>
> you have to parse a sqlerrm variable

That's one way to do it. Another, more complex but in many cases more
correct, approach is to use two separate blocks:

BEGIN
   ... do stuff involving constraint A
EXCEPTION
   WHEN unique_violation THEN ...
END;

BEGIN
   ... do stuff involving constraint B
EXCEPTION
   WHEN unique_violation THEN ...
END;

But that's not possible if there are two unique constraints involved in a
single SQL statement (e.g. inserting into a table with two unique
constraints).
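
On PostgreSQL 9.3 and later there is a third option that avoids parsing SQLERRM entirely: read the violated constraint's name from the stacked diagnostics. A sketch, assuming a table foo with two unique constraints named foo_a_key and foo_b_key:

```sql
DO $$
DECLARE
  v_constraint text;
BEGIN
  INSERT INTO foo VALUES (1, 2);
EXCEPTION WHEN unique_violation THEN
  -- CONSTRAINT_NAME is available here since PostgreSQL 9.3:
  GET STACKED DIAGNOSTICS v_constraint = CONSTRAINT_NAME;
  IF v_constraint = 'foo_a_key' THEN
    RAISE NOTICE 'duplicate value in column a';
  ELSE
    RAISE NOTICE 'duplicate value in column b';
  END IF;
END
$$;
```

This works even when a single statement can violate either of two constraints, which is exactly the case where the two-block approach above falls short.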

regards
Tomas




Re: [GENERAL] Question about catching exception

2010-11-26 Thread Pavel Stehule
Hello,

You have to parse the sqlerrm variable:

CREATE OR REPLACE FUNCTION public.test(a integer, b integer)
 RETURNS void
 LANGUAGE plpgsql
AS $function$
begin
  insert into foo values(a,b);
  exception when  unique_violation then
raise notice '% %', sqlerrm, sqlstate;
end;
$function$

postgres=# select test(4,2);
NOTICE:  duplicate key value violates unique constraint "foo_b_key" 23505
 test
──

(1 row)

Time: 9.801 ms
postgres=# select test(3,2);
NOTICE:  duplicate key value violates unique constraint "foo_a_key" 23505
 test
──

(1 row)

Time: 17.167 ms

regards

Pavel Stehule



> If the "do stuff" part can result in two different unique_violation
> exceptions (having two unique constraints), how can I detect which one
> was triggered?



[GENERAL] Question about catching exception

2010-11-26 Thread A B
Hello!

I have a question about catching exceptions.

If I write a plpgsql function like this:

begin
   do stuff;
exception
   when X then
  ...
   when Y then
  ...
end;

If the "do stuff" part can result in two different unique_violation
exceptions (having two unique constraints), how can I detect which one
was triggered?
