[SQL] No response from the backend

2011-12-07 Thread feng.zhou
Hi
I have run into a problem using PostgreSQL 8.4.2, and I am not sure whether it is a bug. The details are as follows:
I have a client program that communicates with PostgreSQL via ODBC on Windows 7. After running for several months, the client program hung at 00:01 on 2011-11-26 and only recovered after 8 hours. When it recovered, the client program logged the following messages:

2011/11/26 08:38:05.527,  : SQL ERROR: GetSelectRowNum: No response from the 
backend;
No response from the backend
2011/11/26 08:38:05.527,   : SQL ERROR: GetSelectRowNum: State:42601,Native:26
2011/11/26 08:38:05.527,   : SQL ERROR: GetSelectRowNum: Could not send 
Query(connection dead);
Could not send Query(connection dead)

I don't know how to solve this problem. Can someone explain what causes it? Thanks.

2011-12-07 



feng.zhou 


Re: [SQL] No response from the backend

2011-12-07 Thread Craig Ringer

On 12/07/2011 07:27 PM, feng.zhou wrote:

Hi
I have run into a problem using PostgreSQL 8.4.2, and I am not sure whether it is a bug. The details are as follows:
I have a client program that communicates with PostgreSQL via ODBC on Windows 7. After running for several months, the client program hung at 00:01 on 2011-11-26 and only recovered after 8 hours. When it recovered, the client program logged the following messages:
2011/11/26 08:38:05.527,  : SQL ERROR: GetSelectRowNum: 
No response from the backend;

No response from the backend
2011/11/26 08:38:05.527,   : SQL ERROR: GetSelectRowNum: State:42601,Native:26
2011/11/26 08:38:05.527,   : SQL ERROR: GetSelectRowNum: 
Could not send Query(connection dead);

Could not send Query(connection dead)
I don't know how to solve this problem. Can someone explain what causes it? Thanks.


I'd say you had a connectivity drop-out, and it only recovered when your 
TCP/IP connection timed out. Reduce your TCP/IP timeouts and/or enable 
keepalives if you have an unreliable network connection between client 
and server.
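
For example, server-side keepalives can be tuned in postgresql.conf (a sketch only; the values below are illustrative, and similar keepalive/timeout options can also be set on the client side):

   # postgresql.conf -- illustrative keepalive settings
   tcp_keepalives_idle = 60       # seconds of idle time before the first keepalive probe
   tcp_keepalives_interval = 10   # seconds between probes
   tcp_keepalives_count = 5       # unanswered probes before the connection is considered dead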


If that's not the case, then more detail please. Where do the client and 
server run (same machine?)? How are they connected? etc.


--
Craig Ringer


[SQL] Correlating Asterisk CDRs

2011-12-07 Thread Raj Mathur (राज माथुर)
Hi,

I'm trying to correlate Call Data Records (CDRs) from two Asterisk
servers, one of which uses the other for telephony.  The data is in
the tables cdr and cdr2.  With some indexes, the query and explain
result are:

explain analyse select cdr.calldate, cdr2.calldate,
(cdr2.calldate-cdr.calldate) as rtdur, cdr.clid, cdr.dst, cdr2.src,
cdr2.dst, cdr2.dstchannel, cdr2.lastapp, cdr2.duration,
cdr2.disposition from cdr, cdr2 where cdr2.calldate >= cdr.calldate
and cdr.clid=cdr2.clid and cdr.dst=substring(cdr2.dst from 4) order by
cdr.calldate, cdr2.calldate, cdr.clid limit 100;

                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=46782.15..46782.40 rows=100 width=109) (actual time=4077.866..4078.054 rows=100 loops=1)
   ->  Sort  (cost=46782.15..46785.33 rows=1272 width=109) (actual time=4077.863..4077.926 rows=100 loops=1)
         Sort Key: cdr.calldate, cdr2.calldate, cdr.clid
         Sort Method:  top-N heapsort  Memory: 42kB
         ->  Merge Join  (cost=2.95..46733.54 rows=1272 width=109) (actual time=0.070..3799.546 rows=168307 loops=1)
               Merge Cond: (((cdr.clid)::text = (cdr2.clid)::text) AND ((cdr.dst)::text = "substring"((cdr2.dst)::text, 4)))
               Join Filter: (cdr2.calldate >= cdr.calldate)
               ->  Index Scan using ick1 on cdr  (cost=0.00..34667.86 rows=208798 width=43) (actual time=0.022..434.246 rows=208798 loops=1)
               ->  Index Scan using i2k1 on cdr2  (cost=0.00..9960.89 rows=65449 width=88) (actual time=0.011..391.599 rows=240981 loops=1)
 Total runtime: 4078.184 ms
(10 rows)

Is there any way to make this query faster?  I already have an index
i2k1 on substring(cdr2.dst from 4), which is being used.

Application
---

I'm looking for all caller records in cdr2 that have the same callerid
(clid) and destination (dst) and were started on cdr2 after they were
started on cdr.  cdr2.dst is the same as cdr.dst but with a
3-character prefix.
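
(For reference, the expression index mentioned above would have been created along these lines -- a sketch only, since the exact CREATE INDEX statement isn't shown:)

   -- sketch of the existing expression index on cdr2
   CREATE INDEX i2k1 ON cdr2 (substring(dst from 4));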

Regards,

-- Raj
-- 
Raj Mathur  || r...@kandalaya.org   || GPG:
http://otheronepercent.blogspot.com || http://kandalaya.org || CC68
It is the mind that moves   || http://schizoid.in   || D17F

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Correlating Asterisk CDRs

2011-12-07 Thread Julien Cigar

Try to raise work_mem
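
For example, it can be raised just for the current session before re-running the query (the value is only illustrative):

   SET work_mem = '256MB';
   -- re-run the EXPLAIN ANALYZE from the previous message
   RESET work_mem;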

On 12/07/2011 15:34, Raj Mathur (राज माथुर) wrote:

Hi,

I'm trying to correlate Call Data Records (CDRs) from two Asterisk
servers, one of which uses the other for telephony.  The data is in
the tables cdr and cdr2.  With some indexes, the query and explain
result are:

explain analyse select cdr.calldate, cdr2.calldate,
(cdr2.calldate-cdr.calldate) as rtdur, cdr.clid, cdr.dst, cdr2.src,
cdr2.dst, cdr2.dstchannel, cdr2.lastapp, cdr2.duration,
cdr2.disposition from cdr, cdr2 where cdr2.calldate>= cdr.calldate
and cdr.clid=cdr2.clid and cdr.dst=substring(cdr2.dst from 4) order by
cdr.calldate, cdr2.calldate, cdr.clid limit 100;

                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=46782.15..46782.40 rows=100 width=109) (actual time=4077.866..4078.054 rows=100 loops=1)
   ->  Sort  (cost=46782.15..46785.33 rows=1272 width=109) (actual time=4077.863..4077.926 rows=100 loops=1)
         Sort Key: cdr.calldate, cdr2.calldate, cdr.clid
         Sort Method:  top-N heapsort  Memory: 42kB
         ->  Merge Join  (cost=2.95..46733.54 rows=1272 width=109) (actual time=0.070..3799.546 rows=168307 loops=1)
               Merge Cond: (((cdr.clid)::text = (cdr2.clid)::text) AND ((cdr.dst)::text = "substring"((cdr2.dst)::text, 4)))
               Join Filter: (cdr2.calldate >= cdr.calldate)
               ->  Index Scan using ick1 on cdr  (cost=0.00..34667.86 rows=208798 width=43) (actual time=0.022..434.246 rows=208798 loops=1)
               ->  Index Scan using i2k1 on cdr2  (cost=0.00..9960.89 rows=65449 width=88) (actual time=0.011..391.599 rows=240981 loops=1)
 Total runtime: 4078.184 ms
(10 rows)

Is there any way to make this query faster?  I already have an index
i2k1 on substring(cdr2.dst from 4), which is being used.

Application
---

I'm looking for all caller records in cdr2 that have the same callerid
(clid) and destination (dst) and were started on cdr2 after they were
started on cdr.  cdr2.dst is the same as cdr.dst but with a
3-character prefix.

Regards,

-- Raj



--
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Correlating Asterisk CDRs

2011-12-07 Thread Raj Mathur (राज माथुर)
On Wednesday 07 Dec 2011, Julien Cigar wrote:
> Try to raise work_mem

Odd, I tried adding work_mem=50MB / 256MB / 1024MB into postgres.conf 
and the times actually went up to over 12 seconds.  Leaving it commented 
results in the 4-second time originally posted.

Regards,

-- Raj

> On 12/07/2011 15:34, Raj Mathur (राज माथुर) wrote:
> > I'm trying to correlate Call Data Records (CDRs) from two Asterisk
> > servers, one of which uses the other for telephony.  The data is in
> > the tables cdr and cdr2.  With some indexes, the query and explain
> > result are:
> > 
> > explain analyse select cdr.calldate, cdr2.calldate,
> > (cdr2.calldate-cdr.calldate) as rtdur, cdr.clid, cdr.dst, cdr2.src,
> > cdr2.dst, cdr2.dstchannel, cdr2.lastapp, cdr2.duration,
> > cdr2.disposition from cdr, cdr2 where cdr2.calldate>= cdr.calldate
> > and cdr.clid=cdr2.clid and cdr.dst=substring(cdr2.dst from 4) order
> > by cdr.calldate, cdr2.calldate, cdr.clid limit 100;
> > 
> >                                                           QUERY PLAN
> > ------------------------------------------------------------------------------------------------------------------------------
> >  Limit  (cost=46782.15..46782.40 rows=100 width=109) (actual time=4077.866..4078.054 rows=100 loops=1)
> >    ->  Sort  (cost=46782.15..46785.33 rows=1272 width=109) (actual time=4077.863..4077.926 rows=100 loops=1)
> >          Sort Key: cdr.calldate, cdr2.calldate, cdr.clid
> >          Sort Method:  top-N heapsort  Memory: 42kB
> >          ->  Merge Join  (cost=2.95..46733.54 rows=1272 width=109) (actual time=0.070..3799.546 rows=168307 loops=1)
> >                Merge Cond: (((cdr.clid)::text = (cdr2.clid)::text) AND ((cdr.dst)::text = "substring"((cdr2.dst)::text, 4)))
> >                Join Filter: (cdr2.calldate >= cdr.calldate)
> >                ->  Index Scan using ick1 on cdr  (cost=0.00..34667.86 rows=208798 width=43) (actual time=0.022..434.246 rows=208798 loops=1)
> >                ->  Index Scan using i2k1 on cdr2  (cost=0.00..9960.89 rows=65449 width=88) (actual time=0.011..391.599 rows=240981 loops=1)
> >  Total runtime: 4078.184 ms
> > (10 rows)
> > 
> > Is there any way to make this query faster?  I already have an
> > index i2k1 on substring(cdr2.dst from 4), which is being used.
> > 
> > Application
> > ---
> > 
> > I'm looking for all caller records in cdr2 that have the same
> > callerid (clid) and destination (dst) and were started on cdr2
> > after they were started on cdr.  cdr2.dst is the same as cdr.dst
> > but with a 3-character prefix.

-- 
Raj Mathur  || r...@kandalaya.org   || GPG:
http://otheronepercent.blogspot.com || http://kandalaya.org || CC68
It is the mind that moves   || http://schizoid.in   || D17F

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Correlating Asterisk CDRs

2011-12-07 Thread Julien Cigar

On 12/07/2011 16:14, Raj Mathur (राज माथुर) wrote:

On Wednesday 07 Dec 2011, Julien Cigar wrote:

Try to raise work_mem


Odd, I tried adding work_mem=50MB / 256MB / 1024MB into postgres.conf
and the times actually went up to over 12 seconds.  Leaving it commented
results in the 4-second time originally posted.


Sorry, I replied too fast. There is no external disk merge, so increasing 
work_mem is useless. The problem is the merge join. How big are the tables? 
Are the statistics up to date?
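
A quick way to check and refresh them (a sketch, using the table names from the thread):

   -- when were the tables last analyzed?
   SELECT relname, last_analyze, last_autoanalyze
   FROM pg_stat_user_tables
   WHERE relname IN ('cdr', 'cdr2');

   -- refresh the planner statistics
   ANALYZE cdr;
   ANALYZE cdr2;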




Regards,

-- Raj


On 12/07/2011 15:34, Raj Mathur (राज माथुर) wrote:

I'm trying to correlate Call Data Records (CDRs) from two Asterisk
servers, one of which uses the other for telephony.  The data is in
the tables cdr and cdr2.  With some indexes, the query and explain
result are:

explain analyse select cdr.calldate, cdr2.calldate,
(cdr2.calldate-cdr.calldate) as rtdur, cdr.clid, cdr.dst, cdr2.src,
cdr2.dst, cdr2.dstchannel, cdr2.lastapp, cdr2.duration,
cdr2.disposition from cdr, cdr2 where cdr2.calldate>= cdr.calldate
and cdr.clid=cdr2.clid and cdr.dst=substring(cdr2.dst from 4) order
by cdr.calldate, cdr2.calldate, cdr.clid limit 100;

                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=46782.15..46782.40 rows=100 width=109) (actual time=4077.866..4078.054 rows=100 loops=1)
   ->  Sort  (cost=46782.15..46785.33 rows=1272 width=109) (actual time=4077.863..4077.926 rows=100 loops=1)
         Sort Key: cdr.calldate, cdr2.calldate, cdr.clid
         Sort Method:  top-N heapsort  Memory: 42kB
         ->  Merge Join  (cost=2.95..46733.54 rows=1272 width=109) (actual time=0.070..3799.546 rows=168307 loops=1)
               Merge Cond: (((cdr.clid)::text = (cdr2.clid)::text) AND ((cdr.dst)::text = "substring"((cdr2.dst)::text, 4)))
               Join Filter: (cdr2.calldate >= cdr.calldate)
               ->  Index Scan using ick1 on cdr  (cost=0.00..34667.86 rows=208798 width=43) (actual time=0.022..434.246 rows=208798 loops=1)
               ->  Index Scan using i2k1 on cdr2  (cost=0.00..9960.89 rows=65449 width=88) (actual time=0.011..391.599 rows=240981 loops=1)
 Total runtime: 4078.184 ms
(10 rows)

Is there any way to make this query faster?  I already have an
index i2k1 on substring(cdr2.dst from 4), which is being used.

Application
---

I'm looking for all caller records in cdr2 that have the same
callerid (clid) and destination (dst) and were started on cdr2
after they were started on cdr.  cdr2.dst is the same as cdr.dst
but with a 3-character prefix.





--
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


[SOLVED] Re: [SQL] Correlating Asterisk CDRs

2011-12-07 Thread Raj Mathur (राज माथुर)
On Wednesday 07 Dec 2011, Raj Mathur (राज माथुर) wrote:
> On Wednesday 07 Dec 2011, Julien Cigar wrote:
> > Try to raise work_mem
> 
> Odd, I tried adding work_mem=50MB / 256MB / 1024MB into postgres.conf
> and the times actually went up to over 12 seconds.  Leaving it
> commented results in the 4-second time originally posted.

Seems that the time is constant at around 13 seconds once you go above 
200 records or so, which is acceptable.  Thanks for your help.

Regards,

-- Raj

> > On 12/07/2011 15:34, Raj Mathur (राज माथुर) wrote:
> > > I'm trying to correlate Call Data Records (CDRs) from two
> > > Asterisk servers, one of which uses the other for telephony. 
> > > The data is in the tables cdr and cdr2.  With some indexes, the
> > > query and explain result are:
> > > 
> > > explain analyse select cdr.calldate, cdr2.calldate,
> > > (cdr2.calldate-cdr.calldate) as rtdur, cdr.clid, cdr.dst,
> > > cdr2.src, cdr2.dst, cdr2.dstchannel, cdr2.lastapp,
> > > cdr2.duration,
> > > cdr2.disposition from cdr, cdr2 where cdr2.calldate>=
> > > cdr.calldate and cdr.clid=cdr2.clid and
> > > cdr.dst=substring(cdr2.dst from 4) order by cdr.calldate,
> > > cdr2.calldate, cdr.clid limit 100;
> > > 
> > >                                                           QUERY PLAN
> > > ------------------------------------------------------------------------------------------------------------------------------
> > >  Limit  (cost=46782.15..46782.40 rows=100 width=109) (actual time=4077.866..4078.054 rows=100 loops=1)
> > >    ->  Sort  (cost=46782.15..46785.33 rows=1272 width=109) (actual time=4077.863..4077.926 rows=100 loops=1)
> > >          Sort Key: cdr.calldate, cdr2.calldate, cdr.clid
> > >          Sort Method:  top-N heapsort  Memory: 42kB
> > >          ->  Merge Join  (cost=2.95..46733.54 rows=1272 width=109) (actual time=0.070..3799.546 rows=168307 loops=1)
> > >                Merge Cond: (((cdr.clid)::text = (cdr2.clid)::text) AND ((cdr.dst)::text = "substring"((cdr2.dst)::text, 4)))
> > >                Join Filter: (cdr2.calldate >= cdr.calldate)
> > >                ->  Index Scan using ick1 on cdr  (cost=0.00..34667.86 rows=208798 width=43) (actual time=0.022..434.246 rows=208798 loops=1)
> > >                ->  Index Scan using i2k1 on cdr2  (cost=0.00..9960.89 rows=65449 width=88) (actual time=0.011..391.599 rows=240981 loops=1)
> > >  Total runtime: 4078.184 ms
> > > (10 rows)
> > > 
> > > Is there any way to make this query faster?  I already have an
> > > index i2k1 on substring(cdr2.dst from 4), which is being used.
> > > 
> > > Application
> > > ---
> > > 
> > > I'm looking for all caller records in cdr2 that have the same
> > > callerid (clid) and destination (dst) and were started on cdr2
> > > after they were started on cdr.  cdr2.dst is the same as cdr.dst
> > > but with a 3-character prefix.


-- 
Raj Mathur  || r...@kandalaya.org   || GPG:
http://otheronepercent.blogspot.com || http://kandalaya.org || CC68
It is the mind that moves   || http://schizoid.in   || D17F

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Correlating Asterisk CDRs

2011-12-07 Thread Brent Dombrowski
On Dec 7, 2011, at 6:34 AM, Raj Mathur (राज माथुर) wrote:

> Hi,
> 
> I'm trying to correlate Call Data Records (CDRs) from two Asterisk
> servers, one of which uses the other for telephony.  The data is in
> the tables cdr and cdr2.  With some indexes, the query and explain
> result are:
> 
> explain analyse select cdr.calldate, cdr2.calldate,
> (cdr2.calldate-cdr.calldate) as rtdur, cdr.clid, cdr.dst, cdr2.src,
> cdr2.dst, cdr2.dstchannel, cdr2.lastapp, cdr2.duration,
> cdr2.disposition from cdr, cdr2 where cdr2.calldate >= cdr.calldate
> and cdr.clid=cdr2.clid and cdr.dst=substring(cdr2.dst from 4) order by
> cdr.calldate, cdr2.calldate, cdr.clid limit 100;
> 
>                                                           QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------------------
>  Limit  (cost=46782.15..46782.40 rows=100 width=109) (actual time=4077.866..4078.054 rows=100 loops=1)
>    ->  Sort  (cost=46782.15..46785.33 rows=1272 width=109) (actual time=4077.863..4077.926 rows=100 loops=1)
>          Sort Key: cdr.calldate, cdr2.calldate, cdr.clid
>          Sort Method:  top-N heapsort  Memory: 42kB
>          ->  Merge Join  (cost=2.95..46733.54 rows=1272 width=109) (actual time=0.070..3799.546 rows=168307 loops=1)
>                Merge Cond: (((cdr.clid)::text = (cdr2.clid)::text) AND ((cdr.dst)::text = "substring"((cdr2.dst)::text, 4)))
>                Join Filter: (cdr2.calldate >= cdr.calldate)
>                ->  Index Scan using ick1 on cdr  (cost=0.00..34667.86 rows=208798 width=43) (actual time=0.022..434.246 rows=208798 loops=1)
>                ->  Index Scan using i2k1 on cdr2  (cost=0.00..9960.89 rows=65449 width=88) (actual time=0.011..391.599 rows=240981 loops=1)
>  Total runtime: 4078.184 ms
> (10 rows)
> 
> Is there any way to make this query faster?  I already have an index
> i2k1 on substring(cdr2.dst from 4), which is being used.
> 
> Application
> ---
> 
> I'm looking for all caller records in cdr2 that have the same callerid
> (clid) and destination (dst) and were started on cdr2 after they were
> started on cdr.  cdr2.dst is the same as cdr.dst but with a
> 3-character prefix.
> 
> Regards,
> 
> -- Raj
> -- 
> Raj Mathur  || r...@kandalaya.org   || GPG:
> http://otheronepercent.blogspot.com || http://kandalaya.org || CC68
> It is the mind that moves   || http://schizoid.in   || D17F
> 
> -- 
> Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-sql


You have a non-equi join in there (cdr2.calldate >= cdr.calldate). I would try 
to get rid of that. It's increasing the number of rows in the result set and 
will only get worse as your data set grows.
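
If the application allows it, one way to bound it is to turn the open-ended comparison into a range -- a sketch only, assuming (for illustration) that the matching cdr2 record always starts within a day of the cdr record:

   ... where cdr.clid = cdr2.clid
     and cdr.dst = substring(cdr2.dst from 4)
     and cdr2.calldate >= cdr.calldate
     and cdr2.calldate <  cdr.calldate + interval '1 day'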

Brent.


-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Correlating Asterisk CDRs

2011-12-07 Thread Raj Mathur (राज माथुर)
On Wednesday 07 Dec 2011, Brent Dombrowski wrote:
> On Dec 7, 2011, at 6:34 AM, Raj Mathur (राज माथुर) wrote:
> > I'm trying to correlate Call Data Records (CDRs) from two Asterisk
> > servers, one of which uses the other for telephony.  The data is in
> > the tables cdr and cdr2.  With some indexes, the query and explain
> > result are:
> > 
> > explain analyse select cdr.calldate, cdr2.calldate,
> > (cdr2.calldate-cdr.calldate) as rtdur, cdr.clid, cdr.dst, cdr2.src,
> > cdr2.dst, cdr2.dstchannel, cdr2.lastapp, cdr2.duration,
> > cdr2.disposition from cdr, cdr2 where cdr2.calldate >= cdr.calldate
> > and cdr.clid=cdr2.clid and cdr.dst=substring(cdr2.dst from 4) order
> > by cdr.calldate, cdr2.calldate, cdr.clid limit 100;
> > 
> >                                                           QUERY PLAN
> > ------------------------------------------------------------------------------------------------------------------------------
> >  Limit  (cost=46782.15..46782.40 rows=100 width=109) (actual time=4077.866..4078.054 rows=100 loops=1)
> >    ->  Sort  (cost=46782.15..46785.33 rows=1272 width=109) (actual time=4077.863..4077.926 rows=100 loops=1)
> >          Sort Key: cdr.calldate, cdr2.calldate, cdr.clid
> >          Sort Method:  top-N heapsort  Memory: 42kB
> >          ->  Merge Join  (cost=2.95..46733.54 rows=1272 width=109) (actual time=0.070..3799.546 rows=168307 loops=1)
> >                Merge Cond: (((cdr.clid)::text = (cdr2.clid)::text) AND ((cdr.dst)::text = "substring"((cdr2.dst)::text, 4)))
> >                Join Filter: (cdr2.calldate >= cdr.calldate)
> >                ->  Index Scan using ick1 on cdr  (cost=0.00..34667.86 rows=208798 width=43) (actual time=0.022..434.246 rows=208798 loops=1)
> >                ->  Index Scan using i2k1 on cdr2  (cost=0.00..9960.89 rows=65449 width=88) (actual time=0.011..391.599 rows=240981 loops=1)
> >  Total runtime: 4078.184 ms
> > (10 rows)
> > 
> > Is there any way to make this query faster?  I already have an
> > index i2k1 on substring(cdr2.dst from 4), which is being used.
> 
> You have a non-equi join in there (cdr2.calldate >= cdr.calldate). I
> would try to get rid of that. It's increasing the number of rows in
> the result set and will only get worse as your data set grows.

Seen.  At the moment (and in the foreseeable future) the query is going 
to be run with a condition of the form "cdr.calldate >= 
date_trunc('day', now())", so the set of matches from cdr2 will remain 
relatively constant.

However, you're right, the scope of cdr2 calldates also ought to be time 
limited.  Have added an "and cdr2.calldate < cdr.calldate + interval '1 
day'" to the query.  Thanks.

Regards,

-- Raj
-- 
Raj Mathur  || r...@kandalaya.org   || GPG:
http://otheronepercent.blogspot.com || http://kandalaya.org || CC68
It is the mind that moves   || http://schizoid.in   || D17F

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Correlating Asterisk CDRs

2011-12-07 Thread Scott Marlowe
2011/12/7 Raj Mathur (राज माथुर) :
>                                                           QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------------------
>  Limit  (cost=46782.15..46782.40 rows=100 width=109) (actual time=4077.866..4078.054 rows=100 loops=1)
>    ->  Sort  (cost=46782.15..46785.33 rows=1272 width=109) (actual time=4077.863..4077.926 rows=100 loops=1)
>          Sort Key: cdr.calldate, cdr2.calldate, cdr.clid
>          Sort Method:  top-N heapsort  Memory: 42kB
>          ->  Merge Join  (cost=2.95..46733.54 rows=1272 width=109) (actual time=0.070..3799.546 rows=168307 loops=1)

Two things to look at here.  First, the estimated and actual row counts
differ by a factor of more than 100, which means the query planner may be
making suboptimal choices about the plan it runs.  Increasing the
statistics target on the columns used in the query may help: raise it,
re-analyze, and see if you get a closer estimate.  To test whether the
merge join is really the best choice, you can use the corresponding
enable_xxx setting (in this case set enable_mergejoin=off), run the query
again through EXPLAIN ANALYZE, and see whether the performance gets any
better.
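
Concretely, the two experiments might look like this (the column choice and statistics value are illustrative):

   -- raise the statistics target on the join columns, then re-analyze
   ALTER TABLE cdr  ALTER COLUMN clid SET STATISTICS 1000;
   ALTER TABLE cdr2 ALTER COLUMN clid SET STATISTICS 1000;
   ANALYZE cdr;
   ANALYZE cdr2;

   -- check whether the merge join is really the best choice
   SET enable_mergejoin = off;
   -- ... re-run the EXPLAIN ANALYZE from earlier in the thread ...
   SET enable_mergejoin = on;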

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


[SQL] Re: Re: [SQL] No response from the backend

2011-12-07 Thread feng.zhou
Thanks for your answer.
The client and server run on the same machine (Windows 7). They are connected 
via ODBC; the client accesses the server by calling the ODBC API. No connection 
or query timeout is set in the client. 


2011-12-08 



feng.zhou 



From: Craig Ringer 
Sent: 2011-12-07 21:03:46 
To: feng.zhou 
Cc: pgsql-sql 
Subject: Re: [SQL] No response from the backend 
 
On 12/07/2011 07:27 PM, feng.zhou wrote: 
Hi
I have run into a problem using PostgreSQL 8.4.2, and I am not sure whether it 
is a bug. The details are as follows:
I have a client program that communicates with PostgreSQL via ODBC on Windows 7. 
After running for several months, the client program hung at 00:01 on 2011-11-26 
and only recovered after 8 hours. When it recovered, the client program logged 
the following messages: 
message is following: 

2011/11/26 08:38:05.527,  : SQL ERROR: GetSelectRowNum: No response from the 
backend;
No response from the backend
2011/11/26 08:38:05.527,   : SQL ERROR: GetSelectRowNum: State:42601,Native:26
2011/11/26 08:38:05.527,   : SQL ERROR: GetSelectRowNum: Could not send 
Query(connection dead);
Could not send Query(connection dead)

I don't know how to solve this problem. Can someone explain what causes it? Thanks.

I'd say you had a connectivity drop-out, and it only recovered when your TCP/IP 
connection timed out. Reduce your TCP/IP timeouts and/or enable keepalives if 
you have an unreliable network connection between client and server.

If that's not the case, then more detail please. Where do the client and server 
run (same machine?)? How are they connected? etc.

--
Craig Ringer


Re: [SQL] Correlating Asterisk CDRs

2011-12-07 Thread Raj Mathur (राज माथुर)
On Thursday 08 Dec 2011, Scott Marlowe wrote:
> 2011/12/7 Raj Mathur (राज माथुर) :
> >                                                           QUERY PLAN
> > ------------------------------------------------------------------------------------------------------------------------------
> >  Limit  (cost=46782.15..46782.40 rows=100 width=109) (actual time=4077.866..4078.054 rows=100 loops=1)
> >    ->  Sort  (cost=46782.15..46785.33 rows=1272 width=109) (actual time=4077.863..4077.926 rows=100 loops=1)
> >          Sort Key: cdr.calldate, cdr2.calldate, cdr.clid
> >          Sort Method:  top-N heapsort  Memory: 42kB
> >          ->  Merge Join  (cost=2.95..46733.54 rows=1272 width=109) (actual time=0.070..3799.546 rows=168307 loops=1)
> 
> Two things to look at here.  First, the estimated and actual row counts
> differ by a factor of more than 100, which means the query planner may
> be making suboptimal choices about the plan it runs.  Increasing the
> statistics target on the columns used in the query may help: raise it,
> re-analyze, and see if you get a closer estimate.  To test whether the
> merge join is really the best choice, you can use the corresponding
> enable_xxx setting (in this case set enable_mergejoin=off), run the
> query again through EXPLAIN ANALYZE, and see whether the performance
> gets any better.

Fixed the first -- all it needed was a vacuum analyse, and the performance
improved by 50%.  Enabling/disabling mergejoin doesn't seem to make any
difference to the timing.  However, after the vacuum analyse the planner
is now using:

 Limit  (cost=37499.24..37502.08 rows=1138 width=109) (actual time=6355.308..6709.661 rows=168307 loops=1)
   ->  Sort  (cost=37499.24..37502.08 rows=1138 width=109) (actual time=6355.304..6491.595 rows=168307 loops=1)
         Sort Key: cdr.calldate, cdr2.calldate, cdr.clid
         Sort Method:  quicksort  Memory: 45211kB
         ->  Merge Join  (cost=34720.94..37441.47 rows=1138 width=109) (actual time=3438.318..5853.947 rows=168307 loops=1)
               Merge Cond: (((cdr.dst)::text = ("substring"((cdr2.dst)::text, 4))) AND ((cdr.clid)::text = (cdr2.clid)::text))
               Join Filter: (cdr2.calldate >= cdr.calldate)
               ->  Sort  (cost=26987.11..27509.10 rows=208798 width=43) (actual time=2631.166..2833.926 rows=208748 loops=1)
                     Sort Key: cdr.dst, cdr.clid
                     Sort Method:  quicksort  Memory: 19696kB
                     ->  Seq Scan on cdr  (cost=0.00..8537.98 rows=208798 width=43) (actual time=0.009..211.330 rows=208798 loops=1)
               ->  Sort  (cost=7684.78..7848.41 rows=65449 width=89) (actual time=807.031..991.649 rows=240981 loops=1)
                     Sort Key: ("substring"((cdr2.dst)::text, 4)), cdr2.clid
                     Sort Method:  quicksort  Memory: 9889kB
                     ->  Seq Scan on cdr2  (cost=0.00..2449.49 rows=65449 width=89) (actual time=0.021..125.630 rows=65449 loops=1)
 Total runtime: 6823.029 ms

Can you see any place here where adding indexes may help?
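
(One idea, sketched only: the two explicit sorts feed the merge join on (dst, clid) and (substring(dst from 4), clid), so indexes matching that ordering might let the planner skip the sorts. The index names below are invented:)

   CREATE INDEX cdr_dst_clid_idx   ON cdr  (dst, clid);
   CREATE INDEX cdr2_sdst_clid_idx ON cdr2 (substring(dst from 4), clid);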

Regards,

-- Raj
-- 
Raj Mathur  || r...@kandalaya.org   || GPG:
http://otheronepercent.blogspot.com || http://kandalaya.org || CC68
It is the mind that moves   || http://schizoid.in   || D17F

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] No response from the backend

2011-12-07 Thread Craig Ringer

On 12/08/2011 09:38 AM, feng.zhou wrote:

Thanks for your answer.
The client and server run on the same machine (Windows 7). They are 
connected via ODBC; the client accesses the server by calling the ODBC 
API. No connection or query timeout is set in the client.




Since client and server are on the same host, it's unlikely to be 
network connectivity. Check the server logs.
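
On Windows the server log normally ends up under the data directory once the logging collector is enabled; a minimal postgresql.conf sketch (values illustrative):

   logging_collector = on
   log_directory = 'pg_log'        # relative to the data directory
   log_line_prefix = '%t [%p] '    # timestamp and backend PID
   log_min_messages = warning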


--
Craig Ringer


[SQL] Re: Re: [SQL] No response from the backend

2011-12-07 Thread feng.zhou
Thanks, I will try it


2011-12-08 



feng.zhou 



From: Craig Ringer 
Sent: 2011-12-08 11:49:33 
To: feng.zhou 
Cc: pgsql-sql 
Subject: Re: [SQL] No response from the backend 
 
On 12/08/2011 09:38 AM, feng.zhou wrote: 
Thanks for your answer.
The client and server run on the same machine (Windows 7). They are connected 
via ODBC; the client accesses the server by calling the ODBC API. No connection 
or query timeout is set in the client. 



Since client and server are on the same host, it's unlikely to be network 
connectivity. Check the server logs.

--
Craig Ringer


[SQL] prepared statements

2011-12-07 Thread Vad N
Hi.

How can i pass a set of values to prepared statement?

example
I have a prepared query:
select * from users in ( $1 )

i would like to pass: 1,2,3,4 and get:

select * from users in ( 1,2,3,4 )

Any ideas?


[SQL] Question on imports with foreign keys

2011-12-07 Thread Andreas

Hi,

Suppose you need to import a CSV with standard columns like name, 
address, phone, ... and some additional text columns that need to be 
split off into referenced tables.


Those lookup tables will only be needed for a project with a limited 
lifetime, so I create a schema that might be called "project_x" and 
create the necessary lookup tables there.


The core of the import will be added to the customers table, which has 
an unlimited lifespan. The customers table has a PKey id, which is a serial.


I don't want to add FKey columns to customers for the new lookup 
tables, so I create another table in project_x, "projectinfos", that 
stores those FKeys plus another FKey that references customers.id.


First question: is this a stupid approach?

If not: what is the easiest way to find the customers.id of the new 
customers so I can insert the projectinfos?
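
(A rough sketch of the layout described above, under the stated assumptions -- all names besides customers and projectinfos are invented; the RETURNING clause is one way to capture the new customers.id values during the import:)

   CREATE SCHEMA project_x;

   CREATE TABLE project_x.some_lookup (
       id   serial PRIMARY KEY,
       name text NOT NULL
   );

   CREATE TABLE project_x.projectinfos (
       customer_id    integer NOT NULL REFERENCES customers (id),
       some_lookup_id integer REFERENCES project_x.some_lookup (id)
   );

   INSERT INTO customers (name, address, phone)
   VALUES ('Example Customer', 'Example Street 1', '555-0100')
   RETURNING id;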


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] prepared statements

2011-12-07 Thread Pavel Stehule
Hello

2011/12/8 Vad N :
>
> Hi.
>
> How can i pass a set of values to prepared statement?
>
> example
> I have a prepared query:
> select * from users in ( $1 )
>
> i would like to pass: 1,2,3,4 and get:
>
> select * from users in ( 1,2,3,4 )
>
> Any ideas?

use an array parameter, please
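
For example (a sketch; 'id' is assumed to be the users key column, and the exact placeholder syntax depends on the client library):

   PREPARE users_by_id (int[]) AS
       SELECT * FROM users WHERE id = ANY ($1);

   EXECUTE users_by_id (ARRAY[1, 2, 3, 4]);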

regards

Pavel Stehule

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql