Re: [GENERAL] new databases using a template.

2011-03-07 Thread Guillaume Lelarge
Le 01/03/2011 07:42, Malm Paul a écrit :
 Hi,
 I'm trying to create a new database by using a template database. But it is 
 not possible. The error says that someone is using the template, but no 
 one is using it.

I would bet *you* are connected with pgadmin to the template1 database.

 I'm using PgAdmin III ver 1.1.0. Has someone seen something like this?

I really hope you're not using 1.1.0. I've never seen this release and,
according to its number, it may be 6 years old :)

You probably meant 1.10.0.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump slow with bytea data

2011-03-07 Thread chris r.
Merlin, first of all, thanks for your reply!

 hm.  where exactly is all this time getting spent?  Are you i/o bound?
 cpu bound? Is there any compression going on?
Very good questions. pg_dump -F c compresses by default at a moderate
level (per the manpage), whatever compression level 'moderate' actually means.
So yes, without explicitly activating it, we use compression.

For testing, I inserted a fraction of our huge table with bytea content
into the table 'testtable'. The next three outputs compare pg_dump for
this table with the default compression level, no compression, and low-level
compression at level 3. The time spent seems CPU-bound, as in the first
test case 90-100% of a CPU core is used the whole time.


(default compression)
time pg_dump -f /tmp/test.sql -F c -t testtable mydb
real    0m27.255s
user    0m26.383s
sys     0m0.180s

(low-level compression)
time pg_dump -f /tmp/test.sql -F c -Z 3 -t testtable mydb
real    0m8.883s
user    0m8.112s
sys     0m0.161s

(no compression)
time pg_dump -f /tmp/test.sql -F c -Z 0 -t testtable mydb
real    0m1.892s
user    0m0.074s
sys     0m0.279s

To summarize, in our scenario, moderate-level compression caused a
factor-14 slowdown.



In another test, I'll compare pg_dump of a table with textual content
that I created stupidly with:

 select (t/23.0)::text||(t/17.0)::text
 into testtable
 from generate_series(1, 1000000) t;

Very much to my surprise, dumping this table did not show such a huge
difference when using compression: a default-compressed pg_dump took
2.4s, whereas a non-compressed pg_dump took 2.0s (which is merely a factor
of 1.2x). However, when expanding the series to 3 million rows (instead of
1 million), the compressed pg_dump took 7.0s, whereas a non-compressed
pg_dump ran for 2.4s only (a factor of 3x). Does this show that compression
takes relatively longer the more data it needs to compress? Memory
consumption was less than 12 MB during testing.


 Maybe this is a
 performance issue inside pg_dump itself, not necessarily a text/binary
 issue (i have a hard time believing going from b64-hex is 10x slower
 on format basis alone).  Can you post times comparing manual COPY via
 text, manual COPY via binary, and pg_dump -F c?
Again, valid points. As a next step, I'll compare the COPY variants.

time psql mydb -c "COPY testtable TO '/tmp/test.sql' WITH (FORMAT 'text');"
real    0m1.712s
user    0m0.001s
sys     0m0.004s

In text format, the time it takes to COPY testtable seems very much like
the time it takes to run pg_dump without compression.

Interestingly, COPYing testtable with binary format gives another factor
3.6x speedup:

time psql mydb -c "COPY testtable TO '/tmp/test.sql' WITH (FORMAT 'binary');"
real    0m0.470s
user    0m0.000s
sys     0m0.005s

As one might argue the table was too small to compare the runtimes, I
repeated this second comparison with two larger tables - both times
showing a 5x-6x speedup with binary format! In either format the
operation seemed CPU-bound (> 95% of a core was taken).



To summarize, I could speed up my backup by removing compression (factor
14) and using COPY in binary format instead of pg_dump (factor 5 to
factor 6). However, only the first option would keep data integrity. To
have an easy integrity-safe backup, IMHO, the second option can only be
achieved by adding a switch to pg_dump that allows binary output.


Any comments on these measurements? Thanks again for your input!

Regards,
Chris

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PG and dynamic statements in stored procedures/triggers?

2011-03-07 Thread Durumdara
Hi!

In other RDBMSs I found a way to make dynamic statements.
I can use variables, or concatenate the SQL segments, and execute it all.

:tablename = call CreateTempTable;
insert into :tablename 
drop table :tablename

or (Firebird-like cursor handling):

sql = 'select * from ' || :tablename || ' where...'
for select :sql 
...

Can I do the same thing in PostgreSQL too?

Thanks:
   dd


Re: [GENERAL] PG and dynamic statements in stored procedures/triggers?

2011-03-07 Thread Vibhor Kumar

On Mar 7, 2011, at 8:02 PM, Durumdara wrote:

 Hi!
 
 In other RDBMS I found a way to make dynamic statements.
 I can use variables, or concat the SQL segments, and execute it all.
 
 :tablename = call CreateTempTable; 
 insert into :tablename  
 drop table :tablename 
 
 or (FireBird like cursor handling): 
 
 sql = select * from  || :tablename ||  where... 
 for select :sql  
 ...
 
 Can I do same thing in PGSQL too?
 
 Thanks:
dd

You can use the EXECUTE dynamic command of plpgsql:
http://www.postgresql.org/docs/8.4/static/plpgsql-statements.html
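
A minimal sketch of the pattern (the function and table names here are just
made-up examples, not something from your schema):

CREATE OR REPLACE FUNCTION count_rows(tablename text) RETURNS bigint AS $$
DECLARE
    result bigint;
BEGIN
    -- quote_ident() protects the dynamically supplied identifier
    EXECUTE 'SELECT count(*) FROM ' || quote_ident(tablename) INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

-- usage:  SELECT count_rows('mytable');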

Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
vibhor.ku...@enterprisedb.com
Blog:http://vibhork.blogspot.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG and dynamic statements in stored procedures/triggers?

2011-03-07 Thread Adrian Klaver
On Monday, March 07, 2011 6:32:44 am Durumdara wrote:
 Hi!
 
 In other RDBMS I found a way to make dynamic statements.
 I can use variables, or concat the SQL segments, and execute it all.
 
 :tablename = call CreateTempTable;
 
 insert into :tablename 
 drop table :tablename
 
 or (FireBird like cursor handling):
 
 sql = select * from  || :tablename ||  where...
 for select :sql 
 ...
 
 Can I do same thing in PGSQL too?
 
 Thanks:
dd

http://www.postgresql.org/docs/9.0/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG and dynamic statements in stored procedures/triggers?

2011-03-07 Thread Durumdara
Hi!

Thanks!

How do I create a cursor or a 'for select' loop in PostgreSQL in a dynamic way?

For example

:tbl = GenTempTableName()
insert into :tbl...
insert into :tbl...
insert into :tbl...

for select :part_id from :tbl begin
exec 'select count(*) from subitems where id = ?' using :part_id into
:sumof
update :tbl set sumof = :sumof where part_id=:part_id
end;

Can you show me some example?

Thanks:
dd

2011/3/7 Adrian Klaver adrian.kla...@gmail.com

 On Monday, March 07, 2011 6:32:44 am Durumdara wrote:
  Hi!
 
  In other RDBMS I found a way to make dynamic statements.
  I can use variables, or concat the SQL segments, and execute it all.
 
  :tablename = call CreateTempTable;
 
  insert into :tablename 
  drop table :tablename
 
  or (FireBird like cursor handling):
 
  sql = select * from  || :tablename ||  where...
  for select :sql 
  ...
 
  Can I do same thing in PGSQL too?
 
  Thanks:
 dd


 http://www.postgresql.org/docs/9.0/interactive/plpgsql-statements.html#PLPGSQL-
 STATEMENTS-EXECUTING-DYN
 --
 Adrian Klaver
 adrian.kla...@gmail.com



Re: [GENERAL] pg_dump slow with bytea data

2011-03-07 Thread Merlin Moncure
On Mon, Mar 7, 2011 at 7:28 AM, chris r. chri...@gmx.net wrote:
 Merlin, first of all, thanks for your reply!

 hm.  where exactly is all this time getting spent?  Are you i/o bound?
 cpu bound? Is there any compression going on?
 Very good questions. pg_dump -F c compresses per default at a moderate
 level (manpage), whatever compression level 'moderate' actually means.
 Thus, yes, without explicitly activating it, we use compression.

 For testing, I inserted a fraction of our huge table with bytea content
 to the table 'testtable'. The next three outputs compare pg_dump for
 this table with default compression level, no compression and low-level
 compression on level 3. The time spent seems CPU-bound, as in the first
 test case 90-100% of a CPU-core is used all over the time.


 (default compression)
 time pg_dump -f /tmp/test.sql -F c -t testtable mydb
 real    0m27.255s
 user    0m26.383s
 sys     0m0.180s

 (low-level compression)
 time pg_dump -f /tmp/test.sql -F c -Z 3 -t testtable mydb
 real    0m8.883s
 user    0m8.112s
 sys     0m0.161s

 (no compression)
 time pg_dump -f /tmp/test.sql -F c -Z 0 -t testtable mydb
 real    0m1.892s
 user    0m0.074s
 sys     0m0.279s

 To summarize, in our case-scenario, moderate-level compression caused a
 speed-loss of factor 14.

right -- well in the short term it looks like you should consider
lowering or disabling compression.

 In another test, I'll compare pg_dump of a table with textual content
 that I created stupidly with:

  select (t/23.0)::text||(t/17.0)::text
  into testtable
  from generate_series(1, 100) t;

 Very much to my surprise, dumping this table did not show such a huge
 difference when using compression: a default-compressed pg_dump took
 2.4s, whereas a non-compressed pg_dump took 2.0s (which is merely factor
 1.2x). However, when expanding the series to 3 mio (instead of 1 mio),
 the compressed pg_dump took 7.0s, whereas a non-compressed pg_dump ran
 for 2.4s only (factor 3x). Does this show that compression takes
 relatively longer the more data it needs to compress? Memory consumption
 was less than 12 MB during testing.

Most compression algs don't use a lot of memory.  Also, as a general
rule of thumb, low entropy data compresses much faster than high
entropy data, so you can't really compare synthetic tests like that to
real world data, as you discovered.

Unfortunately, compression is something of a weak point for the
postgres project: there are much better bang/buck ratio algorithms out
there that we can't use because of licensing or patent concerns.
There are a lot of easy workarounds though (like rigging a command line
compressor post dump) so it isn't really a big deal for backups.  You
may want to investigate if your bytea columns are being TOAST
compressed and look there if you are having performance issues.
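
For example (just a sketch of that workaround -- database name and output path
are made up), you can dump uncompressed and compress outside of pg_dump:

pg_dump -F c -Z 0 mydb | gzip > /backups/mydb.dump.gz

and psql's \d+ testtable will show the Storage setting (PLAIN/MAIN/EXTERNAL/
EXTENDED) for each column if you want to check how the bytea values are being
toasted.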

 Maybe this is a
 performance issue inside pg_dump itself, not necessarily a text/binary
 issue (i have a hard time believing going from b64-hex is 10x slower
 on format basis alone).  Can you post times comparing manual COPY via
 text, manual COPY via binary, and pg_dump -F c?
 Again, valid points. As a next step, I'll compare the COPY variants.

 time psql mydb -c COPY testtable TO '/tmp/test.sql' WITH (FORMAT 'text');
 real    0m1.712s
 user    0m0.001s
 sys     0m0.004s

 In text format, the time it takes to COPY testtable seems very much like
 the time it takes to run pg_dump without compression.

 Interestingly, COPYing testtable with binary format gives another factor
 3.6x speedup:

 time psql mydb -c COPY testtable TO '/tmp/test.sql' WITH (FORMAT
 'binary');
 real    0m0.470s
 user    0m0.000s
 sys     0m0.005s

 As one may argue the table was too small to compare the runtime, I
 repeated this second comparison with two larger tables - both times
 showing between 5x-6x speedup with binary format! In either format the
 operation seemed CPU bound ( 95% of a core was taken).



 To summarize, I could speed up my backup by removing compression (factor
 14) and using COPY in binary format instead of pg_dump (factor 5 to
 factor 6). However, only the first option would keep data integrity. To
 have an easy integrity-save backup, IMHO, the second option can only be
 achieved by having an additional switch in pg_dump allowing for binary
 output.

Well, that's a pretty telling case, although I'd venture to say not
typical.  In average databases, I'd expect a 10-50% range of improvement
going from text to binary, which is often not enough to justify the
compatibility issues.  Does it justify a 'binary' switch to pg_dump?
I'd say so -- as long as the changes required aren't too extensive
(although you can expect disagreement on that point).  hm. i'll take a
look...

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG and dynamic statements in stored procedures/triggers?

2011-03-07 Thread Adrian Klaver
On Monday, March 07, 2011 6:45:11 am Durumdara wrote:
 Hi!
 
 Thanks!
 
 How do I create cursor or for select in PGSQL with dynamic way?
 
 For example
 
 :tbl = GenTempTableName()
 
 insert into :tbl...
 insert into :tbl...
 insert into :tbl...
 
 for select :part_id from :tbl begin
 exec 'select count(*) from subitems where id = ?' using :part_id into
 
 :sumof
 
 update :tbl set sumof = :sumof where part_id=:part_id
 end;
 
 Can you show me same example?

There are examples in the docs at the link provided. Though I would suggest 
reading the pl/pgsql documentation from the beginning to get an idea of its 
structure. 
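
As a rough sketch of the pattern in your pseudocode (assuming the generated
table has part_id and sumof columns, as in your example), pl/pgsql can loop
over a dynamically built query and run further dynamic statements inside the
loop:

CREATE OR REPLACE FUNCTION sum_parts(tbl text) RETURNS void AS $$
DECLARE
    rec record;
    cnt bigint;
BEGIN
    FOR rec IN EXECUTE 'SELECT part_id FROM ' || quote_ident(tbl) LOOP
        EXECUTE 'SELECT count(*) FROM subitems WHERE id = $1'
            INTO cnt USING rec.part_id;
        EXECUTE 'UPDATE ' || quote_ident(tbl)
                || ' SET sumof = $1 WHERE part_id = $2'
            USING cnt, rec.part_id;
    END LOOP;
END;
$$ LANGUAGE plpgsql;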

 
 Thanks:
 dd

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Logic AND between some strings

2011-03-07 Thread yagru_alvarez
I want to make a logic AND between some strings of 0s and 1s. Here
you have an example:

1- 01100010
2- 1100

I want to make a LOGIC AND between 01100010 and 1100.

I'm working with C++, I need some code to have an idea about how I
can perform that.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Web Hosting

2011-03-07 Thread matty jones
Thanks for the thoughts everyone.  I am looking at rimuhosting right now.  I
looked at godaddy and while they do allow me to install stuff and their
prices are very reasonable, I am a firm believer in you get what you pay
for.  I think it would be better for me to spend 30-40 a month than 5-10 a
month for hosting.  The reviews I have read so far about rimuhosting seem on
the whole very positive, and looking over their packages and the os/hardware
packages I am quite happy.  My project is nearing the point where I need it
hosted separately, not just locally on my box.  It is still a long way from
complete but it is getting to the point where I need a permanent structure
that I don't have to mimic across 2 or 3 development boxes.  I am planning
on releasing the project under some sort of open source license, so the
discount idea sounds good to me.

On Sun, Mar 6, 2011 at 3:11 AM, Brent Wood b.w...@niwa.co.nz wrote:

 Rimu hosting allows you to install whatever you want, including Postgres...
 which I have done before now. If your project is in support of Open Source
 software in any way, ask what discount they can offer, they have been pretty
 generous in that arena.

 http://rimuhosting.com/

 Like many hosting companies, they allow you to install  run Postgres, but
 do not provide support for it. Although given the technical competencies of
 their support staff, you may find one of them will be able to help anyway.

 HTH,

   Brent Wood




 Brent Wood
 DBA/GIS consultant
 NIWA, Wellington
 New Zealand

  Uwe Schroeder u...@oss4u.com 03/06/11 7:05 PM 

 Godaddy virtual hosting does in fact support postgresql. You have a root
 account on the virtual server and you can install whatever you want.

 I run several servers with them and all have postgresql, some virtual, some

 dedicated servers.

 Haven't tried their shared servers though, so I can't say anything about
 those.

 Hope that helps.

 PS: for a company that size their customer support isn't too shabby either.

 Uwe


  Gentlemen-
 
  Go-daddy *claims* to support postgres
  http://help.godaddy.com/article/2330
 
  YMMV
  Martin--
  __
  Disclaimer and confidentiality note
 
  This message is confidential. If you are not the intended recipient, we
  kindly ask you to notify us. Any unauthorized forwarding or copying is
  prohibited. This message serves only the exchange of information and has
  no legally binding effect. Given that e-mail can easily be manipulated,
  we accept no liability for its content.
 
   Date: Sat, 5 Mar 2011 16:40:57 -0800
   Subject: Re: [GENERAL] Web Hosting
   From: m...@kitchenpc.com
   To: urlu...@gmail.com
   CC: pgsql-general@postgresql.org
  
   On Sat, Mar 5, 2011 at 1:08 PM, matty jones urlu...@gmail.com wrote:
I already have a domain name but I am looking for a hosting company
that I can use PG with. The few I have contacted have said that they
support MySQL only and won't give me access to install what I need or
they want way to much. I don't need a dedicated host which so far
seems the only way this will work, all the companies I have
 researched
so far that offer shared hosting or virtual hosting only use MySQL. I
will take care of the setup and everything myself but I have already
written my code using PG/PHP and I have no intention of switching.
Thanks.
  
   Well there's this list:
  
   http://www.postgresql.org/support/professional_hosting
  
   Also, maybe something like Amazon EC2 if you want your own box? I
   think the small instances are even free..
  
   Mike




 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general

  Please consider the environment before printing this email.

 NIWA is the trading name of the National Institute of Water  Atmospheric
 Research Ltd.



Re: [GENERAL] Web Hosting

2011-03-07 Thread Ogden

On Mar 6, 2011, at 2:11 AM, Brent Wood wrote:

 Rimu hosting allows you to install whatever you want, including Postgres... 
 which I have done before now. If your project is in support of Open Source 
 software in any way, ask what discount they can offer, they have been pretty 
 generous in that arena.
 
 http://rimuhosting.com/



Something similar is http://www.linode.com/ and even http://www.slicehost.com. 
I personally have not used either but linode comes with great recommendations 
from friends. 

Ogden

Re: [GENERAL] pg_dump slow with bytea data

2011-03-07 Thread Merlin Moncure
On Mon, Mar 7, 2011 at 8:52 AM, Merlin Moncure mmonc...@gmail.com wrote:
 Well, that's a pretty telling case, although I'd venture to say not
 typical.  In average databases, I'd expect 10-50% range of improvement
 going from text-binary which is often not enough to justify the
 compatibility issues.  Does it justify a 'binary' switch to pg_dump?
 I'd say so -- as long as the changes required aren't to extensive
 (although you can expect disagreement on that point).  hm. i'll take a
 look...

The changes don't look too bad, but are not trivial.

On the backup side, it just does a text/binary agnostic copy direct to
stdout.  You'd need to create a switch of course, and I'm assuming add
a flag 'isbinary' to ArchiveHandle and possibly a stream length to the
tocEntry for each table (or should this just be a header to the binary
stream?).  On the restore side it's a bit more complicated -- the
current code is a complete text monster, grepping each line for an
unquoted newline, assuming ascii '0' is the end of the data, etc.  You
would need a completely separate code path for binary, but it would be
much smaller and simpler (and faster!).  There might be some other
issues too, I just did a cursory scan of the code.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Logic AND between some strings

2011-03-07 Thread Vick Khera
On Mon, Mar 7, 2011 at 10:14 AM, yagru_alvarez
jmalva...@estudiantes.uci.cu wrote:
 I wanto to make a LOGIC AND between 01100010 and 1100.

 I' m working with C++, I need some code to have an idea about how I
 can perform that.


You want to do this in C++ or in SQL?  In SQL it looks like this:

select b'01100010' & b'1100';

If you want to do this in C++, ask your teacher for help with your homework.
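
Note that the two operands have to be the same length for & to work on bit
strings; with equal-length values (made up for illustration):

select b'01100010' & b'11000111';  -- returns 01000010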

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] First production install - general advice

2011-03-07 Thread runner

 

 I'm going to go live with my first production install of PostgreSQL 9.0 in 
about a week.  I've done a LOT of reading on the internet and I've purchased 
two very good reference books and actually read them:

PostgreSQL 9 Administration Cookbook
PostgreSQL 9.0 High Performance

I'd like to know if any of you have ever installed a PostgreSQL database for 
production use and then found something you wish you had done differently after 
the fact.   Maybe your directory naming scheme, your backup strategy, 
environment variable settings etc.  In this last week before we go live I'm 
hoping to get a few last minute tidbits of information that might help me avoid 
some common problems.

Thank you,
Rick




Re: [GENERAL] First production install - general advice

2011-03-07 Thread Ray Stell
On Mon, Mar 07, 2011 at 12:34:19PM -0500, runner wrote:
 
 I'd like to know if any of you have ever installed a PostgreSQL database for 
 production use and then found something you wish you had done differently 
 after the fact.   


Test and document your disaster recovery plan.  You don't want to be trying
to figure it out when you need it.  It's what most often gets left out in the
scurry.   I'm pretty sure mine is dusty and I regret that.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Understanding of LOCK and pg_sleep interaction

2011-03-07 Thread David Johnston
Hi,

 

In trying to set up a test for a LOCK 'table' algorithm, I attempt to execute
two transactions where the first one issues a pg_sleep(10) while 'table' is
locked and the second one attempts LOCK 'table' during the time when the
pg_sleep is executing.  When pg_sleep() returns in the first transaction, the
subsequent statement is not executed.  Meanwhile, the second transaction
continues to wait for the lock.  Thus, a deadlock has occurred.  I am doing
my testing within PostGreSQL Maestro running as a script and issuing BEGIN
and COMMIT statements around the desired transaction commands.

 

I would expect the first transaction to finish following the 10 second sleep,
at which point the second transaction would be able to start.

 

PostgreSQL 9.0.3 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real
(Ubuntu 4.4.3-4ubuntu5) 4.4.3, 64-bit

 

Either script run alone works just fine - it is just when run in tandem as
described that neither is able to complete.

 

What am I doing/understanding incorrectly or is this undesirable behavior?

 

Thanks,

 

David J.

 

===
--Transaction 1

begin;

delete from locktest;

LOCK locktest;

INSERT INTO locktest (scope, value) VALUES ('TEST','1');
INSERT INTO locktest (scope, value) VALUES ('TEST','2');

select pg_sleep(10);

rollback; --or commit [This doesn't execute if I begin transaction 2]

pg_stat_activity: IDLE in transaction

==
--Transaction 2

begin;

LOCK locktest; --[This never completes if executed during pg_sleep(10)]

INSERT INTO locktest (scope, value) VALUES ('TEST','3');

commit;

pg_stat_activity: LOCK locktest

===

 

Attempt at pg_locks results; executed AFTER the 10 second pg_sleep returned
(columns that were empty for every row - page, tuple, classid, objid,
objsubid - are omitted):

 locktype      | database | relation | virtualxid | transactionid | virtualtransaction |  pid  | mode                | granted
---------------+----------+----------+------------+---------------+--------------------+-------+---------------------+---------
 transactionid |          |          |            | 101091        | 15/359             | 13752 | ExclusiveLock       | True
 relation      | 623943   | 853698   |            |               | 15/359             | 13752 | RowExclusiveLock    | True
 relation      | 623943   | 853698   |            |               | 15/359             | 13752 | AccessExclusiveLock | True
 relation      | 623943   | 10985    |            |               | 18/153             | 13770 | AccessShareLock     | True
 relation      | 623943   | 853696   |            |               | 15/359             | 13752 | AccessShareLock     | True
 virtualxid    |          |          | 18/153     |               | 18/153             | 13770 | ExclusiveLock       | True
 virtualxid    |          |          | 15/359     |               | 15/359             | 13752 | ExclusiveLock       | True
 relation      | 623943   | 853702   |            |               | 15/359             | 13752 | RowExclusiveLock    | True
 virtualxid    |          |          | 17/438     |               | 17/438             | 13754 | ExclusiveLock       | True
 relation      | 623943   | 853698   |            |               | 17/438             | 13754 | AccessExclusiveLock | False



Re: [GENERAL] Web Hosting

2011-03-07 Thread Benjamin Smith
Try this: 

http://lmgtfy.com/?q=web+hosting+postgresql



On Sunday, March 06, 2011 11:33:01 am Eduardo wrote:
 At 17:24 06/03/2011, you wrote:
 On 3/5/2011 4:08 PM, matty jones wrote:
 I already have a domain name but I am looking for a hosting company
 that I can use PG with.  The few I have contacted have said that
 they support MySQL only and won't give me access to install what I
 need or they want way to much.  I don't need a dedicated host which
 so far seems the only way this will work, all the companies I have
 researched so far that offer shared hosting or virtual hosting only
 use MySQL.  I will take care of the setup and everything myself but
 I have already written my code using PG/PHP and I have no intention
 of switching.
 
 Thanks.
 
 http://hub.org/
 
 +1




Re: [GENERAL] Web Hosting

2011-03-07 Thread Matt
Thanks, but I tried that originally and the companies that come up have
either poor ratings, won't support postgres, won't allow me the freedom to
run my own software, or after talking with them I realized there were PEBKAC
issues with their support staff.  I also, as stated earlier, won't go with
the cheap, low end companies due to common sense issues such as server load,
service/support issues, and cheap hardware.

On Mon, Mar 7, 2011 at 1:00 PM, Benjamin Smith li...@benjamindsmith.comwrote:

  Try this:

 http://lmgtfy.com/?q=web+hosting+postgresql

  On Sunday, March 06, 2011 11:33:01 am Eduardo wrote:

  At 17:24 06/03/2011, you wrote:

  On 3/5/2011 4:08 PM, matty jones wrote:

  I already have a domain name but I am looking for a hosting company
  that I can use PG with. The few I have contacted have said that
  they support MySQL only and won't give me access to install what I
  need or they want way to much. I don't need a dedicated host which
  so far seems the only way this will work, all the companies I have
  researched so far that offer shared hosting or virtual hosting only
  use MySQL. I will take care of the setup and everything myself but
  I have already written my code using PG/PHP and I have no intention
  of switching.

  Thanks.

  http://hub.org/

  +1




Re: [GENERAL] Why count(*) doest use index?

2011-03-07 Thread Merlin Moncure
On Sun, Mar 6, 2011 at 2:57 PM, Glenn Maynard gl...@zewt.org wrote:
 On Sun, Mar 6, 2011 at 5:41 AM, Martijn van Oosterhout klep...@svana.org
 wrote:

 If it's really really important there are ways you can use trigger
 tables and summary views to achieve the results you want. Except it's
 expensive and when people are told that all of the sudden the count(*)
 performance isn't so important any more. :)

 That's often perfectly fine, with read-heavy, single-writer workloads.

 I definitely wish there was a way to create indexes to track counters on
 various types of queries, even if it eliminates write concurrency on
 affected writes.  Doing it by hand is a pain.

beyond what the stats system does you mean?

If you aren't interested in a high concurrency count it really isn't all
that difficult -- just push table-modifying queries into a procedure
and grab rows affected.  A row level trigger can also do it but
performance will suck unless you are already doing all row by row
processing (in which case your performance already sucks).

The way to do this with high concurrency is like the above, but
insert (not update) rows affected into a table modification log that
is rolled up on a time interval or on user demand so you don't serialize
access w/every statement.  Or you dispense with all the fuss and grab a
free'n'easy approximate count from the stats system which is really
what people want 99% of the time.

In the old days this was a much more complicated problem because to eke
every bit of OLTP performance out of the server you had to disable the
stats collector. Today you don't, so let it do your work for you.
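
For example (a sketch -- substitute your own table name), the approximate
counts the stats system already keeps can be read directly:

select reltuples::bigint from pg_class where relname = 'mytable';
select n_live_tup from pg_stat_user_tables where relname = 'mytable';

Both are estimates, not exact counts, but they cost next to nothing.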

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Understanding of LOCK and pg_sleep interaction

2011-03-07 Thread Tom Lane
David Johnston pol...@yahoo.com writes:
 In trying to setup a test for a LOCK 'table' algorithm I attempt to execute
 two transactions where the first one issues a pg_sleep(10) while 'table' is
 locked and the second one attempts LOCK 'table' during the time when the
 pg_sleep is executing.  When pg_sleep() returns in the first transaction the
 subsequent statement is not executed.  Meanwhile, the second transaction
 continues to wait for the lock.  Thus, a deadlock has occurred.  I am doing
 my testing within PostGreSQL Maestro running as a script and issuing BEGIN
 and COMMIT statements around the desired transaction commands.

I don't know anything about PostGreSQL Maestro, but what it sounds like
from this description is that it's not committing the transaction right
away when the script finishes.  You might try turning on log_statement
on the server side so you can see exactly what commands are being sent
and when.
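
For instance (assuming you can edit the server configuration), setting

log_statement = 'all'

in postgresql.conf and reloading (pg_ctl reload, or SELECT pg_reload_conf();)
will log every statement the server receives.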

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] unexpected EOF on client connection vs 9.0.3

2011-03-07 Thread Piotr Czekalski

Hello there,

I'm getting regular Postgres log entries with the following error:
2011-03-07 01:00:01 CET LOG:  could not receive data from client: No 
connection could be made because the target machine actively refused it.

2011-03-07 01:00:01 CET LOG:  unexpected EOF on client connection

They always appear together.
The server is PostgreSQL 9.0.3, compiled by Visual C++ build 1500, 
64-bit running on Windows 2008 R2 SP1 x64, the client is a ASP.NET 
application running npgsql.


I've checked and verified that all connections are closed within the 
code. What's more, the problem appeared just as I moved the server 
from Fedora Linux x86_64 running Postgres 8.4.2 to Windows and 9.0.3 
(details above), thus I conclude this is not a client problem (the 
failure didn't occur on Linux).
From time to time the connections are closed immediately and I get 
ASP.NET errors.


Thanks for any help in advance.

Piotr Czekalski

P.S. The connection string is:
...Server=127.0.0.1;Port=5432;User Id=user here;Password=password 
here;Database=database here;CommandTimeout=360; providerName=Npgsql...

--

--
TECHBAZA.PL Sp. z o.o.
Technologie WEB, eDB  eCommerce
tel. (+4832) 7186081
fax. (+4832) 7003289
email: bi...@techbaza.pl



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Understanding of LOCK and pg_sleep interaction

2011-03-07 Thread David Johnston
OK, so I tried the same scripts with pgAdmin III and they work as expected.

Sorry for the noise.

David J.


-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Monday, March 07, 2011 1:20 PM
To: David Johnston
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Understanding of LOCK and pg_sleep interaction 

David Johnston pol...@yahoo.com writes:
 In trying to setup a test for a LOCK 'table' algorithm I attempt to 
 execute two transactions where the first one issues a pg_sleep(10) 
 while 'table' is locked and the second one attempts LOCK 'table' 
 during the time when the pg_sleep is executing.  When pg_sleep() 
 returns in the first transaction the subsequent statement is not 
 executed.  Meanwhile, the second transaction continues to wait for the 
 lock.  Thus, a deadlock has occurred.  I am doing my testing within 
 PostGreSQL Maestro running as a script and issuing BEGIN and COMMIT
statements around the desired transaction commands.

I don't know anything about PostGreSQL Maestro, but what it sounds like from
this description is that it's not committing the transaction right away when
the script finishes.  You might try turning on log_statement on the server
side so you can see exactly what commands are being sent and when.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Logic AND between some strings

2011-03-07 Thread Bill Thoen
Just to make sure, you're asking for the logical AND, not the bitwise 
AND? In other words you're not talking about getting into bit shifting 
with << and >> and masking with &?


For the logical AND, you need to use expressions that evaluate to TRUE 
or FALSE, and follow the rules in this truth table:


Expr 1   Expr 2   AND returns
true     true     true
true     false    false
false    true     false
false    false    false

like 01100010 == 1100 AND 01100010 <> 1100 would evaluate to 
false AND true (depending on how you interpret these bitstrings), and so 
that evaluates to false. If you're dealing with bitstrings (text made up 
of only zeros and ones), then I'd implement this by just using text 
comparisons, since bitstrings are just text and have the same 
comparison behavior as the unsigned integers they could represent.






On 3/7/2011 7:25 AM, yagru_alvarez wrote:

I want to make a Logic AND between some strings of 0s and 1s .. Here
you have an example:

1-  01100010
2-  1100

I wanto to make a LOGIC AND between 01100010 and 1100.

I' m working with C++, I need some code to have an idea about how I
can perform that.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why count(*) doest use index?

2011-03-07 Thread Scott Marlowe
On Sun, Mar 6, 2011 at 3:41 AM, Martijn van Oosterhout
klep...@svana.org wrote:
 The other option is visibility data in the index. Doubles the size of
 your indexes though.

Also requires that both table and index be locked while you update both so
you don't get race conditions, so it has a real performance impact there
as well.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Web Hosting

2011-03-07 Thread Aleksey Tsalolikhin
On Sat, Mar 5, 2011 at 5:12 PM, Ogden li...@darkstatic.com wrote:

 On Mar 5, 2011, at 7:07 PM, Bret Fledderjohn wrote:

  I am using A2 Hosting (www.a2hosting.com ) which offers 8.4...  They are
 inexpensive and so far reliable.


 Wow, that's super cheap. Is there some catch - $5.57 / month for unlimited
 everything? No hidden anything?
 Ogden

I use A2 Hosting as well, they've been very good value for money.   My
site's been down maybe a couple of times a year, but the support's been
great, and they do support a lot of different software.  Thumbs up.

Best,
-at

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why count(*) doest use index?

2011-03-07 Thread Glenn Maynard
On Mon, Mar 7, 2011 at 1:13 PM, Merlin Moncure mmonc...@gmail.com wrote:

 On Sun, Mar 6, 2011 at 2:57 PM, Glenn Maynard gl...@zewt.org wrote:
  That's often perfectly fine, with read-heavy, single-writer workloads.
 
  I definitely wish there was a way to create indexes to track counters on
  various types of queries, even if it eliminates write concurrency on
  affected writes.  Doing it by hand is a pain.

 beyond what the stats system does you mean?


The stats system only helps for the most basic case--counting the number of
rows in a table.  In my experience that's not very common; most of the time
it's counting total results from some more interesting query, eg. for
pagination.  In my particular case, I'm caching results for SELECT COUNT(*),
expr2 FROM table WHERE expr GROUP BY expr2 (for a very limited set of
expressions).

If you aren't interested in high concurrency count it really isn't all
 that difficult -- just push table modifying queries into a procedure
 and grab rows affected.  Row level trigger can also do it but
 performance will suck unless you are already doing all row by row
 processing (in which case your performance already sucks).


Row triggers are fast enough for my case--it's a read-heavy workload, so
it's okay to take a bit more time inserting new data.  It's easier to ensure
consistency with row triggers, since they can be tested independently of
anything modifying the table.

-- 
Glenn Maynard


Re: [GENERAL] PG and dynamic statements in stored procedures/triggers?

2011-03-07 Thread Bill Thoen

On 3/7/2011 7:55 AM, Adrian Klaver wrote:

On Monday, March 07, 2011 6:45:11 am Durumdara wrote:

Hi!

Thanks!

How do I create cursor or for select in PGSQL with dynamic way?

For example

:tbl = GenTempTableName()

insert into :tbl...
insert into :tbl...
insert into :tbl...

for select :part_id from :tbl begin
 exec 'select count(*) from subitems where id = ?' using :part_id into

:sumof

 update :tbl set sumof = :sumof where part_id=:part_id
end;

Can you show me same example?

There are examples in the docs at the link provided. Though I would suggest
reading the pl/pgsql documentation from the beginning to get an idea of its
structure.
You won't find this easy. I've spent an awful lot of time the last two 
days trying to figure out how to pass variables between SQL and 
plpgsql, and the examples don't cover all the things you'd think you 
should be able to do but can't, because Postgres SQL doesn't have variables. 
What it does have comes from psql, and those seem to be more like text 
replacement placeholders than variables you can evaluate.


For example, I have a need for a tool that gets an initial record id 
from the user, then it looks up that key and finds the primary keys of 
two other tables related to the first key, then it looks those tables up 
and displays the data from each side by side so I can check the 
differences between the records. (Basically, it's a case of data from 
two vendors that carry a common key, and I'm just spot checking). I've 
been using interactive psql, but I thought an app as simple as this is 
in concept wouldn't be so hard to do, but it is if you don't know enough 
of what's in the API - like, isn't there a function to enumerate a table's 
attributes? Or how do you capture the results of a select that calls a 
function in SQL? (e.g.:

\set myResults

:myResults = SELECT myFunction();
-- this won't fly; nor will this:
SELECT INTO :myResults myFunction();

Anyway, I'm beginning to see that I had some misconceptions about what 
you can do within SQL and what you're better off doing in plpgsql. Or C. 
Read the whole section on variables in the manual. That's very good 
advice. In fact, peruse it. Because if you read it lightly, you'll have 
to go over it again and again.


But after reading your note, dynamic SQL seems like it might be just 
what I'm looking for too. Didn't realize it was an option, since I see 
it's documented near the end of the manual, and there's only so much 
RTFMing I can do at a sitting, so that's all new territory to me. But if 
it works like you've sketched out here... well I'm going to try it and see.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why count(*) doest use index?

2011-03-07 Thread Merlin Moncure
On Mon, Mar 7, 2011 at 3:16 PM, Glenn Maynard gl...@zewt.org wrote:
 On Mon, Mar 7, 2011 at 1:13 PM, Merlin Moncure mmonc...@gmail.com wrote:

 On Sun, Mar 6, 2011 at 2:57 PM, Glenn Maynard gl...@zewt.org wrote:
  That's often perfectly fine, with read-heavy, single-writer workloads.
 
  I definitely wish there was a way to create indexes to track counters on
  various types of queries, even if it eliminates write concurrency on
  affected writes.  Doing it by hand is a pain.

 beyond what the stats system does you mean?

 The stats system only helps for the most basic case--counting the number of
 rows in a table.  In my experience that's not very common; most of the time
 it's counting total results from some more interesting query, eg. for
 pagination.  In my particular case, I'm caching results for SELECT COUNT(*),
 expr2 FROM table WHERE expr GROUP BY expr2 (for a very limited set of
 expressions).

SELECT COUNT(*) FROM table WHERE expr;

will use an index (assuming expr is optimizable and is worthwhile to
optimize).  Your case might be interesting for cache purposes if expr2
is expensive, but has nothing to do with postgres index usage via
count(*).  mysql/myisam needs to scan as well in this case -- it
can't magically 'look up' the value as it can for the unfiltered
(very special) case... it only differs from pg in that it can skip
the heap visibility check because all records are known good (and pg is
moving towards optimizing this case in mostly read only workloads!)

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why count(*) doest use index?

2011-03-07 Thread Dmitriy Igrishin
2011/3/8 Merlin Moncure mmonc...@gmail.com

 On Mon, Mar 7, 2011 at 3:16 PM, Glenn Maynard gl...@zewt.org wrote:
  On Mon, Mar 7, 2011 at 1:13 PM, Merlin Moncure mmonc...@gmail.com
 wrote:
 
  On Sun, Mar 6, 2011 at 2:57 PM, Glenn Maynard gl...@zewt.org wrote:
   That's often perfectly fine, with read-heavy, single-writer workloads.
  
   I definitely wish there was a way to create indexes to track counters
 on
   various types of queries, even if it eliminates write concurrency on
   affected writes.  Doing it by hand is a pain.
 
  beyond what the stats system does you mean?
 
  The stats system only helps for the most basic case--counting the number
 of
  rows in a table.  In my experience that's not very common; most of the
 time
  it's counting total results from some more interesting query, eg. for
  pagination.  In my particular case, I'm caching results for SELECT
 COUNT(*),
  expr2 FROM table WHERE expr GROUP BY expr2 (for a very limited set of
  expressions).

 SELECT COUNT(*) FROM table WHERE expr;

 will use index (assuming expr is optimizable and is worth while to
 optimize).  Your case might be interesting for cache purposes if expr2
 is expensive, but has nothing to do with postgres index usage via
 count(*).  mysql/myisam  needs to scan as well in this case -- it
 can't magically 'look up' the value as it can for the in filtered
 (very special) case...

Exactly!

 it only differs from pg in that it can skip
 heap visibility check because all records are known good (and pg is
 moving towards optimizing this case in mostly read only workloads!)

 merlin

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
// Dmitriy.


Re: [GENERAL] PG and dynamic statements in stored procedures/triggers?

2011-03-07 Thread Adrian Klaver
On Monday, March 07, 2011 1:16:11 pm Bill Thoen wrote:

 
 For example, I have a need for a tool that gets an initial record id
 from the user, then it looks up that key and finds the primary keys of
 two other tables related to the firstkey, then it looks those tables up
 and displays the data from each side by side so I can check the
 differences between the records. (Basically, it's a case of data from
 two vendors that carry a common key, and I'm just spot checking). I've
 been using interactive psql, but I thought an app as simple as this is
 in concept wouldn't be so hard to do, but it is if you don't know enough
 of what's in the API like, isn't there a function to enumerate a table's
 attributes?. Or how do you capture the results of a select that calls a
 function in SQL? (e.g.:
 \set myResults
 
 :myResults = SELECT myFunction();
 
 -- this won't fly; nor will this:
 SELECT INTO :myResults myFunction();

A possible solution from here:
http://www.postgresql.org/docs/9.0/interactive/sql-createtableas.html


PREPARE recentfilms(date) AS
  SELECT * FROM films WHERE date_prod > $1;
CREATE TEMP TABLE films_recent WITH (OIDS) ON COMMIT DROP AS
  EXECUTE recentfilms('2002-01-01');


 
 Anyway, I'm begining to see that I had some misconceptions about what
 you can do within SQL and what you're better off doing in plpgsql. Or C.
 Read the whole section on variables in the manual. That's very good
 advice. In fact, peruse it. Because if you read it lightly, you'll have
 to to go over it again and again.
 
 But after reading your note, dynamic SQL seems like it might be just
 what I'm looking for too. Didn't realize it was an option, since I see
 it's documented near the end of the manual, and there's only so much
 RTFMing I can do at a sitting, so that's all new territory to me. But if
 it works like you've sketched out here... well I'm going to try it and see.

On Postgres 9.0+ there is also DO
http://www.postgresql.org/docs/9.0/interactive/sql-do.html
-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG and dynamic statements in stored procedures/triggers?

2011-03-07 Thread Merlin Moncure
On Mon, Mar 7, 2011 at 3:16 PM, Bill Thoen bth...@gisnet.com wrote:
 On 3/7/2011 7:55 AM, Adrian Klaver wrote:

 On Monday, March 07, 2011 6:45:11 am Durumdara wrote:

 Hi!

 Thanks!

 How do I create cursor or for select in PGSQL with dynamic way?

 For example

 :tbl = GenTempTableName()

 insert into :tbl...
 insert into :tbl...
 insert into :tbl...

 for select :part_id from :tbl begin
     exec 'select count(*) from subitems where id = ?' using :part_id into

 :sumof

     update :tbl set sumof = :sumof where part_id=:part_id
 end;

 Can you show me same example?

 There are examples in the docs at the link provided. Though I would
 suggest
 reading the pl/pgsql documentation from the beginning to get an idea of
 its
 structure.

 You won't find this easy. I've spent an awful lot of time the last two days
 trying to figure out how to pass  variables between SQL and plpgsql, and the
 examples don't cover all the things you'd think you should be able to do but
 because Postgres SQL doesn't have variables. What it does have comes from
  psql and they seem to be more like text replacement placeholders than
 variables you can evaluate.

 For example, I have a need for a tool that gets an initial record id from
 the user, then it looks up that key and finds the primary keys of two other
 tables related to the firstkey, then it looks those tables up and displays
 the data from each side by side so I can check the differences between the
 records. (Basically, it's a case of data from two vendors that carry a
 common key, and I'm just spot checking). I've been using interactive psql,
 but I thought an app as simple as this is in concept wouldn't be so hard to
 do, but it is if you don't know enough of what's in the API like, isn't
 there a function to enumerate a table's attributes?. Or how do you capture
 the results of a select that calls a function in SQL? (e.g.:
 \set myResults

 :myResults = SELECT myFunction();
 -- this won't fly; nor will this:
 SELECT INTO :myResults myFunction();

 Anyway, I'm begining to see that I had some misconceptions about what you
 can do within SQL and what you're better off doing in plpgsql. Or C. Read
 the whole section on variables in the manual. That's very good advice. In
 fact, peruse it. Because if you read it lightly, you'll have to to go over
 it again and again.

 But after reading your note, dynamic SQL seems like it might be just what
 I'm looking for too. Didn't realize it was an option, since I see it's
 documented near the end of the manual, and there's only so much RTFMing I
 can do at a sitting, so that's all new territory to me. But if it works like
 you've sketched out here... well I'm going to try it and see.

correct. psql variables are completely client side and IMO, perhaps
controversially, useless. for non-trivial processing you should dip
into the server for pl/pgsql, perhaps the finest data processing
language ever invented, or the application side if you need to manage
transaction state.

recent postgres supports 'DO' commands, allowing you to access pl/pgsql
power without creating a function first.
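
for example (a trivial sketch; the table name is invented):

do $$
declare
  tbl text := 'scratch';  -- stand-in for a generated temp table name
begin
  execute 'create temp table ' || quote_ident(tbl) || ' (id int)';
  execute 'insert into ' || quote_ident(tbl) || ' select generate_series(1,10)';
end
$$;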

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] First production install - general advice

2011-03-07 Thread Michael Black

One thing that comes to mind...  Have you tested the install process from start 
to end?
Other than that, a week before go-live is a time to relax, exhale, prop 
your feet on the desk, and visualize the process, thinking of every step, 
automated and manual: what could happen here, and is a resolution in place to 
get past it?  If there is not a resolution, and if it is not an easy one to work 
up, write it down and do it manually if the issue arises.  At this stage of 
the project, if all the work is done, tested and accepted, there is not really 
much else that you can do other than relax.

To: pgsql-general@postgresql.org
Subject: [GENERAL] First production install - general advice
Date: Mon, 7 Mar 2011 12:34:19 -0500
From: run...@winning.com

 I'm going to go live with my first production install of PostgreSQL 9.0 in 
about a week.  I've done a LOT of reading on the internet and I've purchased 
two very good reference books and actually read them:

PostgreSQL 9 Administration Cookbook
PostgreSQL 9.0 High Performance

I'd like to know if any of you have ever installed a PostgreSQL database for 
production use and then found something you wish you had done differently after 
the fact.   Maybe your directory naming scheme, your backup strategy, 
environment variable settings etc.  In this last week before we go live I'm 
hoping to get a few last minute tidbits of information that might help me avoid 
some common problems.

Thank you,
Rick


Re: [GENERAL] Why count(*) doest use index?

2011-03-07 Thread Glenn Maynard
On Mon, Mar 7, 2011 at 4:35 PM, Merlin Moncure mmonc...@gmail.com wrote:

 SELECT COUNT(*) FROM table WHERE expr;

 will use index (assuming expr is optimizable and is worth while to
 optimize).  Your case might be interesting for cache purposes if expr2
 is expensive, but has nothing to do with postgres index usage via
 count(*).  mysql/myisam  needs to scan as well in this case -- it
 can't magically 'look up' the value as it can for the in filtered
 (very special) case... it only differs from pg in that it can skip
 heap visibility check because all records are known good (and pg is
 moving towards optimizing this case in mostly read only workloads!)


It'll do an index scan, but it's still a scan--linear time over the size of
the set.  That's too expensive for many cases.

My particular case is something like this:

  SELECT COUNT(*), event_time::date FROM events
  WHERE event_time::date >= '2011-01-01' AND event_time::date < '2011-02-01'
    AND user=50
  GROUP BY event_time::date;

An index on events(user, event_time::date) could optimize this, eg.
effectively maintaining a count of matching rows for each (user, day)
tuple--which is ultimately what I'm doing manually with triggers.  Of
course, it would have a significant cost, in some combination of complexity,
index size and write concurrency, and couldn't be the default behavior for
an index.
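
For what it's worth, the hand-rolled version looks roughly like this (a sketch
only -- the summary table name is invented and the user column is assumed to
be called user_id):

CREATE TABLE event_counts (
    user_id int,
    day     date,
    n       bigint,
    PRIMARY KEY (user_id, day)
);

CREATE OR REPLACE FUNCTION events_count_trg() RETURNS trigger AS $$
BEGIN
    UPDATE event_counts SET n = n + 1
        WHERE user_id = NEW.user_id AND day = NEW.event_time::date;
    IF NOT FOUND THEN
        -- fine for a single writer; concurrent first inserts for a new
        -- (user_id, day) pair would need more care
        INSERT INTO event_counts VALUES (NEW.user_id, NEW.event_time::date, 1);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_count AFTER INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_count_trg();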

-- 
Glenn Maynard


Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-03-07 Thread Aleksey Tsalolikhin
On Fri, Mar 4, 2011 at 7:19 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
 On Friday, March 04, 2011 5:11:04 pm Aleksey Tsalolikhin wrote:
 On Fri, Mar 4, 2011 at 4:45 PM, Adrian Klaver adrian.kla...@gmail.com 
 wrote:
  On Friday, March 04, 2011 2:03:23 pm Aleksey Tsalolikhin wrote:
  On Fri, Mar 4, 2011 at 7:53 AM, Adrian Klaver adrian.kla...@gmail.com
 wrote:
  What is the data being stored in the table?
 
  For the main part, it's an XML file, we store it in the third field.
  Our XML files are 13KB - 48 KB in length.
 
  And you are positive that field has not had its storage changed to
  something other than EXTENDED?

 Quite.  I just double checked on both servers at it is EXTENDED for
 everything but
 the timestamp (5th field) which is PLAIN (on both servers).

  From my observation, when the table was under 30 GB in size,
  TOAST compression worked fine; when it got above 35 GB in size,
  now TOAST compression is not working.
 
  More questions:)
  How do you know that?
  I thought the size problem only came to light when you tried to set up
  another server.
  Was there some sort of monitoring going on previous to setting up the new
  server?
  Anything else happen around that time?

 OK, I have to withdraw my observation.  Let me refine it (which I am
 afraid makes
 it less useful):  I've done pg_dump/restore of the database earlier, when
 it was smaller ( 30 GB) and did not notice such a doubling in size; I
 don't think it occurred.  Certainly the last time we moved the database
 from the DR back to primary site, it did not double in size from 1 GB (it
 was around 1 GB then).

 Here is what I did see:  we've had Slony replication running for a
 while (over half
 a year) from Primary to DR; and file system utilization on Primary and
 DR was about
 the same.  (around 75%).

 Also, I had done some pg_dump's / pg_restore's from DR to Dev and Stage,
 and the database size was about 1:1.

 But most recently, I shut down replication, and modified the cluster config
 (added several tables and sequences; plus some tables were modified so
 I wanted to get a clean start on replicating them).  I removed the slony
 schemas and re-created a Slony replication set -- my filesystem was 75%
 full on the master,
 and it hit 100% on the slave!  So I lost my slave!

 Then I tried pg_dump/pg_restore and noticed the same thing, that one table
 doubles in size.

 The last time I did a full Slony re-sync like this, the database was around
 30 GB.  Now we're up to 40-50 GB and we hit the 1:2 factor.

 I can't think of anything else happening around this time...

 I'm going to try splitting the pg_dump file (40 GB) in half, and load the
 20 GB file, and see how much space it takes up in the database; I'm
 curious if I can replicate the 1:2 swelling with this smaller table.

 Aleksey


 Going over the saga to date.

 1)Slony replication running between production server and DR server for half a
 year or so.
        Were the Postgres versions the same between servers?

Yes, 8.4.4


 2) Replication shut down, cluster configuration modified
        Assuming that the problem table was not one of the ones added, correct?

Correct.




 3)In your first email you mentioned upgrading the production server.
        Was this a version upgrade?
                Major or minor upgrade?
                From what to what?
        Otherwise what was the upgrade?

Hardware upgrade only.  Postgres still 8.4.4.


 4)Dump/restore to Dev and Stage seem to be alright.

No, the data doubles in size in the course of the restore.
To any/all of my environments.

 5)The restore to DR server is showing size growth of 2x.
        Is this a fresh instance of Postgres or is it the instance that was 
 under
 Slony replication previously?

This is the instance that was under Slony replication previously.

Dev had been under Slony replication previously.  Stage had not.


Experiment 1:
hypothesis: something about how large my table has grown is causing
the TOAST compression to fail on COPY.
test: pg_dump the big table, cut the dump file in half using
/bin/split, add \. at the end of the file, and load the top half.
result: database is 50 GB in size.  hypothesis proven false.

Experiment 2:
hypothesis: something about Slony is causing the TOAST compression to
be disabled on COPY.
test: load the 50% dump file from experiment 1 above into our Stage
database, which was never touched by Slony.
result: database is 50 GB in size.  hypothesis proven false.
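
As a cross-check (a sketch only; substitute the real table name for
'big_table'), comparing heap vs. TOAST-plus-index sizes and the per-column
storage settings on both servers should show where the extra space goes:

SELECT pg_size_pretty(pg_relation_size('big_table'))       AS heap,
       pg_size_pretty(pg_total_relation_size('big_table')
                      - pg_relation_size('big_table'))     AS toast_plus_indexes,
       pg_size_pretty(pg_total_relation_size('big_table')) AS total;

SELECT attname, attstorage  -- 'x' = EXTENDED, 'p' = PLAIN, 'm' = MAIN, 'e' = EXTERNAL
  FROM pg_attribute
 WHERE attrelid = 'big_table'::regclass
   AND attnum > 0
   AND NOT attisdropped;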


Best,
Aleksey

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why count(*) doest use index?

2011-03-07 Thread Merlin Moncure
On Mon, Mar 7, 2011 at 4:26 PM, Glenn Maynard gl...@zewt.org wrote:
 On Mon, Mar 7, 2011 at 4:35 PM, Merlin Moncure mmonc...@gmail.com wrote:

 SELECT COUNT(*) FROM table WHERE expr;

 will use an index (assuming expr is optimizable and is worthwhile to
 optimize).  Your case might be interesting for cache purposes if expr2
 is expensive, but has nothing to do with postgres index usage via
 count(*).  mysql/myisam needs to scan as well in this case -- it
 can't magically 'look up' the value as it can for the unfiltered
 (very special) case... it only differs from pg in that it can skip the
 heap visibility check because all records are known good (and pg is
 moving towards optimizing this case in mostly read-only workloads!)

 It'll do an index scan, but it's still a scan--linear time over the size of
 the set.  That's too expensive for many cases.

 My particular case is something like this:

   SELECT COUNT(*), event_time::date FROM events
   WHERE event_time::date >= '2011-01-01' AND event_time::date < '2011-02-01'
     AND user=50
   GROUP BY event_time::date;

 An index on events(user, event_time::date) could optimize this, eg.
 effectively maintaining a count of matching rows for each (user, day)
 tuple--which is ultimately what I'm doing manually with triggers.  Of
 course, it would have a significant cost, in some combination of complexity,
 index size and write concurrency, and couldn't be the default behavior for
 an index.

create index on events(user, (event_time::date));

select count(*) from events
  where
  (user, event_time::date) >= (50, '2011-01-01')
  and (user, event_time::date) < (50, '2011-02-01')
  group by event_time::date;

Note the create index above will only work if event_time is of type
timestamp (not timestamptz) because of the time zone dependency.  Any ad
hoc caching would also have the same problem, if users from different
time zones were hitting the cache -- they could get the wrong answer.
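
To make that concrete (a sketch, assuming a hypothetical events table whose
event_time column is timestamptz; the index name and the quoted "user" column
are illustrative):

-- event_time::date is immutable only for plain timestamp; with timestamptz
-- the result depends on the session's TimeZone, so the expression index is
-- rejected:
--   ERROR:  functions in index expression must be marked IMMUTABLE
-- Pinning the zone makes the expression immutable again:
CREATE INDEX events_user_day_utc_idx
    ON events ("user", ((event_time AT TIME ZONE 'UTC')::date));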

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Create unique index or constraint on part of a column

2011-03-07 Thread Ruben Blanco
Hi:

Is there any way to create a unique index or constraint on part of a column?

Something like this, but something that works ;-)

  ALTER TABLE invoices
  ADD CONSTRAINT cons UNIQUE (EXTRACT(YEAR FROM invoice_date),
invoice_number);

Thanks for any help.
Ruben,


Re: [GENERAL] Create unique index or constraint on part of a column

2011-03-07 Thread Thomas Kellerer

Ruben Blanco wrote on 08.03.2011 00:30:

Hi:

Is there any way to create a unique index or constraint on part of a column?

Something like this, but something that works ;-)

   ALTER TABLE invoices
   ADD CONSTRAINT cons UNIQUE (EXTRACT(YEAR FROM invoice_date), 
invoice_number);

Thanks for any help.
Ruben,


CREATE UNIQUE INDEX idx_cons ON invoices (EXTRACT(YEAR FROM invoice_date), 
invoice_number);

The only difference from a unique constraint is that it cannot be used as the
target of a foreign key constraint.
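
For example (assuming a minimal invoices(invoice_date date, invoice_number
integer) table and the index above):

INSERT INTO invoices (invoice_date, invoice_number) VALUES ('2011-03-07', 42);  -- ok
INSERT INTO invoices (invoice_date, invoice_number) VALUES ('2011-06-01', 42);  -- same year and number
-- ERROR:  duplicate key value violates unique constraint "idx_cons"
INSERT INTO invoices (invoice_date, invoice_number) VALUES ('2012-01-15', 42);  -- different year: ok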

Regards
Thomas



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why count(*) doest use index?

2011-03-07 Thread Glenn Maynard
On Mon, Mar 7, 2011 at 5:58 PM, Merlin Moncure mmonc...@gmail.com wrote:

SELECT COUNT(*), event_time::date FROM events
WHERE event_time::date >= '2011-01-01' AND event_time::date < '2011-02-01'
  AND user=50
GROUP BY event_time::date;

 select count(*) from events
  where
  (user, event_time::date) >= (50, '2011-01-01')
  and (user, event_time::date) < (50, '2011-02-01')
  group by event_time::date;


Postgresql is smart enough to know x = 1 and y = 2 is the same as (x, y)
= (1, 2).  Either way you get an index scan at best--better than a seq
scan, to be sure, but still expensive when you have a lot of data per (user,
month) and you're doing a lot of these queries.

Note the create index will only work above if event_time is of
 timestamp (not timestamptz) because of time zone dependency.  Any ad
 hoc caching would also have the same problem, if users from different
 time zones were hitting the cache -- they could get the wrong answer.


It's designed with this in mind.

-- 
Glenn Maynard


Re: [GENERAL] Create unique index or constraint on part of a column

2011-03-07 Thread Jeff Davis
On Mon, 2011-03-07 at 23:30 +, Ruben Blanco wrote:
 Hi:
 
 Is there any way to create a unique index or constraint on part of a
 column? 
 
 Something like this, but something that works ;-)
 
   ALTER TABLE invoices 
   ADD CONSTRAINT cons UNIQUE (EXTRACT(YEAR FROM invoice_date),
 invoice_number);

CREATE UNIQUE INDEX invoices_constraint_idx ON invoices
  (EXTRACT(YEAR FROM invoice_date), invoice_number);

Regards,
Jeff Davis


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-03-07 Thread Adrian Klaver
On Monday, March 07, 2011 2:45:00 pm Aleksey Tsalolikhin wrote:

 
 Experiment 1:
 hypothesis: something about how large my table has grown is causing
 the TOAST compression to fail on COPY.
 test: pg_dump the big table, cut the dump file in half using
 /bin/split, add \. at the end of the file, and load the top half.
 result: database is 50 GB in size.  hypothesis proven false.
 
 Experiment 2:
 hypothesis: something about Slony is causing the TOAST compression to
 be disabled on COPY.
 test: load the 50% dump file from experiment 1 above into our Stage
 database, which was never touched by Slony.
 result: database is 50 GB in size.  hypothesis proven false.

Hmmm. Another perfectly good line of reasoning shot down. No further thoughts 
at 
this time. May have to sit down with Jack Daniels and have a talk and see if 
anything shakes loose:)

 
 
 Best,
 Aleksey

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] unexpected EOF on client connection vs 9.0.3

2011-03-07 Thread Craig Ringer
On 08/03/11 02:49, Piotr Czekalski wrote:

 I've checked and verified that all connections are closed within the
 code; what's more, the problem appeared just as I moved the server
 from Fedora Linux x86_64 running Postgres 8.4.2 to Windows and 9.0.3
 (details above), so I conclude this is not a client problem (the
 failure didn't occur on Linux).

Windows firewall?

You can also see these error reports when the connections are closed
uncleanly, without a proper backend close message. Perhaps you have
client processes crashing? Or doing hard shutdowns where the client code
doesn't get a chance to run any cleanup/dtors/etc?

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to tune this query

2011-03-07 Thread Adarsh Sharma

Dear all,

Can anyone please give me some suggestions on how to tune the query below?
I need it to run as fast as possible.


I have 3 tables on which the query runs:

pdc_uima=# select pg_size_pretty(pg_total_relation_size('page_content_demo'));
 pg_size_pretty
----------------
 1260 MB

pdc_uima=# select pg_size_pretty(pg_total_relation_size('metadata_demo'));
 pg_size_pretty
----------------
 339 MB

pdc_uima=# select pg_size_pretty(pg_total_relation_size('loc_context_demo'));
 pg_size_pretty
----------------
 345 MB


My Query is :

explain analyze select 
m.doc_category,p.heading,l.lat,l.lon,p.crawled_page_url,p.category,p.dt_stamp,p.crawled_page_id,p.content 
from  loc_context_demo l,page_content_demo p,metadata_demo m where 
l.source_id=p.crawled_page_id and m.doc_id=l.source_id and 
st_within(l.geom,GeomFromText('POLYGON((19.548124415111626 
73.21900819489186,19.548124415111626 73.21900819489186,19.55011196668719 
73.21994746420259,19.552097947014058 73.22087843652453,19.55408236353752 
73.2218011513938,19.588219714571828 75.1654223522423,19.599133094249137 
76.46053245473952,19.57365361244478 79.69902443272414,19.68652202327923 
82.74135922990342,19.56446013085233 85.15028561045767,19.551174510964337 
85.37052962767306,19.553500408319763 85.37198146688313,19.55582660405639 
85.37341757236464,19.55815307123746 85.37483800206365,19.56047978332553 
85.37624281337641,19.562806714176496 85.37763206315,19.565133838033702 
85.37900580768307,19.567461129522137 85.38036410272655,19.56978856364264 
85.3817070034843,19.572116115766228 85.38303456461405,19.56649262333915 
85.15194545531163,18.773772341648947 84.46107113406764,17.95738291093396 
84.21223929994393,16.939045429366846 
83.74699366402301,15.915601954028702 
83.28824222570091,14.692125537681664 
82.40657922201932,13.869583501048409 81.75586112437654,13.23910975048389 
81.53550253438608,12.607561680274236 
81.31596402018643,11.960089890060914 81.3105660302366,11.961002716398268 
81.3118121189388,11.102247999047648 81.09276935832209,10.230582572954035 
81.08704044732613,9.364677626102125 80.87125821859627,8.484379037020355 
80.65888115596269,7.5953685679122565 80.44798762937165,6.678959105840814 
80.44990760581172,5.756074889890018 80.24361993771154,5.756819343429733 
80.2442993962505,5.757563827399336 80.24498070122854,5.758308340445826 
80.24566385572928,4.83232192901788 80.03636862497382,4.832964922142748 
80.0371046690356,4.833608089257533 80.0378393944808,4.834251429338765 
80.038572803232,4.834894941366702 80.03930489720865,4.835538624325311 
80.04003567832711,5.575253995307823 78.3586811224377,5.82022779480326 
77.52223682832437,6.9742086723828365 
76.89564878408815,7.6455592543043425 76.26930608306816,8.761889779304363 
75.43381068367601,10.059251343658966 74.3840274150521,11.136283050704487 
73.75034557867339,12.187315498051541 
72.89986083146191,13.242658350472773 
72.46589681727389,14.721187899066917 
72.23365448169334,16.384503005199107 
71.77586874336029,17.834343858181125 
71.52762561326514,18.868652843809762 
71.49887565337562,19.487812049094533 
71.48086802014905,19.489698327426513 71.48186192551053,19.89987693684175 
71.46838407646581,20.310716259621934 71.454517020832,20.312680952069726 
71.45872696349684,20.314637217119998 71.46296731473512,20.31658488533959 
71.46723821288163,20.318523784696943 71.47153979566505,20.53302678388929 
71.88565153869924,20.767109171722186 
72.75373018504017,20.791013365997372 73.62713545368305,20.79185810562998 
73.6280821559539,20.79269895778539 73.62902276312589,20.793535942149113 
73.6299573226539,20.79436907831312 73.63088588154903,20.795198385776008 
73.6318084863835,20.796023883943136 73.63272518329538,20.796845592126836 
73.6336360179933,20.79766352954653 73.63454103576112,20.798477715328943 
73.63544028146251,20.799288168508316 73.6363337995455,20.80009490802656 
73.63722163404697,20.800897952733482 
73.63810382859708,19.980139052593813 74.07773531285727,19.98131962229422 
74.0780344216337,19.982501271580563 74.078336024665,19.983684009372077 
74.07864013150498,19.98486784461094 74.07894675180037,19.98605278626243 
74.07925589529141,19.987238843315097 
74.07956757181258,19.988426024780967 
74.07988179129316,19.548124415111626 73.21900819489186))',4326)) and 
m.doc_category='Naxalism' order by p.dt_stamp desc;


This morning, I was shocked to see the result below:

 Sort  (cost=129344.37..129354.40 rows=4013 width=1418) (actual time=21377.760..21378.441 rows=4485 loops=1)
   Sort Key: p.dt_stamp
   Sort Method:  quicksort  Memory: 7161kB
   ->  Nested Loop  (cost=44490.85..129104.18 rows=4013 width=1418) (actual time=267.729..21353.703 rows=4485 loops=1)
         ->  Hash Join  (cost=44490.85..95466.11 rows=3637 width=73) (actual time=255.849..915.092 rows=4129 loops=1)
               Hash Cond: (l.source_id = m.doc_id)
               ->  Seq Scan on loc_context_demo l  (cost=0.00..47083.94 rows=16404 width=18) (actual time=0.065..628.255 rows=17072 loops=1)

Re: [GENERAL] How to tune this query

2011-03-07 Thread Jaiswal Dhaval Sudhirkumar
Some of the information in the query is repetitive; for example, the value
below appears three times:

19.548124415111626 73.21900819489186

You can create a spatial index on the geometry column, which should improve
the performance of the query; of course, run ANALYZE after creating the index.
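
Something along these lines (a sketch only; it assumes PostGIS is installed
and that loc_context_demo.geom is the geometry column used in st_within):

CREATE INDEX loc_context_demo_geom_gist
    ON loc_context_demo USING gist (geom);

ANALYZE loc_context_demo;

With the GiST index in place, st_within() should be able to use a cheap
bounding-box check on the index before evaluating exact containment, instead
of the sequential scan on loc_context_demo shown in the plan above.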

--

Thanks & Regards
Dhaval Jaiswal 



From: pgsql-general-ow...@postgresql.org on behalf of Adarsh Sharma
Sent: Tue 3/8/2011 10:31 AM
To: pgsql-general@postgresql.org
Cc: pgsql-performa...@postgresql.org
Subject: [GENERAL] How to tune this query


