Re: [GENERAL] PostgreSQL DB Replication

2014-08-25 Thread K P Manoj
Hi Rajesh,

Is the hot_standby parameter set to on on the slave server? And are you
trying to change the postgres password on the master or on the slave?
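
A note on the question above: a hot standby is read-only, so ALTER ROLE ... PASSWORD can only succeed on the master, which may well be why the password change fails on the slave. A quick check sketch (standard functions, not taken from the thread):

```sql
-- Run on the suspected slave:
SHOW hot_standby;            -- 'on' if read-only queries are allowed
SELECT pg_is_in_recovery();  -- true on a standby, false on the master
-- ALTER ROLE postgres PASSWORD '...' must be issued on the master;
-- on a standby it fails with a read-only transaction error.
```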

Regards
Manoj K P
8050972028


On Tue, Aug 26, 2014 at 10:15 AM, sivananda reddy 
wrote:

> Hi Rajesh
>
> Please share the following information:
>
> 1) postgresql.conf file (both master and slave)
> 2) database server logs
> 3) recovery.conf file
> 4) pg_hba.conf file
>
> With Regards,
> Sivananda Reddy
> 07385114008
>
>
> On Tue, Aug 26, 2014 at 9:34 AM, wd  wrote:
>
>> What output do you get when you try to log in? Is your slave configured
>> as a standby server?
>>
>>
>> On Mon, Aug 25, 2014 at 12:46 PM, Rajesh K  wrote:
>>
>>> Dear Sir,
>>>
>>> I have planned to configure PostgreSQL 9.2.4 Master and Slave
>>> replication on Red Hat Linux Server 6. As per the documentation, I have
>>> done the settings on both Master and Slave. I chose the 7-simple-step
>>> binary replication method from the PostgreSQL website. After the DB
>>> synchronization command I started the slave and the master as per the
>>> documentation. After that I am not able to log in as the postgres user
>>> on the slave, and I am not able to change the password of the postgres
>>> user. Kindly help me to resolve this.
>>>
>>> Thanks & Regards
>>> Rajesh.K/Pramod
>>> Developer
>>> Cochin
>>> 8129823272
>>>
>>>
>>>
>>>
>>>
>>
>


Re: [GENERAL] PostgreSQL DB Replication

2014-08-25 Thread sivananda reddy
Hi Rajesh

Please share the following information:

1) postgresql.conf file (both master and slave)
2) database server logs
3) recovery.conf file
4) pg_hba.conf file

With Regards,
Sivananda Reddy
07385114008


On Tue, Aug 26, 2014 at 9:34 AM, wd  wrote:

> What output do you get when you try to log in? Is your slave configured
> as a standby server?
>
>
> On Mon, Aug 25, 2014 at 12:46 PM, Rajesh K  wrote:
>
>> Dear Sir,
>>
>> I have planned to configure PostgreSQL 9.2.4 Master and Slave
>> replication on Red Hat Linux Server 6. As per the documentation, I have
>> done the settings on both Master and Slave. I chose the 7-simple-step
>> binary replication method from the PostgreSQL website. After the DB
>> synchronization command I started the slave and the master as per the
>> documentation. After that I am not able to log in as the postgres user
>> on the slave, and I am not able to change the password of the postgres
>> user. Kindly help me to resolve this.
>>
>> Thanks & Regards
>> Rajesh.K/Pramod
>> Developer
>> Cochin
>> 8129823272
>>
>>
>>
>>
>>
>


Re: [GENERAL] PostgreSQL DB Replication

2014-08-25 Thread wd
What output do you get when you try to log in? Is your slave configured as a
standby server?


On Mon, Aug 25, 2014 at 12:46 PM, Rajesh K  wrote:

> Dear Sir,
>
> I have planned to configure PostgreSQL 9.2.4 Master and Slave
> replication on Red Hat Linux Server 6. As per the documentation, I have
> done the settings on both Master and Slave. I chose the 7-simple-step
> binary replication method from the PostgreSQL website. After the DB
> synchronization command I started the slave and the master as per the
> documentation. After that I am not able to log in as the postgres user
> on the slave, and I am not able to change the password of the postgres
> user. Kindly help me to resolve this.
>
> Thanks & Regards
> Rajesh.K/Pramod
> Developer
> Cochin
> 8129823272
>
>
>
>
>


[GENERAL] PostgreSQL DB Replication

2014-08-25 Thread Rajesh K
Dear Sir,

I have planned to configure PostgreSQL 9.2.4 Master and Slave replication
on Red Hat Linux Server 6. As per the documentation, I have done the
settings on both Master and Slave. I chose the 7-simple-step binary
replication method from the PostgreSQL website. After the DB
synchronization command I started the slave and the master as per the
documentation. After that I am not able to log in as the postgres user on
the slave, and I am not able to change the password of the postgres user.
Kindly help me to resolve this.

Thanks & Regards
Rajesh.K/Pramod
Developer

Cochin
8129823272
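
For reference, a minimal sketch of the settings the 9.2 streaming-replication guides revolve around; the host, user, and address values here are illustrative, not taken from the original setup:

```ini
# postgresql.conf on the master
wal_level = hot_standby
max_wal_senders = 3

# pg_hba.conf on the master (allow the slave to connect for replication)
# host  replication  replicator  192.168.1.2/32  md5

# postgresql.conf on the slave
hot_standby = on

# recovery.conf in the slave's data directory
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
```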

Re: [GENERAL] how to query against nested hstore data type

2014-08-25 Thread John R Pierce

On 8/24/2014 7:05 PM, Huang, Suya wrote:


The question is how can I get visits/pages/duration separately from 
the value column so that I can do a sum on those metrics individually.




why not store all this in proper relational tables so you can use proper 
relational queries?




--
john r pierce  37N 122W
somewhere on the middle of the left coast
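
A sketch of the relational layout John is suggesting (table and column names are illustrative, not from the original post):

```sql
-- One row per (id, site) instead of a nested hstore value.
CREATE TABLE site_stats (
    id       text,
    site     text,
    visits   float8,
    pages    float8,
    duration float8
);

-- The per-metric sums then become ordinary aggregates:
SELECT site, sum(visits), sum(pages), sum(duration)
FROM site_stats
GROUP BY site;
```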



Re: [GENERAL] how to query against nested hstore data type

2014-08-25 Thread Huang, Suya
-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com] 
Sent: Monday, August 25, 2014 11:42 PM
To: Huang, Suya; pgsql-general@postgresql.org
Subject: Re: [GENERAL] how to query against nested hstore data type

On 08/24/2014 07:05 PM, Huang, Suya wrote:
> Hi,
>
> It's my test DB, the version is PostgreSQL 9.4beta2. I'm testing with 
> nested hstore feature.
>
> testdb=# \d+ test
>
>                     Table "public.test"
>  Column |  Type  | Modifiers | Storage  | Stats target | Description
> --------+--------+-----------+----------+--------------+-------------
>  id     | text   |           | extended |              |
>  stats  | hstore |           | extended |              |
>
> testdb=# select * from test;
>
>       id       |  stats
> ---------------+------------------------------------------------------------
>  2b8ea99d60b30 | "www.ask.com"=>"{visits=>2.11221003268,pages=>4.22442006537,duration=>197980.836904}", "www.strawberrynet.com"=>"{visits=>2.11221003268,pages=>4.22442006537,duration=>197980.836904}"
> (1 row)

So how did you get the above values in?

>
> testdb=# select (each(channel)).key,(each(channel)).value from visits;
>
>           key          |                                value
> -----------------------+----------------------------------------------------------------------
>  www.ask.com           | {visits=>2.11221003268,pages=>4.22442006537,duration=>197980.836904}
>  www.strawberrynet.com | {visits=>2.11221003268,pages=>4.22442006537,duration=>197980.836904}
> (2 rows)

Did you change tables in the above?

>
> The question is how can I get visits/pages/duration separately from 
> the value column so that I can do a sum on those metrics individually.
>
> Thank you!
>
> Suya
>


--
Adrian Klaver
adrian.kla...@aklaver.com

==

>>Did you change tables in the above?
Changed column name from channel to stats.
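
One way to get at the metrics without relying on the nested-hstore cast (whose syntax was still in flux in 9.4beta2) is to treat the each() output as text and extract the numbers; a hedged sketch against the table shown above:

```sql
SELECT key,
       (substring(value from 'visits=>([0-9.]+)'))::float8   AS visits,
       (substring(value from 'pages=>([0-9.]+)'))::float8    AS pages,
       (substring(value from 'duration=>([0-9.]+)'))::float8 AS duration
FROM (SELECT (each(stats)).* FROM test) s;
```

Sums over the whole table then follow by wrapping this in an aggregate.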


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] POWA tool

2014-08-25 Thread Raghu Ram
On Fri, Aug 22, 2014 at 4:40 PM, Ramesh T 
wrote:

> How to include pg_stat_statements in postgres.conf.powa is need it.
>
> any help..
>
>
Below are the steps to install the POWA tool:

*Step 1:* Download the POWA tool from the website below

> https://github.com/dalibo/powa/archive/REL_1_1.zip

*Step 2:* Unpack the downloaded file

> [root@localhost tmp]# cd /tmp/
> [root@localhost powa-REL_1_1]# unzip powa-REL_1_1.zip

*Step 3:* Install POWA Software

> [root@localhost powa-REL_1_1]# export PATH=/opt/PostgreSQL/9.3/bin:$PATH
> [root@localhost powa-REL_1_1]# export
> LD_LIBRARY_PATH=/opt/PostgreSQL/9.3/lib:$LD_LIBRARY_PATH
> [root@localhost powa-REL_1_1]# make install
> gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
> -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I. -I.
> -I/opt/PostgreSQL/9.3/include/postgresql/server
> -I/opt/PostgreSQL/9.3/include/postgresql/internal -D_GNU_SOURCE
> -I/opt/local/20140108/fc7f8f12-7861-11e3-aaff-000c29d23b02/include/libxml2
> -I/usr/local/include/libxml2 -I/usr/local/include  -c -o powa.o powa.c
> gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
> -Wformat-security -fno-strict-aliasing -fwrapv -fpic
> -L/opt/PostgreSQL/9.3/lib
> -L/opt/local/20140108/fc7f8f12-7861-11e3-aaff-000c29d23b02/lib
> -L/usr/local/lib -Wl,--as-needed
> -Wl,-rpath,'/opt/PostgreSQL/9.3/lib',--enable-new-dtags  -shared -o powa.so
> powa.o
> /bin/mkdir -p '/opt/PostgreSQL/9.3/share/postgresql/extension'
> /bin/mkdir -p '/opt/PostgreSQL/9.3/share/postgresql/extension'
> /bin/mkdir -p '/opt/PostgreSQL/9.3/lib/postgresql'
> /bin/mkdir -p '/opt/PostgreSQL/9.3/doc/postgresql/extension'
> /usr/bin/install -c -m 644 ./powa.control
> '/opt/PostgreSQL/9.3/share/postgresql/extension/'
> /usr/bin/install -c -m 644 ./powa--1.0.sql ./powa--1.1.sql
> '/opt/PostgreSQL/9.3/share/postgresql/extension/'
> /usr/bin/install -c -m 755  powa.so '/opt/PostgreSQL/9.3/lib/postgresql/'
> /usr/bin/install -c -m 644 ./README.md
> '/opt/PostgreSQL/9.3/doc/postgresql/extension/'
>
*Step 4:* Create a POWA database and the required extensions

> -bash-4.1$ /opt/PostgreSQL/9.3/bin/psql -p 5435
> Password:
> psql.bin (9.3.5)
> Type "help" for help.
> postgres=# create database powa;
> CREATE DATABASE
> postgres=# \c powa
> You are now connected to database "powa" as user "postgres".
> powa=# create extension pg_stat_statements ;
> CREATE EXTENSION
> powa=# create extension btree_gist ;
> CREATE EXTENSION
> powa=# create extension powa;
> CREATE EXTENSION
> powa=# \dt
>                  List of relations
>  Schema |              Name               | Type  |  Owner
> --------+---------------------------------+-------+----------
>  public | powa_functions                  | table | postgres
>  public | powa_last_aggregation           | table | postgres
>  public | powa_last_purge                 | table | postgres
>  public | powa_statements                 | table | postgres
>  public | powa_statements_history         | table | postgres
>  public | powa_statements_history_current | table | postgres
> (6 rows)

*Step 5:* Add "powa" & "pg_stat_statements" to shared_preload_libraries
in the postgresql.conf file

> -bash-4.1$ more /opt/PostgreSQL/9.3/data/postgresql.conf |grep
> shared_preload
> # Note:  Increasing max_prepared_transactions costs ~600 bytes of shared
> memory
> shared_preload_libraries = 'powa,pg_stat_statements' # (change requires
> restart)


-bash-4.1$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data/ start
> server starting
> -bash-4.1$ 2014-07-25 03:48:20 IST LOG:  registering background worker
> "powa"
> 2014-07-25 03:48:20 IST LOG:  loaded library "powa"
> 2014-07-25 03:48:20 IST LOG:  loaded library "pg_stat_statements"
> 2014-07-25 03:48:20 IST LOG:  redirecting log output to logging collector
> process
> 2014-07-25 03:48:20 IST HINT:  Future log output will appear in directory
> "pg_log".

*Step 6:* Install Mojolicious Software

> [root@localhost ui]# pwd
> /tmp/powa-REL_1_1/ui
> [root@localhost ui]# ls -l
> total 36
> drwxr-xr-x 4 root root 4096 Aug 19  2014 lib
> -rw-r--r-- 1 root root  393 Jul 25 04:05 powa.conf
> -rw-r--r-- 1 root root  393 Aug 19  2014 powa.conf-dist
> drwxr-xr-x 6 root root 4096 Aug 19  2014 public
> -rw-r--r-- 1 root root 2121 Aug 19  2014 README.md
> drwxr-xr-x 2 root root 4096 Aug 19  2014 script
> drwxr-xr-x 2 root root 4096 Aug 19  2014 t
> drwxr-xr-x 6 root root 4096 Aug 19  2014 templates
> -rw-r--r-- 1 root root4 Aug 19  2014 VERSION


[root@localhost ui]# cp powa.conf-dist powa.conf
>

[root@localhost ~]# /usr/bin/curl get.mojolicio.us | sh
>   % Total% Received % Xferd  Average Speed   TimeTime Time
> Current
>  Dload  Upload   Total   SpentLeft
> Speed
>   0540540 0179  0 --:--:-- --:--:-- --:--:--
> 184
>   % Total% Received %

Re: [GENERAL] Query planner question

2014-08-25 Thread Bill Moran
On Mon, 25 Aug 2014 09:09:07 -0700
Jeff Janes  wrote:

> On Wed, Aug 20, 2014 at 6:16 PM, Soni M  wrote:
> 
> > Hi Everyone,
> >
> > I have this query :
> >
> > select t.ticket_id ,
> > tb.transmission_id
> > from ticket t,
> > transmission_base tb
> > where t.latest_transmission_id = tb.transmission_id
> > and t.ticket_number = tb.ticket_number
> > and tb.parse_date > ('2014-07-31');
> >
> > Execution plan: http://explain.depesz.com/s/YAak
> >
> > Indexes on ticket :
> > "ticket_pkey" PRIMARY KEY, btree (ticket_id) CLUSTER
> > "ticket_by_latest_transmission" btree (latest_transmission_id)
> > "ticket_by_ticket_number" btree (ticket_number)
> >
> > This query only returns some portions of rows from ticket table.
> > The question is, Why does postgres need to get all the rows from ticket
> > table in order to complete this query?
> > Can't postgres use indexes to get only needed rows on ticket table?
> >
> > I try set seqscan to off, but still index scan try to get all rows on
> > ticket table.
> > Here's the execution plan : http://explain.depesz.com/s/abH2

That's probably not the best approach; it's likely that something is feeding
the planner wrong information.  An EXPLAIN ANALYZE might reveal whether that's
the case.

Some other things to check: are these two tables being analyzed frequently
enough that their statistics are up to date? (EXPLAIN ANALYZE will generally
show if that's a problem too).  It would seem that the planner thinks that
the distribution of tb.ticket_number is large enough that it will probably
have to fetch most of the rows from ticket anyway, which is a logical reason
for it to skip the index and just do a seq scan.  Can you confirm/deny whether
that's the case?  If not, and you're analyzing the tables often enough, you
may need to raise your statistics target on those tables.
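
If stale or coarse statistics do turn out to be the problem, the knobs Bill mentions look roughly like this (the column choice and the target value 500 are illustrative):

```sql
ALTER TABLE ticket ALTER COLUMN ticket_number SET STATISTICS 500;
ALTER TABLE transmission_base ALTER COLUMN ticket_number SET STATISTICS 500;
ANALYZE ticket;
ANALYZE transmission_base;
```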

-- 
Bill Moran
I need your help to succeed:
http://gamesbybill.com




Re: [GENERAL] Query planner question

2014-08-25 Thread Jeff Janes
On Wed, Aug 20, 2014 at 6:16 PM, Soni M  wrote:

> Hi Everyone,
>
> I have this query :
>
> select t.ticket_id ,
> tb.transmission_id
> from ticket t,
> transmission_base tb
> where t.latest_transmission_id = tb.transmission_id
> and t.ticket_number = tb.ticket_number
> and tb.parse_date > ('2014-07-31');
>
> Execution plan: http://explain.depesz.com/s/YAak
>
> Indexes on ticket :
> "ticket_pkey" PRIMARY KEY, btree (ticket_id) CLUSTER
> "ticket_by_latest_transmission" btree (latest_transmission_id)
> "ticket_by_ticket_number" btree (ticket_number)
>
> This query only returns some portions of rows from ticket table.
> The question is, Why does postgres need to get all the rows from ticket
> table in order to complete this query?
> Can't postgres use indexes to get only needed rows on ticket table?
>
> I try set seqscan to off, but still index scan try to get all rows on
> ticket table.
> Here's the execution plan : http://explain.depesz.com/s/abH2
>

If you want to force a nested loop, you probably need to disable the
mergejoin as well, and maybe the hashjoin.  Forcing the planner to do
things the way you want can be difficult.
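
The full set of toggles Jeff alludes to, for experimentation in a single session only (these are blunt diagnostic instruments, not production settings):

```sql
SET enable_seqscan   = off;
SET enable_mergejoin = off;
SET enable_hashjoin  = off;
-- rerun EXPLAIN here to see whether a nested-loop plan appears,
-- then RESET ALL (or close the session) to restore the defaults.
```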

Cheers,

Jeff


Re: [GENERAL] How to insert either a value or the column default?

2014-08-25 Thread Daniele Varrazzo
On Mon, Aug 25, 2014 at 3:26 PM, Adrian Klaver
 wrote:
> On 08/25/2014 06:32 AM, W. Matthew Wilson wrote:

>> First of all, thanks for showing this trick!
>>
>> But I am confused.  Will this trick stop working in a future version
>> of psycopg2?  Should I avoid using it?
>
>
> Here is the email laying out the issues:
>
> http://www.postgresql.org/message-id/AANLkTi=ym3SCQKCQBtp8RJHUswwAPOpjXYKTXS=ah...@mail.gmail.com

That's correct, thank you Adrian.

Matthew: no, this trick will keep working correctly in all future psycopg2
versions. Even if we started supporting a new protocol, such as the
ISQLParam one referred to in the mentioned email, it won't be the default
in psycopg2 and would have to be enabled on purpose.

The new protocol should instead be the default in this mythical psycopg3.
If something can be imported as psycopg2, it will support the ISQLQuote
protocol by default; hence the Default object as implemented in this
thread will keep working.
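
For readers without the earlier message at hand, the "Default object" trick is roughly this (a reconstruction, not Daniele's exact code): an object that conforms to psycopg2's client-side ISQLQuote adaptation protocol and renders as the literal DEFAULT keyword.

```python
class Default:
    """Adapts to the SQL keyword DEFAULT under psycopg2's ISQLQuote protocol."""

    def __conform__(self, proto):
        # psycopg2 passes the ISQLQuote protocol object during adaptation;
        # returning self declares "I know how to quote myself".
        return self

    def getquoted(self):
        # The byte string spliced verbatim into the query text.
        return b"DEFAULT"


# Usage sketch (cur is a cursor from an open psycopg2 connection):
# cur.execute("insert into t (a, b) values (%s, %s)", (42, Default()))
```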

-- Daniele




Re: [GENERAL] How to insert either a value or the column default?

2014-08-25 Thread Adrian Klaver

On 08/25/2014 06:32 AM, W. Matthew Wilson wrote:

On Sun, Aug 24, 2014 at 2:50 PM, Daniele Varrazzo
 wrote:

It should be added to the library (it was first discussed in 2003...),
but it's one of those things that will stop working when psycopg starts
using the "extended query protocol" (together with other nifty features
such as string literals for table/column names), so in my mind it can
only be included when psycopg is able to do both client-side parameter
interpolation and server-side argument passing, and when the distinction
between the two strategies is clear (this is planned for a future
psycopg3, but there is no timeline for it yet).


First of all, thanks for showing this trick!

But I am confused.  Will this trick stop working in a future version
of psycopg2?  Should I avoid using it?


Here is the email laying out the issues:

http://www.postgresql.org/message-id/AANLkTi=ym3SCQKCQBtp8RJHUswwAPOpjXYKTXS=ah...@mail.gmail.com





Thanks again!





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Constraint exclusion on tables partitioned over range types

2014-08-25 Thread Emanuel Calvo
2014-08-22 14:34 GMT-03:00 Daniele Varrazzo :

> Hello,
>
> we are setting up a partitioned table based on tstzrange in PG 9.3,
> something like:
>
> create table offer (
>during tstzrange not null,
>...
>constraint virtual check (false) no inherit
>);
>
> create table offer_201408 (
> check (during <@ '[2014-08-01Z,2014-09-01Z)'::tstzrange)
> ) inherits (offer);
>
> create table offer_201409 (
> check (during <@ '[2014-09-01Z,2014-10-01Z)'::tstzrange)
> ) inherits (offer);
>
> I haven't found a way to make the planner constraint exclusion kicking in:
>
> =# explain select * from offer where during @>
> '2014-08-03'::timestamptz;
>
> Append  (cost=0.00..27.25 rows=3 width=248)
>->  Seq Scan on offer  (cost=0.00..0.00 rows=1 width=248)
>  Filter: (during @> '2014-08-03 00:00:00+01'::timestamp
> with time zone)
>   ->  Seq Scan on offer_201408  (cost=0.00..13.62 rows=1 width=248)
> Filter: (during @> '2014-08-03 00:00:00+01'::timestamp
> with time zone)
>   ->  Seq Scan on offer_201409  (cost=0.00..13.62 rows=1 width=248)
> Filter: (during @> '2014-08-03 00:00:00+01'::timestamp
> with time zone)
>
> Similar results using tztzrange OP tstzrange operators with OP in &&, @>,
> <@.
>
> Seqscans aside, as these tables are empty so they are expected, I
> wonder if there is a way to organize the operators used in the
> constraints and the ones used in the query so that the query planner
> would be able to exclude some of the tables before querying them, as
> is easy to do implementing range constraints on the base tstz type and
> its ordering operators.
>
> It would be also nice if the always failing constraint on the base
> table could suggest the planner that there is no record to be found
> there: I think this would be easier to implement but not as useful as
> for the ranges.
>
>
I suggest you write up a feature request on the -hackers list; tsrange
should be included in that request as well.

I tested on the beta version and it does not work there either.


-- 
--
Emanuel Calvo http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
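
Until range operators participate in constraint exclusion, one workaround is the scalar-column layout the original mail alludes to at the end; a hedged sketch (column names are illustrative):

```sql
create table offer (
    starts_at timestamptz not null,  -- lower(during), kept by the app or a trigger
    during    tstzrange   not null
);

create table offer_201408 (
    check (starts_at >= '2014-08-01Z' and starts_at < '2014-09-01Z')
) inherits (offer);

-- Filtering on the scalar column uses btree operators, which the planner
-- can match against the check constraints to exclude partitions:
-- explain select * from offer where starts_at = '2014-08-03Z'::timestamptz;
```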


Re: [GENERAL] deadlock in single-row select-for-update + update scenario? How could it happen?

2014-08-25 Thread Adrian Klaver

On 08/25/2014 04:18 AM, hubert depesz lubaczewski wrote:

On Fri, Aug 22, 2014 at 9:21 PM, Alvaro Herrera
mailto:alvhe...@2ndquadrant.com>> wrote:

FWIW this problem was reported also by Andrew Sackville-West at

http://www.postgresql.org/message-id/20140731233051.GN17765@andrew-ThinkPad-X230
I strongly suspect now that the problem is related to the locking of
updated versions as heap_lock_tuple_updated, and perhaps the internal
locking done by EvalPlanQual.  Haven't traced through it.


Is there anything I could tell the developer to do (he's on Mac) so he
could provide more information?


To confirm the bug report Alvaro mentioned, I would say it would be good
to try the importer script against non-9.3 instances of Postgres to see
if the same thing happens. It is also interesting to note that in the bug
report thread, mention is made of a large number of FKs on a table.




depesz



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] how to query against nested hstore data type

2014-08-25 Thread Adrian Klaver

On 08/24/2014 07:05 PM, Huang, Suya wrote:

Hi,

It’s my test DB, the version is PostgreSQL 9.4beta2. I’m testing with
nested hstore feature.

testdb=# \d+ test

                    Table "public.test"
 Column |  Type  | Modifiers | Storage  | Stats target | Description
--------+--------+-----------+----------+--------------+-------------
 id     | text   |           | extended |              |
 stats  | hstore |           | extended |              |

testdb=# select * from test;

      id       |  stats
---------------+------------------------------------------------------------
 2b8ea99d60b30 | "www.ask.com"=>"{visits=>2.11221003268,pages=>4.22442006537,duration=>197980.836904}", "www.strawberrynet.com"=>"{visits=>2.11221003268,pages=>4.22442006537,duration=>197980.836904}"
(1 row)


So how did you get the above values in?



testdb=# select (each(channel)).key,(each(channel)).value from visits;

          key          |                                value
-----------------------+----------------------------------------------------------------------
 www.ask.com           | {visits=>2.11221003268,pages=>4.22442006537,duration=>197980.836904}
 www.strawberrynet.com | {visits=>2.11221003268,pages=>4.22442006537,duration=>197980.836904}
(2 rows)


Did you change tables in the above?



The question is how can I get visits/pages/duration separately from the
value column so that I can do a sum on those metrics individually.

Thank you!

Suya




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Using fmgr_hook

2014-08-25 Thread Sameer Thakur
Hello,
Thank you for responding
>My experience is that you cannot set breakpoints before the library
>is loaded, so you first have to call a function in the library, then
>you interrupt and set the breakpoint.
I tried the following:
1. Start Postgres (now auto_explain is loaded)
2. Start a psql session and attach gdb to the forked backend process
3. Set a breakpoint in custom_fmgr_hook
4. Execute select * from now();

Still the breakpoint gets skipped.

I also checked by putting a breakpoint in explain_ExecutorStart before
starting Postgres (before auto_explain is loaded), then started a psql
session, attached gdb, and executed select * from now(); in this case the
debugger does stop at the breakpoint.

The strange thing is that I could swear I had got the debugger to stop in
custom_fmgr_hook just after I added custom_needs_fmgr_hook, and had
figured out that custom_needs_fmgr_hook and custom_fmgr_hook need to be
used together. But I have not been able to reproduce that, so maybe there
is something in what you say; I just cannot nail down the sequence.

regards
Sameer




Re: [GENERAL] How to insert either a value or the column default?

2014-08-25 Thread W. Matthew Wilson
On Sun, Aug 24, 2014 at 2:50 PM, Daniele Varrazzo
 wrote:
> It should be added to the library (it was first discussed in 2003...),
> but it's one of those things that will stop working when psycopg starts
> using the "extended query protocol" (together with other nifty features
> such as string literals for table/column names), so in my mind it can
> only be included when psycopg is able to do both client-side parameter
> interpolation and server-side argument passing, and when the distinction
> between the two strategies is clear (this is planned for a future
> psycopg3, but there is no timeline for it yet).

First of all, thanks for showing this trick!

But I am confused.  Will this trick stop working in a future version
of psycopg2?  Should I avoid using it?

Thanks again!




Re: [GENERAL] Way to identify the current session's temp tables within pg_class ?

2014-08-25 Thread Marc Mamin


> -Original Message-
> From: Michael Paquier [mailto:michael.paqu...@gmail.com]
> On Mon, Aug 25, 2014 at 7:48 PM, Marc Mamin 
> wrote:
> > Hello,
> >
> > When different sessions create temp tables with the same name:
> > How can I identify the oid of the one created within the current
> session ?
> 
> You can use pg_my_temp_schema for this purpose. It returns the OID of
> the schema where temporary objects are stored for a given session.
> Note that this returns InvalidOid if no temporary objects are defined.


thanks,

I've also noticed that ::regclass only returns the oid of the "accessible"
table. It is sufficient for my need, although it doesn't tell whether the
table is temporary or not:

SELECT * FROM pg_class where oid='foo'::regclass

regards,

marc


> Here is an example:
> =# select pg_my_temp_schema();
>  pg_my_temp_schema
> ---
>  0
> (1 row)
> =# create temp table aa (a int);
> CREATE TABLE
> =# SELECT nspname FROM pg_namespace WHERE oid = pg_my_temp_schema();
>   nspname
> ---
>  pg_temp_4
> (1 row)
> 
> Regards,
> --
> Michael




Re: [GENERAL] Using fmgr_hook

2014-08-25 Thread Albe Laurenz
Sameer Thakur wrote:
> In the process of implementing my own version of sysdate, I was trying
> to use the fmgr_hook.

[...]

> To debug i have a breakpoint inside custom_fmgr_hook.
> 
> Debugging:
> 1. Start postgres
> 2. Start psql connecting to postgres
> 3. Attach gdb to process spawned off by postmaster  representing psql session.
> 4. execute select * from now();
> 
> Problem:
>  The breakpoint seems to get skipped. Just to be sure i put a
> breakpoint in explain_ExecutorStart and i could debug that function.
> So i am attaching gdb to correct process.
> What am i doing wrong?

My experience is that you cannot set breakpoints before the library
is loaded, so you first have to call a function in the library, then
you interrupt and set the breakpoint.

I don't know if there is a way to get around that with gdb on Linux.

Yours,
Laurenz Albe
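
One thing worth trying in this situation is gdb's pending-breakpoint support, which defers resolving the breakpoint until the shared library is loaded; a hedged session sketch (the PID placeholder and symbol name come from the thread):

```text
$ gdb -p <backend_pid>
(gdb) set breakpoint pending on    # defer if the .so is not loaded yet
(gdb) break custom_fmgr_hook
(gdb) continue
```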



[GENERAL] Using fmgr_hook

2014-08-25 Thread Sameer Thakur
Hello,
In the process of implementing my own version of sysdate, I was trying
to use the fmgr_hook.
I had a look at the sepgsql contrib module and tried to do the same by
modifying auto_explain, just to test using fmgr_hook.

My code changes are:

static needs_fmgr_hook_type prev_needs_fmgr_hook = NULL;
static fmgr_hook_type prev_fmgr_hook = NULL;

static bool custom_needs_fmgr_hook(Oid functionId);
static void custom_fmgr_hook(FmgrHookEventType event,FmgrInfo *flinfo,
Datum *private);

in PG_init(void)
prev_needs_fmgr_hook = needs_fmgr_hook;
needs_fmgr_hook = custom_needs_fmgr_hook;
prev_fmgr_hook = fmgr_hook;
fmgr_hook = custom_fmgr_hook;

in _PG_fini(void)
needs_fmgr_hook=prev_needs_fmgr_hook;
fmgr_hook=prev_fmgr_hook;


static bool custom_needs_fmgr_hook(Oid functionId)
{
return true;
}
void custom_fmgr_hook(FmgrHookEventType event, FmgrInfo *flinfo, Datum *private)
{
    if (flinfo->fn_extra == NULL)
    {
        TimestampTz current_timestamp = GetCurrentTimestamp();
        flinfo->fn_extra = palloc(sizeof(TimestampTz));
        /* Copy the value into the palloc'd buffer; assigning the timestamp
         * directly to fn_extra would overwrite the pointer with the value
         * and leak the allocation. */
        *(TimestampTz *) flinfo->fn_extra = current_timestamp;
    }
}

To debug i have a breakpoint inside custom_fmgr_hook.

Debugging:
1. Start postgres
2. Start psql connecting to postgres
3. Attach gdb to process spawned off by postmaster  representing psql session.
4. execute select * from now();

Problem:
The breakpoint seems to get skipped. Just to be sure, I put a breakpoint
in explain_ExecutorStart and I could debug that function, so I am
attaching gdb to the correct process.
What am I doing wrong?

Thank you,
Sameer




Re: [GENERAL] Way to identify the current session's temp tables within pg_class ?

2014-08-25 Thread Michael Paquier
On Mon, Aug 25, 2014 at 7:48 PM, Marc Mamin  wrote:
> Hello,
>
> When different sessions create temp tables with the same name:
> How can I identify the oid of the one created within the current session ?

You can use pg_my_temp_schema for this purpose. It returns the OID of
the schema where temporary objects are stored for a given session.
Note that this returns InvalidOid if no temporary objects are defined.
Here is an example:
=# select pg_my_temp_schema();
 pg_my_temp_schema
---
 0
(1 row)
=# create temp table aa (a int);
CREATE TABLE
=# SELECT nspname FROM pg_namespace WHERE oid = pg_my_temp_schema();
  nspname
---
 pg_temp_4
(1 row)

Regards,
-- 
Michael
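
To tie this back to pg_class: filtering on the schema OID isolates the current session's temp tables; a small sketch:

```sql
SELECT c.oid, c.relname
FROM pg_class c
WHERE c.relnamespace = pg_my_temp_schema();
```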




Re: [GENERAL] deadlock in single-row select-for-update + update scenario? How could it happen?

2014-08-25 Thread hubert depesz lubaczewski
On Fri, Aug 22, 2014 at 6:55 PM, Jeff Janes  wrote:

> What transaction isolation level is being used?
>

Sorry for the late reply - the user was away for part of Friday, I was away
over the weekend, and I only just got the answer: it's read committed.

depesz


Re: [GENERAL] deadlock in single-row select-for-update + update scenario? How could it happen?

2014-08-25 Thread hubert depesz lubaczewski
On Fri, Aug 22, 2014 at 9:21 PM, Alvaro Herrera 
wrote:

> FWIW this problem was reported also by Andrew Sackville-West at
>
> http://www.postgresql.org/message-id/20140731233051.GN17765@andrew-ThinkPad-X230
> I strongly suspect now that the problem is related to the locking of
> updated versions as heap_lock_tuple_updated, and perhaps the internal
> locking done by EvalPlanQual.  Haven't traced through it.
>

Is there anything I could tell the developer to do (he's on Mac) so he
could provide more information?

depesz


Re: [GENERAL] deadlock in single-row select-for-update + update scenario? How could it happen?

2014-08-25 Thread hubert depesz lubaczewski
On Fri, Aug 22, 2014 at 8:33 PM, Adrian Klaver 
wrote:

> Not sure, just the combination of parallel operations and remote
> connections seemed to be an avenue to explore. Given that everything is
> local, turns out it was dead end.
> Looking at the pastebin log again, am I reading it right that the first
> process actually COMMITs properly?
> Also is there a trigger in the mix that might be fouling things up?
>

Please note that the pastebin log is split by backend pid, and only in
backend-pid groups sorted by timestamp.

66014 started transaction later, and committed, while 66017, which started
transaction earlier, and actually obtained lock earlier - got killed by
deadlock resolution.

There are no triggers aside from some (~10) fkeys.

depesz


Re: [GENERAL] Query planner question

2014-08-25 Thread Soni M
Here's the explain analyze result: http://explain.depesz.com/s/Mvv and
http://explain.depesz.com/s/xxF9

It seems that I need to dig more into the query planner parameters.

BTW, thanks all for the help.


On Sat, Aug 23, 2014 at 4:33 PM, Alban Hertroys  wrote:

> On 23 Aug 2014, at 4:34, Soni M  wrote:
> > On Fri, Aug 22, 2014 at 9:10 PM, Alban Hertroys 
> wrote:
> > On 22 August 2014 14:26, Soni M  wrote:
> > > Currently we have only latest_transmission_id as FK, described here :
> > > TABLE "ticket" CONSTRAINT "fkcbe86b0c6ddac9e" FOREIGN KEY
> > > (latest_transmission_id) REFERENCES transmission_base(transmission_id)
> > >
> > > Change the query to include only FK still result the same:
> > > explain select t.ticket_id ,
> > > tb.transmission_id
> > > from ticket t,
> > > transmission_base tb
> > > where t.latest_transmission_id = tb.transmission_id
> > > and tb.parse_date > ('2014-07-31');
> > > QUERY PLAN
> > >
> --
> > >  Hash Join  (cost=113928.06..2583606.96 rows=200338 width=8)
> > >Hash Cond: (t.latest_transmission_id = tb.transmission_id)
> > >->  Seq Scan on ticket t  (cost=0.00..1767767.26 rows=69990826
> width=8)
> > >->  Hash  (cost=108923.38..108923.38 rows=400374 width=4)
> > >  ->  Index Scan using transmission_base_by_parse_date on
> > > transmission_base tb  (cost=0.00..108923.38 rows=400374 width=4)
> > >Index Cond: (parse_date > '2014-07-31
> 00:00:00'::timestamp
> > > without time zone)
> > > (6 rows)
> >
> > Do you have an index on ticket (latest_transmission_id)?
> >
> > Yes, both t.latest_transmission_id and tb.transmission_id is indexed.
> >
> > Indexes:
> > "transmission_base_pkey" PRIMARY KEY, btree (transmission_id) CLUSTER
> > Indexes:
> > "ticket_by_latest_transmission" btree (latest_transmission_id)
>
> Okay, so we got those indexes. So much for the low-hanging fruit.
>
> From the above plan we learn that the database estimates[1] that 400k rows
> from transmission match your condition (parse_date > '2014-07-31’). The
> ticket table has a foreign key to that table, which suggests a 1:n
> relationship. It also has around 70M rows, or at least the database seems
> to think that about that amount will match those 400k transmissions.
>
> That means that if on average 175 (=70M/400k) ticket ID’s match a
> transmission ID, the database would be needing all those 70M rows anyway -
> and even if it only needs every 175th row, a sequential scan is not a
> particularly inefficient way to go about this.
> The alternative is a whole lot of index lookups, probably not in the same
> order as either the index or the rows on disk, meaning quite a bit of
> random disk I/O.
>
> I’m suspecting that the cost estimates for this query with seq-scans
> disabled aren’t very different, provided doing so comes up with a
> comparable plan.
>
> Things you might want to verify/try:
> * Are those estimated numbers of rows accurate? If not, is autovacuum (or
> scheduled vacuum) keeping up with the amount of data churn on these tables?
> Do you collect a sufficiently large sample for the statistics?
> * How much bloat is in these tables/indexes?
> * Did you change planner settings (such as disabling bitmap scans; I kind
> of expected one here) or did you change cost estimate parameters?
> * Does it help to put an index on transmission (parse_date,
> transmission_id)?
> * If none of that helps, we’re going to need the output of explain analyze
> - that will probably take long to create, so you might as well start with
> that and do the other stuff at the side.
>
> What kind of hardware are these disks on? Is it possible that disk I/O on
> this particular machine is relatively slow (relative to the seq/random cost
> factor for disk access as specified in your postgresql.conf)?
>
> Cheers,
>
> Alban Hertroys
>
> [1] You did not provide explain analyse output, so we only have estimates
> to work with.
> --
> If you can't see the forest for the trees,
> cut the trees and you'll find there is no forest.
>
>


-- 
Regards,

Soni Maula Harriz
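
For reference, the index Alban suggests in the quoted checklist would look something like this (the index name is illustrative):

```sql
create index transmission_base_parse_date_tid
    on transmission_base (parse_date, transmission_id);
```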


[GENERAL] Way to identify the current session's temp tables within pg_class ?

2014-08-25 Thread Marc Mamin
Hello,

When different sessions create temp tables with the same name:
How can I identify the oid of the one created within the current session ?

Thanks,

Marc Mamin