Re: [GENERAL] Difficulty while acquiring LWLocks

2017-05-03 Thread hariprasath nallasamy
> AFAIK yes this is the correct way to use multiple lwlocks.
>

Thanks!

Just curious, is there any other way to do this?
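
(Not from this thread, just for completeness: the other route I'm aware of is to
put the LWLocks in an extension-owned shared memory struct and assign them a
tranche yourself. A rough sketch with made-up names follows; note that the exact
signature of the tranche-registration call varies between releases, so check
storage/lwlock.h for your version.)

    #include "postgres.h"
    #include "storage/lwlock.h"
    #include "storage/shmem.h"

    /* Sketch only: the lock array lives in an extension-owned shared struct. */
    typedef struct MySharedState
    {
        int    tranche_id;
        LWLock locks[10];
    } MySharedState;

    static MySharedState *my_state = NULL;

    static void
    my_shmem_startup(void)
    {
        bool found;

        LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
        my_state = ShmemInitStruct("my_extension_state",
                                   sizeof(MySharedState), &found);
        if (!found)
        {
            int i;

            my_state->tranche_id = LWLockNewTrancheId();
            for (i = 0; i < 10; i++)
                LWLockInitialize(&my_state->locks[i], my_state->tranche_id);
        }
        LWLockRelease(AddinShmemInitLock);

        /* Each backend registers the tranche name for its own use; the
         * registration call shown here is the newer one-name form. */
        LWLockRegisterTranche(my_state->tranche_id, "my_extension_locks");
    }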


[GENERAL] Difficulty while acquiring LWLocks

2017-05-03 Thread hariprasath nallasamy
Hi all
   There is a use case where I want some 10 LightWeight Locks, and after
9.6 the LWLock APIs (LWLockAssign) changed a bit, so I am a little confused.

 The only reference I could find was pg_stat_statements :(

As I understand it, GetNamedLWLockTranche() will return the base address of
the specified tranche.

From _PG_init():

    RequestNamedLWLockTranche("Some_10_LWLocks", 10);

and, for getting the locks that were requested in _PG_init():

    LWLockPadded *lwLockPadded = GetNamedLWLockTranche("Some_10_LWLocks");
    LWLock *lock = &(lwLockPadded[index in 0 to 9]).lock;

Is the above code snippet valid for requesting some 10 LWLocks?
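
(For context, here is a minimal sketch of how that request/lookup pair is
typically wired together in an extension loaded via shared_preload_libraries.
The tranche name follows the snippet above; the hook wiring and the commented
usage at the end are my assumptions, not something stated in this thread.)

    #include "postgres.h"
    #include "fmgr.h"
    #include "miscadmin.h"
    #include "storage/ipc.h"
    #include "storage/lwlock.h"

    PG_MODULE_MAGIC;

    static LWLockPadded *some_locks = NULL;
    static shmem_startup_hook_type prev_shmem_startup_hook = NULL;

    static void
    my_shmem_startup(void)
    {
        if (prev_shmem_startup_hook)
            prev_shmem_startup_hook();

        /* Look up the base address of the tranche requested in _PG_init(). */
        some_locks = GetNamedLWLockTranche("Some_10_LWLocks");
    }

    void
    _PG_init(void)
    {
        if (!process_shared_preload_libraries_in_progress)
            return;

        /* Ask the postmaster to set up 10 LWLocks under this tranche name. */
        RequestNamedLWLockTranche("Some_10_LWLocks", 10);

        prev_shmem_startup_hook = shmem_startup_hook;
        shmem_startup_hook = my_shmem_startup;
    }

    /* Later, in backend code:
     *   LWLockAcquire(&some_locks[i].lock, LW_EXCLUSIVE);
     *   ... touch the shared structure ...
     *   LWLockRelease(&some_locks[i].lock);
     */

pg_stat_statements follows the same pattern, which is probably why it was the
only reference you could find.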


TIA
 harry


Re: [GENERAL] too many LWLocks

2017-03-08 Thread hariprasath nallasamy
Oops, it was my bad implementation. I was leaking locks, and it's fixed now.
Thanks for the help!

-harry

On Thu, Mar 9, 2017 at 1:07 AM, Julien Rouhaud <julien.rouh...@dalibo.com>
wrote:

> On Wed, Mar 08, 2017 at 03:34:56PM +0530, hariprasath nallasamy wrote:
> > Hi all
> > I am building an extension that uses a shared memory hash table, and for
> > locking the hash table I am using LWLocks. When I try to run some 1k
> > queries one after another, I acquire one LWLock per query, but on
> > executing the 200th query I get the error: ERROR: too many LWLocks taken.
> >
> > But in each query I acquire and release that lock, so the lock should be
> > released after each query. Why am I getting this error?
> >
> > Is this because held_lwlocks in lwlock.c is fixed at 200?
> > Or am I missing something here?
>
> The most likely reason is that you have some code path in your extension
> where you don't release the LWLock.  Without access to the code we can't do
> much more to help you I'm afraid.  You could also try on a postgres build
> having LWLOCK_STATS defined.
>
> --
> Julien Rouhaud
> http://dalibo.com - http://dalibo.org
>
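
(For anyone hitting the same thing: the usual shape is to bracket every shared
hash-table access with an acquire/release pair so no code path returns with the
lock still held. A sketch with hypothetical names - MyEntry, my_hash, my_lock,
bump_counter are made up, not from the extension in question:)

    #include "postgres.h"
    #include "storage/lwlock.h"
    #include "utils/hsearch.h"

    typedef struct MyEntry
    {
        int64 key;
        int64 count;
    } MyEntry;

    static HTAB   *my_hash;   /* shared-memory hash table, set up elsewhere */
    static LWLock *my_lock;   /* one of the extension's LWLocks, set up elsewhere */

    static void
    bump_counter(int64 key)
    {
        bool     found;
        MyEntry *entry;

        LWLockAcquire(my_lock, LW_EXCLUSIVE);
        entry = (MyEntry *) hash_search(my_hash, &key, HASH_ENTER, &found);
        if (!found)
            entry->count = 0;
        entry->count++;
        LWLockRelease(my_lock);   /* without this on every path, held_lwlocks
                                   * fills up and the 200-lock limit is hit */
    }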


[GENERAL] too many LWLocks

2017-03-08 Thread hariprasath nallasamy
Hi all
I am building an extension that uses a shared memory hash table, and for
locking the hash table I am using LWLocks. When I try to run some 1k queries
one after another, I acquire one LWLock per query, but on executing the 200th
query I get the error: ERROR: too many LWLocks taken.

But in each query I acquire and release that lock, so the lock should be
released after each query. Why am I getting this error?

Is this because held_lwlocks in lwlock.c is fixed at 200?
Or am I missing something here?
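
(From memory - so treat the exact details as an assumption - the limit the
error message refers to comes from src/backend/storage/lmgr/lwlock.c, roughly:)

    /* Roughly how lwlock.c tracks the locks a backend currently holds;
     * field names may differ slightly between versions. */
    #define MAX_SIMUL_LWLOCKS  200

    typedef struct LWLockHandle
    {
        LWLock     *lock;
        LWLockMode  mode;
    } LWLockHandle;

    static int          num_held_lwlocks = 0;
    static LWLockHandle held_lwlocks[MAX_SIMUL_LWLOCKS];

    /* LWLockAcquire() raises "too many LWLocks taken" once num_held_lwlocks
     * reaches MAX_SIMUL_LWLOCKS, so leaking one lock per query trips the
     * limit around the 200th query. */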


thanks
harry


Re: [GENERAL] Incrementally refreshed materialized view

2016-09-26 Thread hariprasath nallasamy
We also tried to achieve incremental refresh of materialized views, though our
solution doesn't cover all of the use cases.

Players:
1) WAL
2) Logical decoding
3) Replication slots
4) Custom background worker

Two kinds of approaches:
1. Deferred refresh (Oracle-style: a log table is created for each base table,
holding its PK and the old and new values of the aggregated columns)
  a) A log table has to be created for each base table; this log table keeps
track of the delta changes.
  b) A UDF is called to refresh the view incrementally - it runs the original
materialized view query with the tracked delta PKs in its WHERE clause, so only
rows that were modified/inserted are touched (a rough sketch of this idea
follows the list below).
  c) The log table is populated with the changed rows from the data given by
the replication slot, which uses logical decoding to decode the WAL.
  d) Shared memory is used to maintain the relationship between the view and
its base tables. In case of a restart, the entries are pushed to a maintenance
table.

2. Real-time refresh (the view is updated whenever we get any change sets
related to its base tables)
  a) Delta data from the replication slot is applied to the view by checking
the relationship between the delta data and the view definition (see the
change-callback sketch below). Here too, shared memory and the maintenance
table are used.
  b) Work is completed only for materialized views over a single table.
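
A minimal, hypothetical sketch of the deferred-refresh UDF idea in C using SPI.
The function and table names (mv_refresh_deferred, my_matview, orders,
mv_log_orders) are made up for illustration and are not taken from the actual
extension:

    #include "postgres.h"
    #include "fmgr.h"
    #include "executor/spi.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(mv_refresh_deferred);

    Datum
    mv_refresh_deferred(PG_FUNCTION_ARGS)
    {
        SPI_connect();

        /* Recompute only the rows whose base-table PKs were recorded in the
         * log table, and upsert them into the view's storage table. */
        if (SPI_execute(
                "INSERT INTO my_matview (id, total) "
                "SELECT o.id, sum(o.amount) "
                "  FROM orders o "
                " WHERE o.id IN (SELECT pk FROM mv_log_orders) "
                " GROUP BY o.id "
                "ON CONFLICT (id) DO UPDATE SET total = EXCLUDED.total",
                false, 0) != SPI_OK_INSERT)
            elog(ERROR, "deferred refresh failed");

        /* Drop the consumed delta entries. */
        SPI_execute("DELETE FROM mv_log_orders", false, 0);

        SPI_finish();
        PG_RETURN_VOID();
    }

The delete of consumed log rows would of course need to be coordinated with
concurrent writers; the sketch is only meant to show the WHERE-clause
restriction idea.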
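
For the change-set capture side used by both approaches, the shape of a logical
decoding output plugin's change callback looks roughly like this. It is a
sketch only: the queueing of PKs is left as comments, and the real extension on
GitHub may structure this differently:

    #include "postgres.h"
    #include "fmgr.h"
    #include "replication/output_plugin.h"
    #include "replication/logical.h"

    PG_MODULE_MAGIC;

    extern void _PG_output_plugin_init(OutputPluginCallbacks *cb);

    static void my_begin(LogicalDecodingContext *ctx, ReorderBufferTXN *txn) {}
    static void my_commit(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
                          XLogRecPtr commit_lsn) {}

    static void
    my_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
              Relation relation, ReorderBufferChange *change)
    {
        /* Record the PK of the affected base-table row so the view can be
         * refreshed (deferred) or patched (real-time) later. */
        switch (change->action)
        {
            case REORDER_BUFFER_CHANGE_INSERT:
                /* change->data.tp.newtuple holds the inserted row */
                break;
            case REORDER_BUFFER_CHANGE_UPDATE:
                /* old and new versions are in change->data.tp */
                break;
            case REORDER_BUFFER_CHANGE_DELETE:
                /* change->data.tp.oldtuple holds the deleted row's key */
                break;
            default:
                break;
        }
    }

    void
    _PG_output_plugin_init(OutputPluginCallbacks *cb)
    {
        cb->begin_cb = my_begin;
        cb->commit_cb = my_commit;
        cb->change_cb = my_change;
    }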

Main disadvantages:
1) Data inconsistency on master failure, since the standby doesn't carry the
replication slot as of now. The 2ndQuadrant folks are trying to create slots on
the standby using the concept of failover slots, but that hasn't made it into
core PG :(.
2) Sum, count and avg are implemented for aggregates (single table); for other
aggregates a full refresh comes into play.
3) The right-join implementation requires more queries to run on top of the
MVs.

So we have a long way to go, and we don't know whether this is the right path.

Only the deferred refresh was pushed to GitHub:
https://github.com/harry-2016/MV_IncrementalRefresh

I wrote a post about it on Medium:
https://medium.com/@hariprasathnallsamy/postgresql-materialized-view-incremental-refresh-44d1ca742599


[GENERAL] Replication slot on master failure

2016-09-26 Thread hariprasath nallasamy
Hi all
   We are using a replication slot to capture some change sets in order to
update dependent tables.

   Will there be inconsistency if the master fails and the standby takes
over the role of master?


cheers
-harry