Re: [Architecture] [APIM][C5] Subresource access permissions in store

2017-05-19 Thread Sanjeewa Malalgoda
My idea was different: "Can anyone point me to any site/forum which allows
you to edit others' comments (*not* approve/reject or *delete the entire
comment*)?". Support for deleting an entire comment definitely needs to be
there. No doubt about that.

Thanks,
sanjeewa.
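The delete-only moderation rule argued for in this thread (owners edit their own comments; a privileged role may delete, but never edit, others' comments) could be sketched roughly as below. The class, the `api_update` permission name, and the method shapes are illustrative assumptions, not the actual APIM model.

```java
import java.util.Objects;
import java.util.Set;

// Rough sketch of the comment moderation rules discussed in this thread.
// The permission name "api_update" and these signatures are assumptions
// for illustration only.
public class CommentPermissions {

    // Only the comment's author may edit its text.
    public static boolean canEdit(String user, String commentOwner) {
        return Objects.equals(user, commentOwner);
    }

    // The author may delete their own comment; a moderator (modeled here
    // as any user holding update permission on the API, echoing the
    // AM_API_PERMISSION idea elsewhere in the thread) may delete any
    // comment, but still cannot edit it.
    public static boolean canDelete(String user, String commentOwner,
                                    Set<String> apiPermissions) {
        return Objects.equals(user, commentOwner)
                || apiPermissions.contains("api_update");
    }
}
```

A workflow (approval) state machine could be layered on top of this, but as noted below, a simple role check like this is much less to implement.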

On Fri, May 19, 2017 at 12:01 PM, Fazlan Nazeem  wrote:

> Hi Sanjeewa,
>
> On Facebook, if someone posts a comment on our post, we have the
> permission to delete that comment even though it was not created by us.
>
> In a similar manner, shouldn't we at least support delete-comment
> permission for a moderator role (API owner or a configurable moderator role)?
>
> On Fri, May 19, 2017 at 11:22 AM, Sanjeewa Malalgoda 
> wrote:
>
>> Can anyone point me to any site/forum which allows you to edit others'
>> comments (not approve/reject or delete the entire comment)? I'm just
>> curious :) Think about what happens when someone comments on your blogs,
>> media, etc. (or even the product comments of most common e-commerce
>> platforms). It goes through an approval process or a filter and is then
>> published.
>>
>> If we allow someone to edit others' comments, it's a crime IMO :). Hence
>> -1 for letting someone edit/update others' comments.
>>
>> Thanks,
>> sanjeewa.
>>
>>
>>
>>
>> On Fri, May 19, 2017 at 11:04 AM, Nuwan Dias  wrote:
>>
>>> I think standard forums allow privileged users to moderate comments.
>>> Moderation can be in the form of approving/rejecting comments or of
>>> removing obscene comments.
>>>
>>> If we go down the workflow (approval) path, there's much to implement.
>>> E.g., we need to introduce a "state" for the comment, and implement a
>>> workflow, a callback mechanism, workflow cleanup, etc. But if we just
>>> write a piece of code which allows a pre-configured role to remove or
>>> edit comments, I think the implementation is much simpler.
>>>
>>> On Fri, May 19, 2017 at 10:52 AM, Sanjeewa Malalgoda 
>>> wrote:
>>>


 On Fri, May 19, 2017 at 10:43 AM, Bhathiya Jayasekara <
 bhath...@wso2.com> wrote:

> Hi Sanjeewa,
>
> On Thu, May 18, 2017 at 5:09 PM, Sanjeewa Malalgoda wrote:
>
>> I don't think it's worth having a complete permission model for comments
>> either. Like Bhathiya mentioned, only the comment owner is allowed to
>> update/delete his comment. That is the normal behavior. Also, I feel it's
>> better if we can have workflow support for comments (disabled by
>> default). Once something is posted, it's not nice to let someone else
>> modify it.
>>
>
> An admin or similar role might need moderator capability for comments.
> WDYT? To support that we can easily use the existing permission model.
>
 I haven't seen a single website or forum which allows one person to edit
 others' comments. :) The only thing an admin or another person can do is
 approve/reject a comment or delete it. That means workflow + delete
 permission for the super user only. WDYT?

 Thanks,
 sanjeewa.


> Thanks,
> Bhathiya
>
>
>>
>> Thanks,
>> sanjeewa.
>>
>> On Thu, May 18, 2017 at 11:20 AM, Fazlan Nazeem 
>> wrote:
>>
>>> Hi Nuwan/Bhathiya,
>>>
>>> On Tue, May 9, 2017 at 10:19 AM, Nuwan Dias  wrote:
>>>
 I think what Bhathiya is suggesting is to bring in our usual
 permissions model (in APIM 3.0.0) to comments as well. This will 
 require
 more data to be saved in the DB but will address the issue at hand.


>>> Are you suggesting we should have an extra column in the Comments table
>>> in the db to fulfill this?
>>>
>>> I am thinking of using the AM_API_PERMISSION column introduced in the
>>> AM_API table to decide who can update/delete a specific comment. Apart
>>> from the user who commented, if a user can delete or update an API he
>>> can delete/update any comment of that API. This way we do not need to
>>> save extra information in the db, and can use the API permission
>>> information to control the comment actions.
>>>
>>> There are two levels of permissions required here. One is "who can
 add/update/remove comments in general" and the other is "whose 
 comments can
 I update/remove".

 The first can be simply achieved using scopes. Basically say which
 scope is permitted for the /comments API. The second can be solved by 
 using
 our usual permission model in APIM v3.0.0.

 On Tue, May 9, 2017 at 10:10 AM, Ayyoob Hamza 
 wrote:

> This won't tackle the problem Musthaq suggested which requires
> validation in the backend.
>
> *Ayyoob Hamza*
> *Senior Software Engineer*
> WSO2 Inc.; http://wso2.com
> email: ayy...@wso2.com cell: +94 77 1681010

Re: [Architecture] [APIM][C5] Subresource access permissions in store

2017-05-19 Thread Bhathiya Jayasekara
Here are some common examples:

https://help.github.com/articles/editing-a-comment/
https://en.support.wordpress.com/manage-comments/

Thanks,
Bhathiya


Re: [Architecture] [MB] Best Approach to write unit tests for DAO Layer ?

2017-05-19 Thread Dharshana Warusavitharana
Hi Fazlan,

By using docker you are writing an integration test, not a unit test. If
you are taking the integration approach with docker, it's a waste to test
with in-memory databases. Just use a real database like MySQL.

But still, if you need to validate the DAO at the unit layer (at component
build time) you need to mock these database layers.

The importance of having a unit test layer is that you can validate any
issue before it reaches the feature and is bundled with the product. Once
it is bundled with the product, the cost of a fix is high. So the point of
the unit test is eliminating that.

So if you are talking about unit tests, they must at least cover this
aspect. Otherwise, forget this test layer and write end-to-end integration
tests using docker or whatever. But those are integration tests and carry
the cost I mentioned above.

Thank you,
Dharshana.
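A minimal sketch of what "mock these database layers" can look like at the unit level: the code under test depends only on a DAO interface, and the unit test substitutes an in-memory implementation, so no database is involved. All names below are illustrative, not the actual MB code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Illustrative DAO interface; the real one would be backed by JDBC.
interface MessageDao {
    Optional<String> findById(String id);
    void save(String id, String payload);
}

// In-memory stand-in used only by unit tests, so no database is needed.
class InMemoryMessageDao implements MessageDao {
    private final Map<String, String> store = new HashMap<>();
    public Optional<String> findById(String id) {
        return Optional.ofNullable(store.get(id));
    }
    public void save(String id, String payload) {
        store.put(id, payload);
    }
}

// Code under test: depends only on the interface, never on a concrete DB.
class MessageService {
    private final MessageDao dao;
    MessageService(MessageDao dao) { this.dao = dao; }
    void write(String id, String payload) { dao.save(id, payload); }
    String read(String id) { return dao.findById(id).orElse("<missing>"); }
}
```

The JDBC implementation of `MessageDao` would then be exercised separately by integration tests against real (e.g. dockerized) databases, as described in this thread.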

On Fri, May 19, 2017 at 11:53 AM, Malaka Gangananda 
wrote:

> Hi,
>
> Actually, JavaDB does have network drivers [1].
>
> [1] http://db.apache.org/derby/papers/DerbyTut/ns_intro.html
>
> Thanks,
>
> On Fri, May 19, 2017 at 11:35 AM, Uvindra Dias Jayasinha wrote:
>
>> FYI let me give some details regarding how we are testing the APIM DAO
>> layer for C5.
>>
>> 1. The DAO layer is an interface that the rest of our code interacts with
>> in order to store and retrieve data. We mock the DAO layer and can control
>> its behaviour to unit test how the rest of our code behaves when
>> interacting with it.
>>
>> 2. The implementation of the DAO interface will actually be communicating
>> with the database. Since this is the case, unit testing the DAO
>> implementation does not give much of a benefit. So when it comes to
>> testing the actual DAO implementation, we run automated integration tests
>> with various DB docker images running (we test against H2, MySQL, Oracle,
>> PostgreSQL, SQL Server).
>>
>>
>> I believe trying to unit test the DAO implementation will only give you a
>> false sense of security. You are better off doing actual integration
>> tests for these.
>>
>>
>> On 19 May 2017 at 10:53, Sanjiva Weerawarana  wrote:
>>
>>> I didn't realize there was a version of Derby in the JDK! Yes we should
>>> support it as a real DB now and can we even use it in production?? That
>>> would be awesome as it'll reduce complexity for smaller deployments - just
>>> download and run.
>>>
>>> Earlier IIRC Derby didn't have networked drivers and therefore couldn't
>>> be set up for simple 2-node HA. If that has changed that's great.
>>>
>>> Sanjiva.
>>>
>>> On Fri, May 19, 2017 at 9:31 AM, Asanka Abeyweera 
>>> wrote:
>>>
 Does this mean we are adding Derby to the list of supported RDBMS for
 MB 4.0.0?

 On Fri, May 19, 2017 at 9:05 AM, Pumudu Ruhunage 
 wrote:

> Can we consider JavaDB (Derby) [1], which is part of the JDK? Since it's
> shipped with the JDK, it's more suitable for unit tests than going for
> external databases/frameworks.
> Since we are not using any vendor-specific SQL in the DAO, it should
> support all required SQL syntax without any issue.
>
> [1] http://www.oracle.com/technetwork/java/javadb/overview/j
> avadb-156712.html
>
> Thanks,
>
> On Fri, May 19, 2017 at 8:11 AM, Pamod Sylvester 
> wrote:
>
>> (+) Adding @architecture
>>
>> On Thu, May 18, 2017 at 11:34 AM, Asanka Abeyweera wrote:
>>
>>> Are we planning to use stored procedures? If yes better to use a
>>> framework that is flexible enough.
>>>
>>> On Thu, May 18, 2017 at 10:59 AM, Ramith Jayasinghe wrote:
>>>
 if you want to mess with the database/data, this is the lib for
 that (regardless of the test type).

 On Thu, May 18, 2017 at 10:48 AM, Manuri Amaya Perera <
 manu...@wso2.com> wrote:

> @Hasitha Actually that was for integration tests. I guess Ramith's
> suggestion would be better for unit tests. When writing integration 
> tests
> we could look into the possibility of having containerized databases.
>
> Thanks,
> Manuri
>
> On Thu, May 18, 2017 at 10:42 AM, Ramith Jayasinghe <
> ram...@wso2.com> wrote:
>
>> I propose using http://dbunit.sourceforge.net.
>> It has an easy API, and allows you to insert data into the database
>> before the test and then clean up, etc.
>>
>>
>> On Thu, May 18, 2017 at 10:40 AM, Fazlan Nazeem wrote:
>>
>>>
>>>
>>> On Thu, May 18, 2017 at 10:39 AM, Hasitha Hiranya <
>>> hasit...@wso2.com> wrote:
>>>
 Hi Manuri,

 Was this approach taken for unit tests or integration tests?

 Thanks

>>>
>>> This approach was taken for integration testing in APIM.
>>>
>>> For unit testing we are using Mockito framework for mocking out
>>>

Re: [Architecture] [MB] Best Approach to write unit tests for DAO Layer ?

2017-05-19 Thread Fazlan Nazeem
Hi Dharshana,

For unit testing, the correct approach is to mock the DB layer, as you have
pointed out. But that alone is not sufficient for DAO testing. We need
integration tests that validate that our code integrates fine with the
database. For this, the Docker approach is beneficial because it provides
the ability to test against several databases.




Re: [Architecture] [MB] Best Approach to write unit tests for DAO Layer ?

2017-05-19 Thread Rajith Roshan
Hi,


On Fri, May 19, 2017 at 12:56 PM, Dharshana Warusavitharana <
dharsha...@wso2.com> wrote:

> Hi Fazlan,
>
> By using docker you are writing an Integration test, not a unit test. If
> you are using integration approach with docker its waste to test with
> in-memory databases. Just use read database like mysql.
>
> But still, if you need to validate DAO in unit layer (at component build
> time) you need to mock these database layers.
>
I think there is slight confusion between unit tests and integration tests.
In APIM we mock the DAO interface; those are our unit tests. When we use
docker images, the DAO layer does get tested against the actual DB. This is
also done at component build time. As in C4, these are not run while the
product is running; they are executed while the components are building.
Since these tests actually run against a database, they are not unit tests;
they are integration tests used at component build time.

Thanks!
Rajith


Re: [Architecture] Force Delete Identity Providers

2017-05-19 Thread Farasath Ahamed
Another aspect to consider when we delete an IdP is the user accounts
associated with that IdP [1]. I think we can also show the number of
affected user accounts (accounts associated with the IdP being deleted) in
the warning screen, as proposed by Malithi.

I too think it would be better to have a discussion on this, to identify
all the cases affected by IdP deletion and decide on the best approach to
handle them.


[1] https://docs.wso2.com/display/IS530/Associating+User+Accounts

Farasath Ahamed
Software Engineer, WSO2 Inc.; http://wso2.com
Mobile: +94777603866
Blog: blog.farazath.com
Twitter: @farazath619 
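The "show the impact before force delete" idea above could be sketched as a pre-deletion report. The class and the map-based lookups below stand in for real queries over SP configurations and federated account associations; they are assumptions for illustration, not the actual IS data model.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical pre-deletion impact report for an IdP force delete,
// gathering what the confirmation screen would display.
public class IdpDeletionReport {

    // SPs referencing the IdP in an authentication step or as an
    // outbound provisioning connector (represented here as a simple map
    // from SP name to the IdPs it references).
    public static List<String> affectedServiceProviders(
            String idp, Map<String, List<String>> spToReferencedIdps) {
        List<String> affected = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : spToReferencedIdps.entrySet()) {
            if (e.getValue().contains(idp)) {
                affected.add(e.getKey());
            }
        }
        return affected;
    }

    // Number of local accounts associated with the IdP, for the warning
    // screen proposed in this thread.
    public static long affectedUserAccounts(
            String idp, Map<String, String> accountToIdp) {
        return accountToIdp.values().stream()
                .filter(idp::equals)
                .count();
    }
}
```

Both results would be shown when requesting confirmation, and recorded in the audit logs once the force delete is performed.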




On Fri, May 19, 2017 at 10:05 AM, Malithi Edirisinghe 
wrote:

>
>
> On Fri, May 19, 2017 at 9:19 AM, Ishara Karunarathna 
> wrote:
>
>>
>>
>> On Fri, May 19, 2017 at 1:15 AM, Malithi Edirisinghe 
>> wrote:
>>
>>> Hi All,
>>>
>>> So in order to support force deleting an identity provider, we have to
>>> first identify the places where the respective identity provider can be
>>> referred to, and then decide on the options we have for removing those
>>> references.
>>> Basically, an identity provider is referred to by a service provider in
>>> authentication steps and/or as an outbound provisioning connector. So I
>>> think we have the options below.
>>>
>>> 1. In authentication steps
>>> - If the respective IdP is the only step configured:
>>> Here we can simply remove it and set the local and outbound config of
>>> the default SP (even when there's no local and outbound config, the
>>> default SP config is picked).
>>> - If the IdP is configured as one step among multiple authentication steps:
>>> Unless it's specifically configured to be used to pick the subject
>>> identifier or subject attributes, we can simply remove it. If it is
>>> configured to pick the subject identifier or attributes, we can follow a
>>> pattern like configuring the immediately adjacent step to pick them. So,
>>> if it's the last step, start from the first step.
>>> - If the IdP is configured as a multi option in any step:
>>> Here we can simply remove it, so the step keeps only the rest of the
>>> options.
>>>
>>> 2. In outbound provisioning
>>> Here we can simply remove the reference to the IdP as an outbound
>>> provisioning connector.
>>> Yet, whatever we do, it should happen upon the user's confirmation to
>>> force delete, and the way the IdP is removed from SP references should
>>> be properly recorded in the audit logs. In addition, I think it's better
>>> if we can notify the user about which SPs are affected, with some
>>> related information.
>>> Also, when asking the user for confirmation to force delete, it would be
>>> better to indicate how many SPs will be affected.
>>> At the moment we restrict IdP deletion by checking for references with
>>> [1]. So I think we can simply introduce a similar method in the service
>>> API that checks the SPs being referenced, to be invoked when requesting
>>> confirmation. Then, upon confirmation, the deletion can be performed as
>>> above.
>>>
>> I think it's hard to provide a generic solution; customers may have
>> different use cases and customizations around this. Automatically
>> deleting these references can be a risk.
>> Even if we delete them automatically, customers may have to go back and
>> modify SP configurations accordingly.
>>
>>
> It's the customer's decision whether to force delete or not. We can
> highlight the consequences. As I said above, it's important to record the
> SPs affected and how they are affected. IMO, we should generate a report
> and notify, so that if someone decides to force delete, before proceeding
> he knows which configured SPs will be affected and the consequences, and
> after performing it he knows what was affected and what he may have to
> reconfigure.
>
>
>>
>>
>>
>>>
>>> [1]  https://github.com/wso2/carbon-identity-framework/blob/mast
>>> er/components/idp-mgt/org.wso2.carbon.idp.mgt/src/main/java/
>>> org/wso2/carbon/idp/mgt/dao/IdPManagementDAO.java#L1759
>>>
>>> Thanks,
>>> Malithi.
>>>
>>> On Thu, May 18, 2017 at 1:22 PM, Prabath Siriwardena 
>>> wrote:
>>>


 On Thu, May 18, 2017 at 12:09 AM, Ishara Karunarathna wrote:

> Hi,
>
> On Wed, May 17, 2017 at 10:14 PM, Prabath Siriwardena <
> prab...@wso2.com> wrote:
>
>> At the moment we can't delete an identity provider if it's associated
>> with one or more service providers.
>>
>> Also, for the user there is no way to find out the associated service
>> providers for a given identity provider without going through each and
>> every service provider config.
>>
>> This is fine (or just okay) if we have 2 or 3 service providers in the
>> system, but that's not the case today.
>>
>> Can we provide a feature to force delete an identity provider? If not at
>> the UI, at least at the API level..
>>
> There are some issues if

Re: [Architecture] {APIM 3.0.0} Allowing admin user to customize Product REST APIs.

2017-05-19 Thread Ishara Cooray
Considering the above discussion and the offline chat we had, it was
concluded that we allow the default resource-to-scope mapping to remain in
the Swagger doc, and move whatever resource-to-scope mappings a user needs
to deployment.yaml, which is the global configuration file for the product.

This avoids the problems encountered if the swagger is opened up for
editing, and it does not have the overhead of adding multiple configuration
files.

We can define separate namespaces for publisher/store/admin in
deployment.yaml. In each, we can define configs for the resource-to-scope
mapping.

We can then read the configs in deployment.yaml by accessing the
configuration map itself via the below method in
org.wso2.carbon.kernel.configprovider.*ConfigProvider*:

public Map getConfigurationMap(String namespace)
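Assuming the namespaced map returned by `getConfigurationMap` holds the admin's resource-to-scope overrides, the effective mapping could be resolved by overlaying it on the swagger defaults. This is a sketch: the namespace name, the map shapes, and the resolver class are assumptions, not the actual APIM implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of overlaying deployment.yaml overrides (e.g. read from a
// namespace like "wso2.apim.publisher" via ConfigProvider) on the
// default resource-to-scope mapping taken from the Swagger doc.
public class ScopeMappingResolver {

    public static Map<String, String> effectiveScopes(
            Map<String, String> swaggerDefaults,
            Map<String, String> deploymentYamlOverrides) {
        // Start from the swagger defaults and let any entry configured
        // in deployment.yaml take precedence.
        Map<String, String> effective = new HashMap<>(swaggerDefaults);
        effective.putAll(deploymentYamlOverrides);
        return effective;
    }
}
```

With this shape, an untouched deployment.yaml namespace leaves the swagger defaults in force, which is what makes the override file optional.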


Thanks & Regards,
Ishara Cooray
Senior Software Engineer
Mobile : +9477 262 9512
WSO2, Inc. | http://wso2.com/
Lean . Enterprise . Middleware

On Tue, May 16, 2017 at 9:01 AM, Lakmali Baminiwatta 
wrote:

>
>
> On 15 May 2017 at 14:41, Nuwan Dias  wrote:
>
>> What is the benefit of using operationId instead of the resource path? In
>> the current Swagger we do not have operationIds defined right?
>>
>
> Since it will be just a string, it is less error-prone than defining the
> resource path, and even when the resource path changes in a major release,
> we can keep the operationId unchanged. Yeah, we don't have them defined
> right now.
>
> If we introduce operationIds just for this use case, as developers we need
>> to spend time thinking of operationIds per resource, make sure we don't
>> duplicate operationIds and more importantly never change operationIds in
>> newer versions :).
>>
>
> Through swagger validation we can ensure it is not duplicated. However, as
> you said, we don't have a way to ensure no one changes them in new releases.
>
> To me at least something in the form of [http_method] [path] [scope], ex:
>> POST /apis foo, is more natural than artificially built operationIds. So
>> unless there's a huge benefit in using operationIds vs resource paths, I
>> think we should just stick to the resource paths.
>>
>
> If we use operationIds, they should be self-descriptive and not just
> unique strings. However, I am OK with the resource path. It is just that I
> think defining an operationId would be easier for users.
>
>>
>> Regarding scope to role mapping, that is only required for the Key
>> Manager (IS). Since it is the KM who issues and validates tokens, this
>> mapping is only required by the KM AFAIU.
>>
>
>> On Thu, May 11, 2017 at 3:14 PM, Lakmali Baminiwatta 
>> wrote:
>>
>>> If we are to avoid migration of modified scopes, I think we have to go
>>> with the second approach of defining the resource-to-scope mapping in an
>>> optional config file. However, rather than defining the resource path in
>>> this file, how about using a unique identifier per operation? In swagger,
>>> we can define an *operationId* per operation, which must be unique
>>> [1][2]. This way, even if a resource path changes in a major release,
>>> *operationId* won't change.
>>>
>>> BTW we also have to allow configuring the scope-to-role mapping.
>>>
>>> [1] http://swagger.io/specification/
>>> [2] http://petstore.swagger.io/v2/swagger.json
>>>
>>> Thanks,
>>> Lakmali
>>>
>>> On 10 May 2017 at 11:26, Nuwan Dias  wrote:
>>>
 So it seems Sanjeewa's and my view points are clear on this.

 1. Sanjeewa basically says let users (sys-admins) edit the Swagger file
 that defines the product REST API. The objective is to avoid duplicating
 resource-to-scope mappings elsewhere.

 2. I basically say maintain an optional config file so that users
 (sys-admins) can declare only the resource-to-scope mappings they want to
 override in that file. The objective is to separate user configs from
 product configs, to minimize the risk of someone playing with the product
 API.

 The pros and cons of each approach have been discussed throughly. So we
 basically need more ideas from others now. Either better solutions or a
 preference towards one of the suggested ones.
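As a sketch of how option 2's override semantics could work (all names here are made up, not the actual implementation): the product ships its default resource-to-scope mappings, the optional user config overrides only what it declares, and anything unrecognized or malformed is silently ignored, so a bad config entry cannot break the product API:

```python
# Hypothetical defaults the product would ship with.
DEFAULT_SCOPES = {
    "POST /apis": "apim:api_create",
    "GET /apis": "apim:api_view",
}

def effective_scopes(overrides):
    """Merge user overrides onto product defaults, ignoring bad entries."""
    merged = dict(DEFAULT_SCOPES)
    for resource, scope in (overrides or {}).items():
        # only accept overrides for resources the product actually exposes,
        # and only when the scope is a plain string
        if resource in DEFAULT_SCOPES and isinstance(scope, str):
            merged[resource] = scope
    return merged
```

With this shape, a missing file, an empty file, or even garbage keys all degrade to the shipped defaults, which is the "will not break by design" property discussed above.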

 On Wed, May 10, 2017 at 11:20 AM, Nuwan Dias  wrote:

> No Sanjeewa, in the method I'm proposing the system "will not break"
> even if someone goes and puts Japanese characters in the config file. That
> is by design.
>
> One design principle from 3.0.0 onwards is to have no migration script
> involved. In the method I'm proposing we avoid migration 100% (for this
> part). I personally think that is a huge gain.
>
> On Tue, May 9, 2017 at 8:52 PM, Sanjeewa Malalgoda 
> wrote:
>
>>
>>
>> On Tue, May 9, 2017 at 4:57 PM, Nuwan Dias  wrote:
>>
>>> Regarding adding entries to the config file, you don't need to even
>>> open the swagger file. What you need to do is to find the resource from 
>>> the
>>> docs and enter it into the config file. By expect

Re: [Architecture] [APIM][C5] Removing "Blocked" state from API lifecycle

2017-05-19 Thread Sanjeewa Malalgoda
One other issue I see with editing Ballerina or setting throttling tiers
is that business API owners need to handle that complexity.
Usually developers will develop the API up to some point and let business
owners handle it. Then they should be able to change the API life cycle, do
temporary blocking, etc. in a simple manner. As a business API owner or
system administrator I don't want to go and edit Ballerina. So I think it's
better to keep Blocked in the life cycle states.

Client applications will not be implemented to handle blocked situations.
But there are client applications which can handle throttle-out scenarios
and some other error codes. We should not mislead those clients.

Thanks,
sanjeewa.

On Fri, May 19, 2017 at 10:51 AM, Nuwan Dias  wrote:

> These APIs are consumed by Apps. Apps don't understand what "Blocked"
> means. If an API is blocked, an App will throw an error irrespective of
> what the error response is. I'm pretty sure no one writes an App expecting
> an API to be blocked.
>
> In that case the only users to whom this error response makes sense are
> the API testers who are going to test this API using tools like cURL
> during the period it is blocked. I think that is a very, very small user
> percentage and the API will soon be unblocked anyway. Therefore I still
> think it's a waste to burn "Blocked" as a standard state in the API
> Lifecycle, especially when we have many alternatives :).
>
> On Fri, May 19, 2017 at 10:42 AM, Ishara Cooray  wrote:
>
>> The provided workarounds for blocking an API are fine from a developer's
>> p.o.v.
>> But do they provide the proper end user experience?
>>
>> The end user (who is invoking the API) will not see the correct error
>> message unless a customized error message has been set for this blocking
>> scenario. Won't this introduce more work for the developer?
>>
>> It will be only a single click for the developer to make an API 'Blocked'
>> if it has the life cycle state, and the end user will also receive the
>> correct message.
>>
>> So from a UX p.o.v. I think having a Blocked state is better.
>>
>> wdyt?
>>
>> Thanks & Regards,
>> Ishara Cooray
>> Senior Software Engineer
>> Mobile : +9477 262 9512
>> WSO2, Inc. | http://wso2.com/
>> Lean . Enterprise . Middleware
>>
>> On Fri, May 19, 2017 at 9:49 AM, Nuwan Dias  wrote:
>>
>>> Blocking an API temporarily can be a valid scenario. And we already have
>>> 3 ways of doing it (1 for admin 2 for API developer). What I'm saying is
>>> that "Blocked" is never a standard state in any SDLC. So what's so special
>>> about an API LC? It is true that older versions of the product had this as
>>> a LC state, but I think it was wrong to have done that.
>>>
>>> @Lalaji, an API publisher has full control of his API. I don't think
>>> having a state called blocked and making it go through an approval adds a
>>> lot of value. Because there are many ways he can block his api, such as by
>>> changing the endpoint, changing the endpoint throttle limits, changing the
>>> code (ballerina). If I'm not approved to set a LC state as blocked, there
>>> are many other ways to block my API anyway. So I don't see it as a value
>>> addition.
>>>
>>> On Fri, May 19, 2017 at 9:37 AM, Lalaji Sureshika 
>>> wrote:
>>>
 Hi,

 If we remove the 'blocked' state from the API lifecycle and keep the
 other options [set throttling limit/ballerina config change] to do API
 blocking, we will lose the ability to set a workflow extension on the
 blocked state [e.g. scenario: acknowledge users that the API is temporarily
 blocked via a custom workflow]. Isn't it that with this we are going to
 limit a capability?

 Thanks;

 On Thu, May 18, 2017 at 3:44 PM, Lakshman Udayakantha <
 lakshm...@wso2.com> wrote:

> Hi,
>
> Don't we have extensible API lifecycle states in the C5 implementation?
> Any user who doesn't want this Blocked state can remove it from the state
> configuration, and anyone who wants it can keep it.
> WDYT?
>
> Thanks,
> Lakshman
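If the C5 lifecycle states are configurable as suggested above, a hypothetical state configuration (keys and state names illustrative only, not the actual C5 config) might look like:

```yaml
lifecycle:
  states: [Created, Prototyped, Published, Deprecated, Retired]
  # a deployment that wants the old behaviour could add the state back:
  # states: [Created, Prototyped, Published, Blocked, Deprecated, Retired]
```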
>
> On Thu, May 18, 2017 at 3:22 PM, Nuwan Dias  wrote:
>
>> If by any chance an API Developer wants to block his entire API
>> temporarily, he has two options.
>>
>> 1) Set the endpoint limit to 0req/min
>> 2) Use a temporary ballerina to send an error back to the customer.
>>
>> On Thu, May 18, 2017 at 12:06 PM, Sanjeewa Malalgoda <
>> sanje...@wso2.com> wrote:
>>
>>>
>>>
>>> On Wed, May 17, 2017 at 12:03 PM, Nuwan Dias 
>>> wrote:
>>>
 I agree that "Blocked" is never a standard state in any SDLC.
 Therefore I don't think its right to have a state called Blocked in 
 the API
 Lifecycle as well.

>>> There are existing users who heavily use this feature. If we are
>>> going to disable then we need to provide alternative. Lets think i'm API
>>> developer and i have my back end

Re: [Architecture] [APIM][C5] Removing "Blocked" state from API lifecycle

2017-05-19 Thread Lakmal Warusawithana
IMO, normally in an SDLC there is a state called MAINTENANCE, and all the
functionality described in this thread falls into that. It seems like we
have used the wrong word, BLOCKED, in previous versions. But from the users'
point of view they should be able to put an API into maintenance mode
without much effort.

On Fri, May 19, 2017 at 6:37 PM, Sanjeewa Malalgoda 
wrote:

> One other issue I see with editing Ballerina or setting throttling tiers
> is that business API owners need to handle that complexity.
> Usually developers will develop the API up to some point and let business
> owners handle it. Then they should be able to change the API life cycle,
> do temporary blocking, etc. in a simple manner. As a business API owner or
> system administrator I don't want to go and edit Ballerina. So I think
> it's better to keep Blocked in the life cycle states.
>
> Client applications will not be implemented to handle blocked situations.
> But there are client applications which can handle throttle-out scenarios
> and some other error codes. We should not mislead those clients.
>
> Thanks,
> sanjeewa.
>
> On Fri, May 19, 2017 at 10:51 AM, Nuwan Dias  wrote:
>
>> These APIs are consumed by Apps. Apps don't understand what "Blocked"
>> means. If an API is blocked, an App will throw an error irrespective of
>> what the error response is. I'm pretty sure no one writes an App expecting
>> an API to be blocked.
>>
>> In that case the only users to whom this error response makes sense
>> are the API testers who are going to test this API using tools like cURL
>> during the period it is blocked. I think that is a very, very small user
>> percentage and the API will soon be unblocked anyway. Therefore I still
>> think it's a waste to burn "Blocked" as a standard state in the API
>> Lifecycle, especially when we have many alternatives :).
>>
>> On Fri, May 19, 2017 at 10:42 AM, Ishara Cooray  wrote:
>>
>>> The provided workarounds for blocking an API are fine from a developer's
>>> p.o.v.
>>> But do they provide the proper end user experience?
>>>
>>> The end user (who is invoking the API) will not see the correct error
>>> message unless a customized error message has been set for this blocking
>>> scenario. Won't this introduce more work for the developer?
>>>
>>> It will be only a single click for the developer to make an API 'Blocked'
>>> if it has the life cycle state, and the end user will also receive the
>>> correct message.
>>>
>>> wdyt?
>>>
>>> Thanks & Regards,
>>> Ishara Cooray
>>> Senior Software Engineer
>>> Mobile : +9477 262 9512
>>> WSO2, Inc. | http://wso2.com/
>>> Lean . Enterprise . Middleware
>>>
>>> On Fri, May 19, 2017 at 9:49 AM, Nuwan Dias  wrote:
>>>
 Blocking an API temporarily can be a valid scenario. And we already
 have 3 ways of doing it (1 for admin 2 for API developer). What I'm saying
 is that "Blocked" is never a standard state in any SDLC. So what's so
 special about an API LC? It is true that older versions of the product had
 this as a LC state, but I think it was wrong to have done that.

 @Lalaji, an API publisher has full control of his API. I don't think
 having a state called blocked and making it go through an approval adds a
 lot of value. Because there are many ways he can block his api, such as by
 changing the endpoint, changing the endpoint throttle limits, changing the
 code (ballerina). If I'm not approved to set a LC state as blocked, there
 are many other ways to block my API anyway. So I don't see it as a value
 addition.

 On Fri, May 19, 2017 at 9:37 AM, Lalaji Sureshika 
 wrote:

> Hi,
>
> If we remove the 'blocked' state from the API lifecycle and keep
> the other options [set throttling limit/ballerina config change] to do API
> blocking, we will lose the ability to set a workflow extension on the
> blocked state [e.g. scenario: acknowledge users that the API is temporarily
> blocked via a custom workflow]. Isn't it that with this we are going to
> limit a capability?
>
> Thanks;
>
> On Thu, May 18, 2017 at 3:44 PM, Lakshman Udayakantha <
> lakshm...@wso2.com> wrote:
>
>> Hi,
>>
>> Don't we have extensible API lifecycle states in the C5
>> implementation? Any user who doesn't want this Blocked state can remove
>> it from the state configuration, and anyone who wants it can keep it.
>> WDYT?
>>
>> Thanks,
>> Lakshman
>>
>> On Thu, May 18, 2017 at 3:22 PM, Nuwan Dias  wrote:
>>
>>> If by any chance an API Developer wants to block his entire API
>>> temporarily, he has two options.
>>>
>>> 1) Set the endpoint limit to 0req/min
>>> 2) Use a temporary ballerina to send an error back to the customer.
>>>
>>> On Thu, May 18, 2017 at 12:06 PM, Sanjeewa Malalgoda <
>>> sanje...@wso2.com> wrote:
>>>

Re: [Architecture] [PET] Microsoft Dynamics CRM Connector

2017-05-19 Thread Malaka Silva
Hi Kanapriya,

This approach looks good. It won't be practical to support all the entity
types, and there can also be custom fields.

On Fri, May 19, 2017 at 11:33 AM, Kanapriya Kuleswararajan <
kanapr...@wso2.com> wrote:

> Hi All,
>
> In Microsoft Dynamics CRM, there is a method to create [1], update, delete
> entities, etc. But each entity has a different, dynamic set of
> parameters [2].
>
> Due to that, I'm now planning to implement the connector by getting the
> entity type and the required payload for that specific entity from the user.
>
> [1] https://msdn.microsoft.com/en-us/library/gg328090.aspx#bkmk_
> basicCreate
> [2] https://msdn.microsoft.com/en-us/library/mt607894.aspx#bkmk_Properties
>
> Any concern on this?
>
> Thanks
> Kanapriya
>
> Kanapriya Kuleswararajan
> Software Engineer | WSO2
> Mobile : - 0774894438
> Mail : - kanapr...@wso2.com
> LinkedIn : - https://www.linkedin.com/in/kanapriya-kules-94712685/
> 
>
> On Wed, May 17, 2017 at 2:46 PM, Kanapriya Kuleswararajan <
> kanapr...@wso2.com> wrote:
>
>> Hi All,
>>
>> I have planned to implement a Microsoft Dynamics Customer Relationship
>> Management (CRM) connector with the following Methods [1] for initial
>> version.
>>
>> Microsoft Dynamics CRM [2]  is now known as Microsoft Dynamics 365 for
>> Sales, Marketing, and Service. The Web API [3] which is new for Microsoft
>> Dynamics 365 (online & on-premises), provides a development experience that
>> can be used across a wide variety of programming languages, platforms, and
>> devices. The Web API implements the OData (Open Data Protocol), version
>> 4.0, an OASIS standard for building and consuming RESTful APIs over rich
>> data sources.
>> Here We have to register a Dynamics 365 app with Azure Active Directory
>> as mentioned in [4]
>> (i.e., the user must have a Microsoft Dynamics 365 (online) system user
>> account with the administrator role for the Microsoft Office 365 subscription).
>> So that it can connect to the Microsoft Dynamics 365 server, authenticate
>> using OAuth [5], and access the web services.
>>
>> [1]
>>
>>- Create - This example creates a new account entity. The response
>>OData-EntityId header contains the Uri of the created entity.
>>- Associate entities on create - To associate new entities to
>>existing entities when they are created and need to set the value of
>>single-valued navigation properties using the @odata.bind annotation.
>>- Create with data returned - All the data from the created record
>>will be returned with a status of 201 (Created).
>>- Retrieve - Returns data for an account entity instance with the
>>primary key value
>>- Retrieve specific properties - To retrieve the entities with the
>>specific property values
>>- Retrieve using an alternate key - If an entity has an alternate key
>>defined, then use the alternate key to retrieve the entity instead of the
>>unique identifier for the entity.
>>- Retrieve a single property value - To retrieve the value of a
>>single property for an entity,
>>- Update - Updates an existing account record with the accountid
>>value
>>- Update with data returned - To retrieve data from an entity you are
>>updating you can compose your PATCH request so that data from the created
>>record will be returned with a status of 200 (OK).
>>- Update a single property value - To update only a single property
>>value
>>- Delete a single property value - To delete the value of a single
>>property
>>- Upsert an entity - An upsert operation is exactly like an update.
>>The difference is that if the entity doesn’t exist it will be created. If
>>it already exists, it will be updated.
>>- Delete - To delete an entity
>>
>> [2] https://en.wikipedia.org/wiki/Microsoft_Dynamics_CRM
>> [3] https://msdn.microsoft.com/en-us/library/mt593051.aspx
>> [4] https://msdn.microsoft.com/en-us/library/mt622431.aspx
>> [5] https://docs.microsoft.com/en-us/azure/active-directory/deve
>> lop/active-directory-protocols-oauth-code
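As an illustration of the kind of call the connector would wrap, a "basic create" against the Web API [1] boils down to a POST to the entity set with OData headers, and the created record's Uri comes back in the OData-EntityId response header. A hypothetical Python sketch (the org URL and helper names are made up; this only builds the request and parses the header, it does not send anything):

```python
import json
import re

API_BASE = "https://example.crm.dynamics.com/api/data/v8.2"  # hypothetical org URL

def build_create_request(entity_set, payload, token):
    """Assemble the pieces of a Web API 'basic create' POST."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/{entity_set}",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
            "OData-MaxVersion": "4.0",
            "OData-Version": "4.0",
            "Accept": "application/json",
        },
        "body": json.dumps(payload),
    }

def entity_id_from_header(odata_entity_id):
    """Pull the entity GUID out of the OData-EntityId response header value."""
    match = re.search(r"\(([0-9a-fA-F-]{36})\)", odata_entity_id)
    return match.group(1) if match else None
```

Because the payload is passed straight through, the same helper covers any entity type, including ones with custom fields, which fits the dynamic-payload approach above.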
>>
>> Please let me know if you have any suggestions on this?
>>
>> Thanks
>> Kanapriya Kuleswararajan
>> Software Engineer | WSO2
>> Mobile : - 0774894438
>> Mail : - kanapr...@wso2.com
>> LinkedIn : - https://www.linkedin.com/in/kanapriya-kules-94712685/
>> 
>>
>
>


-- 

Best Regards,

Malaka Silva
Associate Director / Architect
M: +94 777 219 791
Tel : 94 11 214 5345
Fax :94 11 2145300
Skype : malaka.sampath.silva
LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
Blog : http://mrmalakasilva.blogspot.com/

WSO2, Inc.
lean . enterprise . middleware
https://wso2.com/signature
http://www.wso2.com/about/team/malaka-silva/

https://store.wso2.com/store/

Don't make Trees rare, we should keep them with care