Re: [Dev] [Siddhi] non-matched or expired events in pattern query

2017-09-28 Thread Jayesh Senjaliya
Hi Gobinath,

Thanks for the suggestion about the absence pattern, but we won't be able to
upgrade to Siddhi 4 anytime soon.

I am basically at the point where I can get all the relevant (subscribe)
events that happened during the given interval after the first arrival of the
publish events.

Here, AllPublisher = all registered publisher-subscriber events (for each
publisher event):

from registeryTable as s join #publisher as p on p.pid == s.pid
select p.pid, s.sid insert into AllPublisher;


from every(a=AllPublisher) -> s=subscriber[pid == a.pid]
within 1 sec
select a.pid, s.sid
insert into completed_jobs_in_1_sec;

Now, I need to find out those events from publisher (a) that didn't have any
match from s within 1 sec.

I was expecting this to be available with "insert *expired events* into
not_completed_jobs", but it looks like expired events are only available when
a window is used. I am also looking at the code to see if I should add this.

Thanks
Jayesh

On Wed, Sep 27, 2017 at 4:21 AM, Gobinath  wrote:

> Hi,
>
> If you can use Siddhi 4 snapshot release, it can be done using the new
> feature 'Absent Pattern' added to Siddhi 4. The query to detect the events
> that do not match the condition within 10 seconds is given below:
>
> from every e1=publisher -> not subscriber[e1.pid == pid] for 10 sec
> select e1.pid
> insert into not_completed_jobs_in_time;
>
> The above query waits for 10 seconds from the arrival of every publisher
> event and if there is no subscriber event with an id satisfying the
> condition arrived within that waiting period, the id of the publisher event
> will be inserted into the not_completed_jobs_in_time stream.
>
> I guess the official document for Siddhi 4 is under construction. So you
> can find more details about absent pattern at [1]
>
> Still, Siddhi 4 is not production ready so I wonder whether you can use
> this feature or not.
>
> [1] http://www.javahelps.com/2017/08/detect-absence-of-events-wso2-siddhi.html
>
>
>
>
> On Tue, Sep 26, 2017 at 10:05 PM, Jayesh Senjaliya 
> wrote:
>
>> Hi Grainier,
>>
>> Ya, I came across that example page, but I think that does not work in my
>> use-case, which is as follows.
>>
>> I have a publish event followed by multiple subscribe events for the same
>> publish job.
>> Now I want to check whether certain jobs (publish -> subscribe) have been
>> finished within 10 sec.
>> I have all the registered jobs in a db table, which I use to gather all the
>> required publish-subscribe job events.
>>
>> define table jobTable( pid string, sid string);
>> define stream pubStream (pid int, status string);
>> define stream subStream (pid int, sid int, status string);
>>
>> -- this will get all the publish-> subscribe jobs events as master list
>> from pubStream as p join jobTable as t
>> on p.pid == t.pid
>> select p.pid, t.sid insert into allPSJobs;
>>
>> -- this is where I need to do the intersection: if a subStream event is
>> seen within 2 sec, remove it from the master list (allPSJobs); if not,
>> include it in not_completed_jobs_in_time
>>
>> from every ( a=allPSJobs ) -> s= subStream[sid == a.sid and pid==a.pid ]
>> within 2 sec
>> select s.pid, s.sid insert into completed_jobs_in_time;
>>
>>
>> Hope that makes sense of what I am trying to do.
>>
>> Thanks
>> Jayesh
>>
>> On Mon, Sep 25, 2017 at 8:39 AM, Grainier Perera 
>> wrote:
>>
>>> Hi Jay,
>>>
>>> You can try something similar to this to get non-matched events during
>>> the last 10 secs. You can find some documentation on this as well; link
>>> 
>>>
>>>
>>>
 define stream publisher (pid string, time string);
 define stream subscriber (pid string, sid string, time string);
>>>
>>>
 from publisher#window.time(10 sec)
 select *
 insert expired events into expired_publisher;
>>>
>>>
 from every pub=publisher -> sub=subscriber[pub.pid == pid] or
 exp=expired_publisher[pub.pid == pid]
 select pub.pid as pid, pub.time as time, sub.pid as subPid
 insert into filter_stream;
>>>
>>>
 from filter_stream [(subPid is null)]
 select pid, time
 insert into not_seen_in_last_10_sec_events;
>>>
>>>
>>> Moreover, I didn't get what you meant by "also is there a way to perform
>>> intersection of events based on grouping or time window ?" can you please
>>> elaborate on this?
>>>
>>> Regards,
>>>
>>> On Mon, Sep 25, 2017 at 11:02 AM, Jayesh Senjaliya 
>>> wrote:
>>>
 Hi,

 is there a way to get events that didn't match within the given time
 frame?

 for example:

 define stream publisher (pid string, time string);
 define stream subscriber (pid string, sid string, time string);

 from every (e1=publisher) -> e2=subscriber[e1.pid == pid]
 within 10 sec
 select e1.pid, e2.sid
 insert into seen_in_last_10_sec_events;


 so if I have a matching event above, I will see it in
 seen_in_last_10_sec_e

Re: [Dev] [Siddhi] Partition with two attributes of same stream

2017-09-28 Thread Gobinath
Thank you very much.
Your solution works.


Thanks & Regards,
Gobinath

On Thu, Sep 28, 2017 at 3:09 PM, Sriskandarajah Suhothayan 
wrote:

> Apparently, it does not support multiple attributes as partition keys,
> but you can use
> partition with ( str:concat(srcIp,'-', dstIp) of PacketStream ) ...
>
> or use a preceding query to concatenate them and send them as one attribute.
>
>
>
> On Thu, Sep 28, 2017 at 10:41 PM, Gobinath  wrote:
>
>> Hi,
>>
>> During my recent testing, I found that Siddhi does not allow partitioning
>> with two attributes of the same stream. For example, the following query
>> throws *SiddhiAppValidationException* with the message "partition already
>> exists", because the streamId is used to uniquely identify the partition [1].
>>
>> define stream PacketStream (srcIp string, dstIp string, packets int);
>>
>> partition with (srcIp of PacketStream, dstIp of PacketStream)
>> begin
>>   from PacketStream
>>   select srcIp, dstIp, count(packets) as count
>>   insert into OutputStream;
>> end;
>>
>> I wonder whether it is not supported due to any constraints. If there is
>> nothing like that, I can have a look at it.
>>
>> FYI: I tried to change the partition id to a combination of the stream id
>> and the attribute name, but it does not register a PartitionReceiver for the
>> latter one.
>>
>> [1] https://github.com/slgobinath/siddhi/blob/master/modules/siddhi-query-api/src/main/java/org/wso2/siddhi/query/api/execution/partition/Partition.java#L101
>>
>> Thanks & Regards,
>> Gobinath
>>
>> --
>> *Gobinath** Loganathan*
>> Graduate Student,
>> Electrical and Computer Engineering,
>> Western University.
>> Email  : slgobin...@gmail.com
>> Blog: javahelps.com 
>>
>>
>
>
>
> --
>
> *S. Suhothayan*
> Associate Director / Architect
> *WSO2 Inc. *http://wso2.com
> lean . enterprise . middleware
>
>
> *cell: (+94) 779 756 757 <077%20975%206757> | blog:
> http://suhothayan.blogspot.com/ twitter:
> http://twitter.com/suhothayan  | linked-in:
> http://lk.linkedin.com/in/suhothayan *
>



-- 
*Gobinath** Loganathan*
Graduate Student,
Electrical and Computer Engineering,
Western University.
Email  : slgobin...@gmail.com
Blog: javahelps.com 
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Siddhi] Partition with two attributes of same stream

2017-09-28 Thread Sriskandarajah Suhothayan
Apparently, it does not support multiple attributes as partition keys,
but you can use
partition with ( str:concat(srcIp,'-', dstIp) of PacketStream ) ...

or use a preceding query to concatenate them and send them as one attribute.
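A complete sketch of this workaround for the PacketStream example (untested; assumes the str:concat extension is available in your Siddhi version):

```sql
define stream PacketStream (srcIp string, dstIp string, packets int);

-- Partition on a single derived key instead of two separate attributes.
partition with ( str:concat(srcIp, '-', dstIp) of PacketStream )
begin
    from PacketStream
    select srcIp, dstIp, count(packets) as count
    insert into OutputStream;
end;
```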



On Thu, Sep 28, 2017 at 10:41 PM, Gobinath  wrote:

> Hi,
>
> During my recent testing, I found that Siddhi does not allow partitioning
> with two attributes of the same stream. For example, the following query
> throws *SiddhiAppValidationException* with the message "partition already
> exists", because the streamId is used to uniquely identify the partition [1].
>
> define stream PacketStream (srcIp string, dstIp string, packets int);
>
> partition with (srcIp of PacketStream, dstIp of PacketStream)
> begin
>   from PacketStream
>   select srcIp, dstIp, count(packets) as count
>   insert into OutputStream;
> end;
>
> I wonder whether it is not supported due to any constraints. If there is
> nothing like that, I can have a look at it.
>
> FYI: I tried to change the partition id to a combination of the stream id
> and the attribute name, but it does not register a PartitionReceiver for the
> latter one.
>
> [1] https://github.com/slgobinath/siddhi/blob/master/modules/siddhi-query-api/src/main/java/org/wso2/siddhi/query/api/execution/partition/Partition.java#L101
>
> Thanks & Regards,
> Gobinath
>
> --
> *Gobinath** Loganathan*
> Graduate Student,
> Electrical and Computer Engineering,
> Western University.
> Email  : slgobin...@gmail.com
> Blog: javahelps.com 
>
>



-- 

*S. Suhothayan*
Associate Director / Architect
*WSO2 Inc. *http://wso2.com
lean . enterprise . middleware


*cell: (+94) 779 756 757 <077%20975%206757> | blog:
http://suhothayan.blogspot.com/ twitter:
http://twitter.com/suhothayan  | linked-in:
http://lk.linkedin.com/in/suhothayan *


Re: [Dev] Error in wso2 apim 2.1.0 kubernetes deployment in google container engine

2017-09-28 Thread Dileepa Jayakody
Hi Pubudu et al,

The root cause was that the volumes were not getting created in the given
hostPaths at /tmp/data/ at deployment time.

As suggested by Pubudu, we fixed the issue by setting the
StorageClass annotation:
storageclass.beta.kubernetes.io/is-default-class: *false*

Thanks Pubudu and all for the help.

Regards,
Dileepa

On Thu, Sep 28, 2017 at 12:50 PM, Pubudu Gunatilaka 
wrote:

> Hi Ayesh,
>
> This error comes when you don't have the synapse artifacts in the
> deployment/server location. As you are using host paths for mounting, make
> sure the host mount paths are empty or use the server folder files. There is
> a chance that mounts get exchanged, as it dynamically picks the mounts when
> you redeploy the deployment.
>
> One solution is to delete the content in the host mounts when you are
> redeploying the entire deployment. As we are using host paths, we need to
> restrict the deployment (ex: wso2apim-manager-worker) to a known host.
> Otherwise, when the pod respins for some reason, it would lose the data it
> already had when it spins up on another host node. These limitations have
> been tackled in NFS-based mounting [2].
>
> [2] - https://github.com/wso2/kubernetes-apim/releases/tag/v2.1.0-2
>
> Thank you!
>
> On Thu, Sep 28, 2017 at 12:19 PM, Ayeshmantha Perera <
> akayeshman...@gmail.com> wrote:
>
>> Hi all,
>>
>> As mentioned in the kubernetes artifacts deployment documentation [1], we
>> have built the base image and deployed the k8s artifacts with pattern 1.
>> Although the artifacts got deployed without an error at deployment time,
>> and the analytics dashboard is accessible via
>> https://wso2apim-analytics/carbon, the API manager is not working as
>> expected.
>> When going through the logs in the wso2-apim-worker and
>> wso2-api-manager-worker pods, we can see the below error:
>>
>> [2017-09-28 06:21:53,224] FATAL - ServiceBusInitializer Couldn't initialize the ESB...
>> org.apache.synapse.SynapseException: The synapse.xml location ././repository/deployment/server/synapse-configs/default doesn't exist
>> at org.apache.synapse.SynapseControllerFactory.handleFatal(SynapseControllerFactory.java:121)
>> at org.apache.synapse.SynapseControllerFactory.validatePath(SynapseControllerFactory.java:113)
>> at org.apache.synapse.SynapseControllerFactory.validate(SynapseControllerFactory.java:88)
>> at org.apache.synapse.SynapseControllerFactory.createSynapseController(SynapseControllerFactory.java:44)
>> at org.apache.synapse.ServerManager.init(ServerManager.java:103)
>> at org.wso2.carbon.mediation.initializer.ServiceBusInitializer.initESB(ServiceBusInitializer.java:451)
>> at org.wso2.carbon.mediation.initializer.ServiceBusInitializer.activate(ServiceBusInitializer.java:196)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
>> at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
>> at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
>> at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
>> at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
>> at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
>>
>> We also ran the apim-kubernetes:2.1.0 docker image in the deployed VM
>> instance, and it starts up without the above error, so we think this is
>> something related to the kubernetes artifacts.
>>
>> Appreciate if someone can help us to resolve this deployment issue.
>>
>> Regards,
>> Ayeshmantha
>>
>> [1] https://github.com/wso2/kubernetes-apim/tree/v2.1.0-1
>>
>
>
>
> --
> *Pubudu Gunatilaka*
> Committer and PMC Member - Apache Stratos
> Senior Software Engineer
> WSO2, Inc.: http://wso2.com
> mobile : +94774078049 <%2B94772207163>
>
>


[Dev] [Siddhi] Partition with two attributes of same stream

2017-09-28 Thread Gobinath
Hi,

During my recent testing, I found that Siddhi does not allow partitioning
with two attributes of the same stream. For example, the following query
throws *SiddhiAppValidationException* with the message "partition already
exists", because the streamId is used to uniquely identify the partition [1].

define stream PacketStream (srcIp string, dstIp string, packets int);

partition with (srcIp of PacketStream, dstIp of PacketStream)
begin
  from PacketStream
  select srcIp, dstIp, count(packets) as count
  insert into OutputStream;
end;

I wonder whether it is not supported due to any constraints. If there is
nothing like that, I can have a look at it.

FYI: I tried to change the partition id to a combination of the stream id and
the attribute name, but it does not register a PartitionReceiver for the
latter one.

[1]
https://github.com/slgobinath/siddhi/blob/master/modules/siddhi-query-api/src/main/java/org/wso2/siddhi/query/api/execution/partition/Partition.java#L101

Thanks & Regards,
Gobinath

-- 
*Gobinath** Loganathan*
Graduate Student,
Electrical and Computer Engineering,
Western University.
Email  : slgobin...@gmail.com
Blog: javahelps.com 


Re: [Dev] [Puppet-IS] Recommended Registry Mounting configuration in puppet modules.

2017-09-28 Thread Chandana Napagoda
Hi,

+1. I also believe this is a clean and simple approach for registry
mounting. Having different DBs (schemas) for the governance and config
mounts will complicate the deployment. In addition, by having different
target paths, we can separate out the different product-related config
registry data.

Further, we have recently updated the Kubernetes-related configuration with
the suggested approach.

Regards,
Chandana

On Thu, Sep 28, 2017 at 2:36 PM, Chankami Maddumage 
wrote:

> Hi All,
>
> We are currently working on puppet scripts for IS 5.4.0.
>
> According to our documentation, we recommend Registry Mounting using one
> DB and multiple Registry spaces as below:
>
> <dbConfig name="sharedregistry">
>     <dataSource>jdbc/WSO2RegistryDB</dataSource>
> </dbConfig>
>
> <remoteInstance url="https://localhost:9443/registry">
>     <id>instanceid</id>
>     <dbConfig>sharedregistry</dbConfig>
>     <readOnly>false</readOnly>
>     <enableCache>true</enableCache>
>     <registryRoot>/</registryRoot>
>     <cacheId>regadmin@jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_DB?autoReconnect=true</cacheId>
> </remoteInstance>
>
> <mount path="/_system/config" overwrite="true">
>     <instanceId>instanceid</instanceId>
>     <targetPath>/_system/config</targetPath>
> </mount>
>
> <mount path="/_system/governance" overwrite="true">
>     <instanceId>instanceid</instanceId>
>     <targetPath>/_system/governance</targetPath>
> </mount>
>
> But in the puppet modules we add 2 separate databases for the config and
> governance registries, and point the config and governance registry mounts
> to those 2 separate databases.
>
> @Chandana,
> Can you please confirm the recommended approach for registry mounting,
> so that we can use it for puppet going forward.
>
>
>
> Best Regards,
>
>
> *Chankami Maddumage*
> Software Engineer - QA Team
> WSO2 Inc; http://www.wso2.com/.
> Mobile: +94 (0) 73096 <%2B94%20%280%29%20773%20381%20250>
>
>


-- 
*Chandana Napagoda*
Associate Technical Lead
WSO2 Inc. - http://wso2.org

*Email  :  chand...@wso2.com **Mobile : +94718169299*

*Blog  :http://blog.napagoda.com  |
http://chandana.napagoda.com *

*Linkedin : http://www.linkedin.com/in/chandananapagoda
*


[Dev] [Puppet-IS] Recommended Registry Mounting configuration in puppet modules.

2017-09-28 Thread Chankami Maddumage
Hi All,

We are currently working on puppet scripts for IS 5.4.0.

According to our documentation, we recommend Registry Mounting using one
DB and multiple Registry spaces as below:

<dbConfig name="sharedregistry">
    <dataSource>jdbc/WSO2RegistryDB</dataSource>
</dbConfig>

<remoteInstance url="https://localhost:9443/registry">
    <id>instanceid</id>
    <dbConfig>sharedregistry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>regadmin@jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_DB?autoReconnect=true</cacheId>
</remoteInstance>

<mount path="/_system/config" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/config</targetPath>
</mount>

<mount path="/_system/governance" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>

But in the puppet modules we add 2 separate databases for the config and
governance registries, and point the config and governance registry mounts
to those 2 separate databases.

@Chandana,
Can you please confirm the recommended approach for registry mounting,
so that we can use it for puppet going forward.



Best Regards,


*Chankami Maddumage*
Software Engineer - QA Team
WSO2 Inc; http://www.wso2.com/.
Mobile: +94 (0) 73096 <%2B94%20%280%29%20773%20381%20250>


Re: [Dev] Error in wso2 apim 2.1.0 kubernetes deployment in google container engine

2017-09-28 Thread Pubudu Gunatilaka
Hi Ayesh,

This error comes when you don't have the synapse artifacts in the
deployment/server location. As you are using host paths for mounting, make
sure the host mount paths are empty or use the server folder files. There is
a chance that mounts get exchanged, as it dynamically picks the mounts when
you redeploy the deployment.

One solution is to delete the content in the host mounts when you are
redeploying the entire deployment. As we are using host paths, we need to
restrict the deployment (ex: wso2apim-manager-worker) to a known host.
Otherwise, when the pod respins for some reason, it would lose the data it
already had when it spins up on another host node. These limitations have
been tackled in NFS-based mounting [2].

[2] - https://github.com/wso2/kubernetes-apim/releases/tag/v2.1.0-2

Thank you!

On Thu, Sep 28, 2017 at 12:19 PM, Ayeshmantha Perera <
akayeshman...@gmail.com> wrote:

> Hi all,
>
> As mentioned in the kubernetes artifacts deployment documentation [1], we
> have built the base image and deployed the k8s artifacts with pattern 1.
> Although the artifacts got deployed without an error at deployment time,
> and the analytics dashboard is accessible via
> https://wso2apim-analytics/carbon, the API manager is not working as
> expected.
> When going through the logs in the wso2-apim-worker and
> wso2-api-manager-worker pods, we can see the below error:
>
> [2017-09-28 06:21:53,224] FATAL - ServiceBusInitializer Couldn't initialize the ESB...
> org.apache.synapse.SynapseException: The synapse.xml location ././repository/deployment/server/synapse-configs/default doesn't exist
> at org.apache.synapse.SynapseControllerFactory.handleFatal(SynapseControllerFactory.java:121)
> at org.apache.synapse.SynapseControllerFactory.validatePath(SynapseControllerFactory.java:113)
> at org.apache.synapse.SynapseControllerFactory.validate(SynapseControllerFactory.java:88)
> at org.apache.synapse.SynapseControllerFactory.createSynapseController(SynapseControllerFactory.java:44)
> at org.apache.synapse.ServerManager.init(ServerManager.java:103)
> at org.wso2.carbon.mediation.initializer.ServiceBusInitializer.initESB(ServiceBusInitializer.java:451)
> at org.wso2.carbon.mediation.initializer.ServiceBusInitializer.activate(ServiceBusInitializer.java:196)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
> at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
> at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
> at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
> at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
> at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
>
> We also ran the apim-kubernetes:2.1.0 docker image in the deployed VM
> instance, and it starts up without the above error, so we think this is
> something related to the kubernetes artifacts.
>
> Appreciate if someone can help us to resolve this deployment issue.
>
> Regards,
> Ayeshmantha
>
> [1] https://github.com/wso2/kubernetes-apim/tree/v2.1.0-1
>



-- 
*Pubudu Gunatilaka*
Committer and PMC Member - Apache Stratos
Senior Software Engineer
WSO2, Inc.: http://wso2.com
mobile : +94774078049 <%2B94772207163>


Re: [Dev] How to catch an error from the REST Task in the BPMN

2017-09-28 Thread Thomas LEGRAND
Hello,

Thank you for your answers, but what if my configured JSON path is not
applicable to the returned body in case of an error triggered on the web
service side? I mean, there are two cases:
- I request something and this something is returned by the web service.
The REST task uses the configured JSON path to set an attribute.
- I request something but this something does not exist, so the web service
sends me a 404 HTTP code with another body (like an empty one, or even a
JSON object modelling an error with a business error code and a message).
This body does not correspond at all to the configured JSON path, so the
exception is triggered:

Unknown Exception occurred
> com.jayway.jsonpath.PathNotFoundException: No results for path: 


The ErrorBoundaryEvent will not "catch" this error. Is there a way to
configure the ErrorBoundaryEvent to catch whatever error pops up, or
should I create an ErrorBoundaryEvent for each error? I don't know the
name of the error thrown when the JSON path is wrong.

Regards,

Thomas

2017-09-28 6:34 GMT+02:00 Sudharma Subasinghe :

> Hi Thomas,
>
> You can add an ErrorBoundaryEvent with the error code *"RestInvokeError"*.
> Please refer [1] as an example.
>
> [1] http://wso2.com/library/articles/2016/04/article-how-to-model-bpmn-business-processes-with-wso2-business-process-server/#error
>
> Thanks
> Sudharma
>
> On Wed, Sep 27, 2017 at 8:04 PM, Thomas LEGRAND <
> thomas.legr...@versusmind.eu> wrote:
>
>> Hello there,
>>
>> I would like to catch an error from the REST task in my process.
>> Actually, my remote web service returns a 404 with an empty body if no
>> result was found. If something was found, I map an element from the
>> returned JSON into a variable, and that works.
>>
>> In the case of my 404, I get an NPE, and I would like to catch it to be
>> able to continue the process, but I don't know how, because the
>> ErrorBoundaryEvent I attached does not work at all.
>>
>> Can you help me, please?
>>
>> Regards,
>>
>> Thomas
>>
>>
>>
>
>
> --
> Sudharma Subasinghe,
> Software Engineer,
> WSO2 Inc.
> Email: sudhar...@wso2.com 
> Mobile : +94 710 565 157 <%2B94%20718%20210%20200>
>