Hi Shavantha,

Still, we haven't made the above suggested changes to our implementation, so
neither the current release nor the SNAPSHOT api-analytics server has them.
In any case, the relevant product team has to come up with a proper load test
and see which configuration caters to their analytics requirements most
efficiently.

Regards,
Gihan

On Fri, May 27, 2016 at 11:25 AM, Shavantha Weerasinghe <shavan...@wso2.com>
wrote:

> Hi Inosh
>
> So, going forward, are we to use this approach for setting up the analytics
> server? We are currently working on the API Manager Analytics setup.
>
> regards,
>
> Shavantha Weerasinghe
> Senior Software Engineer QA
> WSO2, Inc.
> lean.enterprise.middleware.
> http://wso2.com
> http://wso2.org
> Tel : 94 11 214 5345
> Fax :94 11 2145300
>
>
> On Wed, May 25, 2016 at 8:10 PM, Inosh Goonewardena <in...@wso2.com>
> wrote:
>
>> Hi,
>>
>> At the moment DAS supports both MyISAM and InnoDB, but it is configured to
>> use MyISAM by default.
>>
>> There are several differences between MyISAM and InnoDB, but the one most
>> relevant to DAS is concurrency. Basically, MyISAM uses table-level locking
>> while InnoDB uses row-level locking. So, with MyISAM, if we run Spark
>> queries while publishing data to DAS at a higher TPS, it can lead to issues
>> because the DAL layer cannot obtain the table lock to insert data into the
>> table while Spark is reading from the same table.
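>>
>> Just to make that concrete: the engine is chosen per table in the DDL, so
>> the difference is only the ENGINE clause. A quick plain-JDBC sketch
>> (connection details and table/column names here are purely illustrative,
>> not the actual DAL schema):
>>
>> import java.sql.Connection;
>> import java.sql.DriverManager;
>> import java.sql.Statement;
>>
>> public class StorageEngineDdl {
>>
>>     public static void main(String[] args) throws Exception {
>>         try (Connection con = DriverManager.getConnection(
>>                 "jdbc:mysql://localhost:3306/EVENT_STORE", "dasuser", "daspass");
>>              Statement stmt = con.createStatement()) {
>>
>>             // MyISAM: table-level locking, so a long-running Spark read
>>             // blocks the DAL from inserting into the same table.
>>             stmt.executeUpdate("CREATE TABLE RAW_EVENTS_MYISAM ("
>>                     + "record_id VARCHAR(50), payload BLOB, event_ts BIGINT) "
>>                     + "ENGINE=MyISAM");
>>
>>             // InnoDB: row-level locking, so reads and inserts on the same
>>             // table can proceed concurrently.
>>             stmt.executeUpdate("CREATE TABLE RAW_EVENTS_INNODB ("
>>                     + "record_id VARCHAR(50), payload BLOB, event_ts BIGINT) "
>>                     + "ENGINE=InnoDB");
>>         }
>>     }
>> }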
>>
>> On the other hand, with InnoDB the write speed is considerably slower
>> (because it is designed to support transactions), so it will affect the
>> receiver performance.
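>>
>> A good part of that cost is simply that, with autocommit on (the JDBC
>> default), every insert becomes its own InnoDB transaction; grouping inserts
>> into a single commit amortises it. A rough illustration only (not how the
>> receiver actually writes, and the names are placeholders):
>>
>> import java.sql.Connection;
>> import java.sql.DriverManager;
>> import java.sql.PreparedStatement;
>>
>> public class InnodbBatchInsert {
>>
>>     public static void main(String[] args) throws Exception {
>>         try (Connection con = DriverManager.getConnection(
>>                 "jdbc:mysql://localhost:3306/EVENT_STORE", "dasuser", "daspass")) {
>>
>>             // With autocommit on, every insert pays the commit/flush cost
>>             // individually; turn it off and commit once per batch instead.
>>             con.setAutoCommit(false);
>>             try (PreparedStatement ps = con.prepareStatement(
>>                     "INSERT INTO RAW_EVENTS_INNODB (record_id, payload, event_ts) "
>>                             + "VALUES (?, ?, ?)")) {
>>                 for (int i = 0; i < 1000; i++) {
>>                     ps.setString(1, "rec-" + i);
>>                     ps.setBytes(2, new byte[0]);
>>                     ps.setLong(3, System.currentTimeMillis());
>>                     ps.addBatch();
>>                 }
>>                 ps.executeBatch();
>>             }
>>             // One commit for the whole batch instead of one per insert.
>>             con.commit();
>>         }
>>     }
>> }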
>>
>> One option we have in DAS is to use two DBs to keep incoming records and
>> processed records separately, i.e., EVENT_STORE and PROCESSED_DATA_STORE.
>>
>> For ESB Analytics, we can configure MyISAM for EVENT_STORE and InnoDB for
>> PROCESSED_DATA_STORE. This works because in ESB Analytics, summarization up
>> to the minute level is done by real-time analytics, and Spark queries read
>> and process data from the minutely (and higher) tables, which we can keep
>> in PROCESSED_DATA_STORE. Since the raw table (which the data receiver
>> writes to) is not used by Spark queries, the receiver performance will not
>> be affected.
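>>
>> So conceptually the split looks something like the sketch below
>> (EVENT_STORE and PROCESSED_DATA_STORE as above; the table and column names
>> are only placeholders, not the actual schema the toolkits create):
>>
>> import java.sql.Connection;
>> import java.sql.DriverManager;
>> import java.sql.Statement;
>>
>> public class SplitStoreLayout {
>>
>>     public static void main(String[] args) throws Exception {
>>         try (Connection con = DriverManager.getConnection(
>>                 "jdbc:mysql://localhost:3306/", "dasuser", "daspass");
>>              Statement stmt = con.createStatement()) {
>>
>>             stmt.executeUpdate("CREATE DATABASE IF NOT EXISTS EVENT_STORE");
>>             stmt.executeUpdate("CREATE DATABASE IF NOT EXISTS PROCESSED_DATA_STORE");
>>
>>             // Raw table the data receiver writes into: MyISAM for write
>>             // speed; Spark does not read it, so table-level locking is fine.
>>             stmt.executeUpdate("CREATE TABLE EVENT_STORE.RAW_ESB_EVENTS ("
>>                     + "record_id VARCHAR(50), payload BLOB, event_ts BIGINT) "
>>                     + "ENGINE=MyISAM");
>>
>>             // Minutely (and higher) summary tables that Spark reads and
>>             // writes: InnoDB so those reads and writes don't block each other.
>>             stmt.executeUpdate("CREATE TABLE PROCESSED_DATA_STORE.ESB_STATS_MINUTE ("
>>                     + "component VARCHAR(100), minute_ts BIGINT, request_count BIGINT) "
>>                     + "ENGINE=InnoDB");
>>         }
>>     }
>> }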
>>
>> However, in most cases Spark queries may be written to read data directly
>> from the raw tables. As mentioned above, with MyISAM this could lead to
>> performance issues if data publishing and Spark analytics happen in
>> parallel. Considering that, I think we should change the default
>> configuration to use InnoDB. WDYT?
>>
>> --
>> Thanks & Regards,
>>
>> Inosh Goonewardena
>> Associate Technical Lead- WSO2 Inc.
>> Mobile: +94779966317
>>
>> _______________________________________________
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
> _______________________________________________
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
W.G. Gihan Anuruddha
Senior Software Engineer | WSO2, Inc.
M: +94772272595
_______________________________________________
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
