Hi,
On Fri, Jun 10, 2016 at 10:33 PM, Hasitha Hiranya wrote:
> Hi,
>
> When the Spark script is running, we want to know whether it is executing
> fine. Ops might be interested in monitoring this. If execution stops or
> breaks, they might need to fire alerts.
>
Spark analytics tasks are executed by
Hi Gihan,
I also noticed that we need to pass a tenant ID to query these values. That
means a user who has access can get any detail regarding any tenant. In the
real world, what kind of user is supposed to access these MBeans?
Thanks
On Fri, Jun 10, 2016 at 10:03 AM, Hasitha Hiranya wrote:
> Hi,
>
>
Hi,
When the Spark script is running, we want to know whether it is executing
fine. Ops might be interested in monitoring this. If execution stops or
breaks, they might need to fire alerts.
Can we do that with these MBeans?
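As a rough sketch of what such an ops check could look like: the snippet below polls an MBean attribute and raises an alert when the check fails. It uses the in-process platform MBeanServer and a standard JVM MBean purely for illustration; the actual DAS/Spark MBean names (and a remote JMX connection) would differ.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanPollDemo {
    public static void main(String[] args) throws Exception {
        // In-process platform MBean server; a real monitor would connect
        // remotely via JMXConnectorFactory/JMXServiceURL instead.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Standard JVM MBean used here for illustration only; the real
        // Spark-status MBean exposed by DAS would have a different name.
        ObjectName os = new ObjectName("java.lang:type=OperatingSystem");
        int processors = (Integer) server.getAttribute(os, "AvailableProcessors");

        // An alerting script would compare the polled value against a
        // threshold and fire an alert when the check fails.
        if (processors < 1) {
            System.out.println("ALERT: unexpected value");
        } else {
            System.out.println("OK: AvailableProcessors=" + processors);
        }
    }
}
```

A cron-style monitor could run such a check periodically and page ops when the status attribute indicates the script has stopped.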
Thanks
On Thu, Jun 9, 2016 at 9:38 PM, Nirmal Fernando wrote:
> Hi All,
Hi All,
Isn't it confusing to maintain two analytics repos that both contain common
analytics stuff?
And even if there is a valid reason to do this, the repo names still seem
confusing, since both (shared-analytics and carbon-analytics-common) sound
the same. Isn't that so?
@dunith, +
Hi Omindu,
On Thu, Jun 9, 2016 at 1:29 AM, Omindu Rathnaweera wrote:
> Hi Fara,
>
> A few questions.
>
> i. How is this going to work in user registration ? Will there be an API
> to retrieve a question set, given the locale?
>
Yes, we are going to provide an API for retrieving challenge questions
On Fri, Jun 10, 2016 at 1:25 PM, Malith Jayasinghe wrote:
> Ok sounds good.
>
> On Fri, Jun 10, 2016 at 12:12 PM, Ashen Weerathunga
> wrote:
>
>> Hi Malith,
>>
>> Yes, I agree with your point. We can input set of percentiles at once.
>> But if we want to do that this cannot be implemented as agg
Hi All,
@Damith
These are common libs for the portal wizard, which is common to the
analytics products (analytics-is, analytics-esb, analytics-apim, etc.),
product-das, and product-cep. The shared-analytics repository was created
for the analytics products; product-das and product-cep do not have
dependencies on it.
We have done a performance analysis of APIM by running it on a single node
(note: a distributed setup will be considered later). The following are
some key observations.
The deployment details and some discussion of the analysis results can be
found in [1].
1) The probability density function of lat
Hi Dunith,
+1. Shouldn't these go into the shared-analytics repository, since we are
using it to hold all common artifacts used in the analytics effort?
Regards,
Damith.
On Fri, Jun 10, 2016 at 4:55 PM, Dunith Dhanushka wrote:
> Hi all,
>
> Gadgets developed for analytics products (E.g ESB, IS,MB, IoTs
+1 to replacing analytics-wso2_1.0 in the analytics common repo.
We will maintain the required files from analytics-wso2_1.0 inside
carbon-dashboards, just to allow anybody to try out the gadget generation
feature with the line-chart sample.
Thanks,
Tanya
On Fri, Jun 10, 2016 at 4:55 PM, Dunith Dhan
*Correction*
If RXTs have the support, we can go for #2; otherwise #1 is a good option.
On Fri, Jun 10, 2016 at 4:49 PM, Rushmin Fernando wrote:
> App Manager team had a quick chat on this and came up with the following
> points.
>
> It is better if we can have the first class concepts like app-name
Hi all,
Gadgets developed for analytics products (e.g. ESB, IS, MB, IoT, etc.)
depend on JS libraries which are currently referenced from multiple
locations. For instance:
1. JS utilities common to all gadgets, like wso2gadgets.js and
chart-utils.js (currently referenced from /portal/libs/common-cha
App Manager team had a quick chat on this and came up with the following
points.
It is better if we can have first-class concepts like app-name,
context, version, etc. remain fixed.
But as Sagara mentioned, we should make the custom attribute fields
dynamic in the DTO. For that, we can us
Hi Nayantara,
On Wed, Jun 8, 2016 at 6:31 AM, Nayantara Jeyaraj
wrote:
>
>
Is this a prototype or the actual implementation? A few comments:
- It would be better to make the two pink and blue boxes in the toolbox
more meaningful, rather than leaving them blank.
- Is there a reason to
Hi Gihan/Inosh,
A sample statement for creating a temporary table using this provider would
look like the following:
CREATE TEMPORARY TABLE StateUsage using CarbonJDBC options (dataSource
"MY_DATASOURCE", tableName "state_usage", schema "us_state STRING -i,
polarity INTEGER, usage_avg FLOAT", prim
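Based on the (truncated) statement above, a complete registration plus a follow-up query might look like the sketch below. This is illustrative only: the datasource name, table name, and column names are taken from the example, and the SELECT is an assumption about typical usage, not a statement from the original mail.

```sql
-- Illustrative sketch: datasource and schema follow the sample above.
CREATE TEMPORARY TABLE StateUsage
USING CarbonJDBC
OPTIONS (
  dataSource "MY_DATASOURCE",
  tableName  "state_usage",
  schema     "us_state STRING -i, polarity INTEGER, usage_avg FLOAT"
);

-- Once registered, the table can be queried with ordinary Spark SQL:
SELECT us_state, AVG(usage_avg) AS avg_usage
FROM StateUsage
GROUP BY us_state;
```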
Hi all,
This is to give a heads-up on the enhancement the IoT-Analytics team
is making to Template Manager (we've renamed the Execution Manager to
Template Manager, to better reflect its capabilities).
Use case:
When it comes to Analytics for IoT, it is very common that users will want
Hi Gokul,
On Fri, Jun 10, 2016 at 2:08 PM, Gokul Balakrishnan wrote:
> Hi all,
>
> In DAS 3.0.x, for interacting with relational databases directly from
> Spark (i.e. bypassing the data access layer), we have hitherto been using
> the JDBC connector that comes directly with Apache Spark (with a
Hi Gokul,
Can you please share a couple of sample Spark SQL queries that use
this updated CarbonJDBC connector?
Regards,
Gihan
On Fri, Jun 10, 2016 at 2:08 PM, Gokul Balakrishnan wrote:
> Hi all,
>
> In DAS 3.0.x, for interacting with relational databases directly from
> Spark (i.e. bypassing
Hi all,
In DAS 3.0.x, for interacting with relational databases directly from Spark
(i.e. bypassing the data access layer), we have hitherto been using the
JDBC connector that comes directly with Apache Spark (with added support
for Carbon datasources).
This connector has contained many issues th
Ok sounds good.
On Fri, Jun 10, 2016 at 12:12 PM, Ashen Weerathunga wrote:
> Hi Malith,
>
> Yes, I agree with your point. We can input a set of percentiles at once.
> But if we want to do that, this cannot be implemented as an aggregate
> function extension, since there will be multiple outputs as well
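To illustrate the "multiple outputs" point being discussed: a single pass over a sorted sample can yield a whole set of percentiles at once, which is exactly what does not fit a single-output aggregate function. The sketch below is a plain nearest-rank computation, not the Siddhi extension API; the method and class names are made up for illustration.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

public class PercentileSketch {
    // Nearest-rank percentiles: one sort, many outputs.
    static Map<Double, Double> percentiles(double[] values, double[] pcts) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        Map<Double, Double> out = new LinkedHashMap<>();
        for (double p : pcts) {
            // Nearest-rank: smallest value with at least p% of the
            // sample at or below it.
            int rank = (int) Math.ceil(p / 100.0 * sorted.length);
            out.put(p, sorted[Math.max(rank - 1, 0)]);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] latencies = {12, 7, 3, 45, 21, 9, 30, 18, 5, 27};
        // One call returns the 50th, 90th, and 99th percentiles together.
        System.out.println(percentiles(latencies, new double[]{50, 90, 99}));
    }
}
```

Because the result is a map of percentile-to-value rather than a single scalar, it would have to be exposed as something other than a standard aggregate function extension.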