[ https://issues.apache.org/jira/browse/EAGLE-635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hao Chen updated EAGLE-635:
---------------------------
    Description: 
h2. Changes

* Refactor policy parser and validator for richer plan details and better 
performance
* Decouple PolicyExecutionPlan and PolicyValidation

h2. API
* Parse API

{code}
POST /metadata/policies/parse 
Accept-Type: text

from HDFS_AUDIT_LOG_ENRICHED_STREAM_SANDBOX#window.timeBatch(2 min) select cmd, user, count() as total_count group by cmd,user insert into HDFS_AUDIT_LOG_ENRICHED_STREAM_SANDBOX_OUT
{code}
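
For illustration only, a minimal Java client sketch that posts the raw Siddhi query above to the parse endpoint. The base URL (http://localhost:9090), the plain-text content type, and the class name are assumptions about the deployment, not part of this change:

{code}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PolicyParseClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical base URL; adjust host/port to your Eagle server deployment.
        String endpoint = "http://localhost:9090/metadata/policies/parse";

        // The raw Siddhi query from the example above is sent as the request body.
        String siddhiQuery =
            "from HDFS_AUDIT_LOG_ENRICHED_STREAM_SANDBOX#window.timeBatch(2 min) "
          + "select cmd, user, count() as total_count group by cmd,user "
          + "insert into HDFS_AUDIT_LOG_ENRICHED_STREAM_SANDBOX_OUT";

        HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
            // Content type assumed to be plain text; the issue lists "Accept-Type: text".
            .header("Content-Type", "text/plain")
            .POST(HttpRequest.BodyPublishers.ofString(siddhiQuery))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        // The response body is expected to carry the parsed execution plan details.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
{code}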

* Validation API
{code}
POST /metadata/policies/validate 
Accept-Type: application/json

{
   "name": "hdfsPolicy",
   "description": "hdfsPolicy",
   "inputStreams": [
      "HDFS_AUDIT_LOG_ENRICHED_STREAM_SANDBOX"
   ],
   "outputStreams": [
      "HDFS_AUDIT_LOG_ENRICHED_STREAM_SANDBOX_OUT"
   ],
   "definition": {
      "type": "siddhi",
      "value": "from HDFS_AUDIT_LOG_ENRICHED_STREAM_SANDBOX#window.timeBatch(2 
min) select cmd, user, count() as total_count group by cmd,user insert into 
HDFS_AUDIT_LOG_ENRICHED_STREAM_SANDBOX_OUT "
   },
   "partitionSpec": [
      {
         "streamId": "hdfs_audit_log_enriched_stream",
         "type": "GROUPBY",
         "columns" : [
            "cmd"
         ]
      }
   ],
   "parallelismHint": 2
}

{code}
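
Similarly, a hedged Java sketch of submitting the policy JSON above to the validate endpoint. The base URL, the local file path, the content type header, and the class name are again assumptions:

{code}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class PolicyValidateClient {
    public static void main(String[] args) throws Exception {
        // Load the policy JSON shown above from a local file (hypothetical path).
        String policyJson = Files.readString(Path.of("hdfsPolicy.json"));

        // Hypothetical base URL; adjust host/port to your Eagle server deployment.
        HttpRequest request = HttpRequest
            .newBuilder(URI.create("http://localhost:9090/metadata/policies/validate"))
            // Content type assumed to be JSON; the issue lists "Accept-Type: application/json".
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(policyJson))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        // The response is expected to report the end-to-end validation result for the policy metadata.
        System.out.println(response.statusCode() + " " + response.body());
    }
}
{code}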

h2. Use Cases
* *parse*: the UI can call the `parse` API continuously while the user is typing, to verify the syntax and automatically derive the input/output streams and partition spec; since it never touches the back-end database, it is very fast.
* *validate*: once the user has finished defining the policy, the `validate` API validates the policy metadata end to end.


> Refactor policy parser and validator for richer plan details and better 
> performance
> -----------------------------------------------------------------------------------
>
>                 Key: EAGLE-635
>                 URL: https://issues.apache.org/jira/browse/EAGLE-635
>             Project: Eagle
>          Issue Type: Improvement
>    Affects Versions: v0.5.0
>            Reporter: Hao Chen
>            Assignee: Hao Chen
>             Fix For: v0.5.0
>


