[ 
https://issues.apache.org/jira/browse/EAGLE-438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15414313#comment-15414313
 ] 

Peter Kim commented on EAGLE-438:
---------------------------------

Yes, in the config.json file that the script uses, the inputs will be an array 
of components, like so:

"inputs": [
      {
        "component": "namenode",
        "host": "127.0.0.1",
        "port": "50070",
        "https": false
      },
      {
        "component": "resourcemanager",
        "host": "127.0.0.1",
        "port": "8088",
        "https": false
      },
      {
        "component": "datanode",
        "host": "127.0.0.1",
        "port": "50075",
        "https": false
      }
   ]
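
To give a rough idea of what the script would do with that array (this is only 
an illustrative sketch, not the real collector code -- the config path and 
function name are placeholders), it would loop over the inputs and read each 
component's /jmx servlet, which Hadoop daemons expose as JSON:

import json
from urllib.request import urlopen

def collect_all(config_path="config.json"):
    with open(config_path) as f:
        config = json.load(f)

    for inp in config["inputs"]:
        scheme = "https" if inp.get("https") else "http"
        url = "%s://%s:%s/jmx" % (scheme, inp["host"], inp["port"])
        # Each Hadoop daemon serves its JMX beans as JSON under /jmx
        beans = json.loads(urlopen(url).read().decode("utf-8"))["beans"]
        print("%s: fetched %d beans from %s" % (inp["component"], len(beans), url))

if __name__ == "__main__":
    collect_all()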

For your other question, I could also make it so that different components 
send their metrics to different kafka topics, by turning the kafka output 
area of the json file into an array as well.
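
For example, the output area could become something along these lines (the 
field names here are only a rough idea, not a final schema):

"outputs": [
      {
        "component": "namenode",
        "kafka": {
          "topic": "namenode_jmx_metrics",
          "broker_list": ["localhost:9092"]
        }
      },
      {
        "component": "resourcemanager",
        "kafka": {
          "topic": "resourcemanager_jmx_metrics",
          "broker_list": ["localhost:9092"]
        }
      }
   ]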

> Multiple Inputs for Hadoop JMX Collector Python Script
> ------------------------------------------------------
>
>                 Key: EAGLE-438
>                 URL: https://issues.apache.org/jira/browse/EAGLE-438
>             Project: Eagle
>          Issue Type: New Feature
>            Reporter: Peter Kim
>            Priority: Trivial
>              Labels: features
>
> It would be very useful for the Hadoop JMX collector to be able to collect 
> JMX metrics for multiple components at once. Here, I wish to extend the 
> default Python script that collects Hadoop JMX metrics to support multiple 
> inputs as opposed to only one. With this, one could collect all JMX metrics 
> for multiple components such as namenode, datanode, resourcemanager, 
> hmaster, etc. all at once. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)