Thanks Eric. I can now see how all the pieces tie together (at least to the extent I could in 3 days). I have raised the bug regarding HICC binding to localhost.
A few more questions to wrap this up.

1. Nothing is written to the Hadoop table in HBase. Is there something I am missing? I understand that ClusterSummary is populated ONLY after the Pig job has run successfully, so not seeing anything in the ClusterSummary table is expected.

2. If I have only HBaseWriter for the collector, and I have configured an agent to monitor a file, what will be written to HBase? Is it possible to use HBase for custom data types, e.g. monitoring a custom log file (like /var/log/messages), and to provide visualization for such monitoring?

3. What is the difference between what is written to HBase and what is written to HDFS? I was wondering if I could have just a single write to HBase (which in turn can be configured to use HDFS as its persistent store) and omit the SeqFileWriter.

4. I observed strange behavior in the agents. I modified the initial_adaptors file to have only one adaptor monitoring a file, but when I restarted the agent process and ran "list" (using telnet localhost 9093), I could still see all the older adaptors running. Could this be a bug?

Thanks and regards,
DKN

--
View this message in context: http://apache-chukwa.679492.n3.nabble.com/A-demo-setup-on-a-single-linux-server-tp3001627p3016124.html
Sent from the Chukwa - Users mailing list archive at Nabble.com.
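(As an aside, the adaptor check described in question 4 — telnet to the agent's control port and issue "list" — can be scripted instead of done interactively. Below is a minimal sketch; the helper name `list_adaptors` is made up for illustration, and port 9093 is the agent control port mentioned above. It simply speaks the same line-based protocol as the telnet session and returns None if no agent is listening.)

```python
import socket

def list_adaptors(host="localhost", port=9093, timeout=2.0):
    """Send "list" to a Chukwa agent's telnet control port and return the
    raw response text, or None if nothing is listening on that port.

    Port 9093 is the agent control port used in the telnet check above;
    the function name is a hypothetical helper, not a Chukwa API.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"list\n")
            sock.settimeout(timeout)
            chunks = []
            try:
                # Read until the agent closes the connection or we time out.
                while True:
                    data = sock.recv(4096)
                    if not data:
                        break
                    chunks.append(data)
            except socket.timeout:
                pass
            return b"".join(chunks).decode("utf-8", "replace")
    except OSError:
        # Connection refused / unreachable: no agent running on that port.
        return None

if __name__ == "__main__":
    response = list_adaptors()
    print(response if response is not None else "no agent listening on 9093")
```

Running this after restarting the agent would show whether the old adaptors really survived the restart, without an interactive telnet session.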
