Quick update: I was able to connect with the Phoenix 4.2.2 client, and I did
get results querying with:
SELECT * from METRIC_RECORD WHERE METRIC_NAME =
'AMBARI_METRICS.SmokeTest.FakeMetric' order by SERVER_TIME desc limit 10;
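
(For anyone else trying this: connecting is just a matter of pointing Phoenix's
sqlline client at the collector's embedded ZooKeeper, i.e. the
jdbc:phoenix:localhost:61181:/hbase URL that shows up in the collector log. The
client path below is only illustrative:)

# from an unpacked Phoenix 4.2.2 client tarball (adjust the path to your install)
phoenix-4.2.2/bin/sqlline.py localhost:61181:/hbase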

Now that I know the metrics are posting, I am less concerned about querying
through the REST API.

Is there any way to get a custom metric added to the main page of Ambari, or
does this require development work?

Thanks,

Bryan

On Sat, Jul 25, 2015 at 9:42 PM, Bryan Bende <[email protected]> wrote:

> Hi Sid,
>
> Thanks for the suggestions. I turned on DEBUG for the metrics collector
> (had to do this through the Ambari UI configs section) and now I can see
> some activity... When I post a metric I see:
>
> 01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
> TimelineWebServices:270 - Storing metrics: {
>   "metrics" : [ {
>     "timestamp" : 1432075898000,
>     "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",
>     "appid" : "amssmoketestfake",
>     "hostname" : "localhost",
>     "starttime" : 1432075898000,
>     "metrics" : {
>       "1432075898000" : 0.963781711428,
>       "1432075899000" : 1.432075898E12
>     }
>   } ]
> }
>
> 01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
> DefaultPhoenixDataSource:67 - Metric store connection url:
> jdbc:phoenix:localhost:61181:/hbase
>
> 01:30:18,376 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
> MutationState:361 - Sending 2 mutations for METRIC_RECORD with 8 key values
> of total size 925 bytes
>
> 01:30:18,380 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
> MutationState:436 - Total time for batch call of  2 mutations into
> METRIC_RECORD: 3 ms
>
> 01:30:18,381 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
> log:40 - RESPONSE /ws/v1/timeline/metrics  200
>
>
> So it looks like it posted successfully. Then I hit:
>
>
> http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric
>
> and I see...
>
> 01:31:16,952 DEBUG [95266635@qtp-171166092-2 -
> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
> ParallelIterators:412 - Guideposts: ]
>
> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
> ParallelIterators:481 - The parallelScans:
> [[{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":-1}]]
>
> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
> BaseQueryPlan:243 - Iterator ready: MergeSortTopNResultIterator [limit=1,
> count=0, orderByColumns=[METRIC_NAME DESC, SERVER_TIME DESC], ptr1=, ptr2=]
>
> 01:31:16,957 DEBUG [phoenix-1-thread-171] ParallelIterators:629 - Id:
> d0c9c381-f35f-48e6-b970-8b6d5997684b, Time: 3ms, Scan:
> {"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":10000}
>
> 01:31:16,959 DEBUG [95266635@qtp-171166092-2 -
> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
> PhoenixHBaseAccessor:552 - Aggregate records size: 0
>
> I'll see if I can get the Phoenix client working and check what that returns.
>
> Thanks,
>
> Bryan
>
> On Fri, Jul 24, 2015 at 5:44 PM, Siddharth Wagle <[email protected]>
> wrote:
>
>>  Hi Bryan,
>>
>>
>>  A few things you can do:
>>
>>
>>  1. Turn on DEBUG mode by changing log4j.properties at
>> /etc/ambari-metrics-collector/conf/.
>>
>> This might reveal more info. I don't think we print every metric received to
>> the log in 2.0 or 2.1; I recently added that to trunk, when TRACE is enabled.
>>
>>
>>  2. Connect using Phoenix directly and run a SELECT query like this:
>>
>> SELECT * from METRIC_RECORD WHERE METRIC_NAME = '<your-metric-name>'
>> order by SERVER_TIME desc limit 10;
>>
>>
>>  Instructions for connecting to Phoenix:
>>
>> https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema
>>
>>
>>  3. What API call are you making to get metrics?
>>
>> E.g.: http://<ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>
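>>
>> For instance, with the collector on localhost this works out to something
>> like the following (same placeholders as above; quote the URL so the shell
>> doesn't swallow the & characters):
>>
>> # GET the datapoints back; the quotes keep the shell from splitting on &
>> curl "http://localhost:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>"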
>>
>>
>>  -Sid
>>
>>
>>  ------------------------------
>> *From:* Bryan Bende <[email protected]>
>> *Sent:* Friday, July 24, 2015 2:03 PM
>> *To:* [email protected]
>> *Subject:* Posting Metrics to Ambari
>>
>>  I'm interested in sending metrics to Ambari and I've been looking at
>> the Metrics Collector REST API described here:
>>
>> https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
>>
>>  I figured the easiest way to test it would be to get the latest HDP
>> Sandbox, so I downloaded it and started it up. The Metrics Collector service
>> wasn't running, so I started it and also added port 6188 to the VM's port
>> forwarding. From there I used the example POST on the Wiki page, which
>> returned a 200 response. After that I tried the query, but could never get
>> any results back.
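>>
>>  The POST itself was essentially the wiki's smoke-test example; with the
>> port forwarded it boils down to something along these lines (hostname and
>> values as in that example):
>>
>> # -i prints the HTTP status line so the 200 is visible
>> curl -i -X POST -H "Content-Type: application/json" -d '{
>>   "metrics" : [ {
>>     "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",
>>     "appid" : "amssmoketestfake",
>>     "hostname" : "localhost",
>>     "timestamp" : 1432075898000,
>>     "starttime" : 1432075898000,
>>     "metrics" : {
>>       "1432075898000" : 0.963781711428,
>>       "1432075899000" : 1432075898000
>>     }
>>   } ]
>> }' http://localhost:6188/ws/v1/timeline/metrics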
>>
>>  I know this list is not specific to HDP, but I was wondering if anyone has
>> suggestions on what I can look at to figure out what is happening to the
>> data I am posting.
>>
>>  I was watching the metrics collector log while posting and querying and
>> didn't see any activity besides the periodic aggregation.
>>
>>  Any suggestions would be greatly appreciated.
>>
>>  Thanks,
>>
>>  Bryan
>>
>
>
