To start with, there are a bunch of things we are planning with tracing:
https://issues.apache.org/jira/browse/PHOENIX-1121

But to answer your questions,


> Can we use something like zipkin-htrace adapter for Phoenix traces? And if
> I did would the calls be coming from the RS?


Yes, we could roll in something like that as well, but there just isn't a
knob for it right now. You would need the same config on the server (all
the tracing config goes through the same interface though, so it shouldn't
be too hard). Right now, it's all being written to an HBase table from both
sides of the request, so you could pull that in later to populate zipkin as
well.

We could also add a span receiver to write to zipkin. I'd be more inclined
to write to zipkin from the phoenix table as that's more likely to be stable
storage. All the same information would be there, but I'd trust my HBase
tables :)

> * How do you get the trace id on a query you create?


If there is something you are looking to trace, you could actually create a
trace before creating your phoenix request, and pull the traceID out of
there (you could also add any annotations you wanted, like the app server's
request id). Phoenix will either continue the trace, if one is started, or
start a new one, if configured to do so.
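
For example, something like this (untested sketch; I'm assuming the HTrace
3.x API under org.apache.htrace -- older releases use org.cloudera.htrace --
and a local ZK quorum in the JDBC URL):

    import java.nio.charset.StandardCharsets;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.apache.htrace.Sampler;
    import org.apache.htrace.Trace;
    import org.apache.htrace.TraceScope;

    public class TracedQuery {
      public static void main(String[] args) throws Exception {
        // Start the trace *before* touching Phoenix so it can pick it up.
        TraceScope scope = Trace.startSpan("app-server-request", Sampler.ALWAYS);
        try {
          // With Sampler.ALWAYS the span is never null; grab the trace id so
          // you can hand it back to the client / log it for later correlation.
          long traceId = scope.getSpan().getTraceId();
          System.out.println("traceId=" + traceId);

          // Optional key-value annotation, e.g. your app server's request id.
          scope.getSpan().addKVAnnotation(
              "request.id".getBytes(StandardCharsets.UTF_8),
              "abc-123".getBytes(StandardCharsets.UTF_8));

          try (Connection conn =
                   DriverManager.getConnection("jdbc:phoenix:localhost");
               Statement stmt = conn.createStatement();
               ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM MY_TABLE")) {
            while (rs.next()) {
              System.out.println(rs.getLong(1));
            }
          }
        } finally {
          scope.close(); // ends your span; Phoenix's spans hang off this trace
        }
      }
    }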

Starting a new one is generally just for introspection into a running
system to see how things are doing. It wouldn't be tied to anything in
particular. There is some pending work in the above-mentioned JIRA for
adding tags (timeline annotations, in HTrace parlance) and annotations
(key-value annotations) to a phoenix request/connection, but you should be
able to do what you want just by starting the trace before making the
phoenix request. If phoenix is configured correctly, it should just work
with the rest of the phoenix trace sink infrastructure.
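
The client-side knob is the phoenix.trace.frequency property you're already
setting below; as a minimal sketch (again assuming a local quorum), that's
just:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class TracedConnection {
      public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // "always" samples every request; other settings dial it down or off.
        props.setProperty("phoenix.trace.frequency", "always");
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
          // statements run on this connection should now be traced
        }
      }
    }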

> * Do you have to load the DDL manually?


Nope, it's part of the PhoenixTableMetricsWriter, here
<https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/trace/PhoenixTableMetricsWriter.java#L142>.
When it receives a metric (really, just a conversion of a span to a Hadoop
metrics2 metric), it will create the table as needed.
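
Once rows start landing you can read them back over the same JDBC
connection. Rough sketch only -- SYSTEM.TRACING_STATS is the default table
name as I remember it (check phoenix.trace.statsTableName in your config)
and the column names here are from memory, so adjust to whatever actually
gets created:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ReadTraces {
      public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             // Table/column names are my recollection of the trace schema.
             ResultSet rs = stmt.executeQuery(
                 "SELECT trace_id, description, start_time, end_time "
                     + "FROM SYSTEM.TRACING_STATS LIMIT 20")) {
          while (rs.next()) {
            System.out.println(rs.getLong("trace_id") + " "
                + rs.getString("description"));
          }
        }
      }
    }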

Hope that helps!

-------------------
Jesse Yates
@jesse_yates
jyates.github.com


On Tue, Aug 26, 2014 at 7:21 PM, Dan Di Spaltro <dan.dispal...@gmail.com>
wrote:

> I've used the concept of tracing quite a bit in previous projects and I
> had a couple questions:
>
> * Can we use something like zipkin-htrace adapter for Phoenix traces? And
> if I did would the calls be coming from the RS?
> * How do you get the trace id on a query you create?  Generally I've used
> something where I can log back to a client a trace/span, and then go look
> through the queries to match up why something took so long, etc. I could be
> thinking about this wrong...
> * Do you have to load the DDL manually, nothing seems to auto-create it,
> no system table seems to be created outside of sequences and tables.  I
> have the default config files from Phoenix on the classpath.  I also have
> the compat and server jars on the CP.  Below are the log lines I see in the
> master and regionserver.
>   - I have set props.setProperty("phoenix.trace.frequency", "always") for
> every query.
>
> 2014-08-27 01:55:27,483 INFO  [main] trace.PhoenixMetricsSink: Writing
> tracing metrics to phoenix table
> 2014-08-27 01:55:27,484 INFO  [main] trace.PhoenixMetricsSink:
> Instantiating writer class:
> org.apache.phoenix.trace.PhoenixTableMetricsWriter
> 2014-08-27 01:55:27,490 INFO  [main] trace.PhoenixTableMetricsWriter:
> Phoenix tracing writer started
>
> Thanks for the help,
>
> -Dan
>
> --
> Dan Di Spaltro
>
