Hi,
On Aug 16, 2007, at 4:20 AM, dimitri papadimitriou wrote:
hi j-p
as far as i remember, this doc. is still open for discussion
following the San Diego and Prague meetings.
An ID is always open for discussion as long as it is not an RFC.
here below for the record
<http://www3.ietf.org/proceedings/06nov/minutes/pce.txt>
v03 has been reworked but does not provide answers to the concerns
expressed so far - quoting the doc.
> In PCE-based environments, it is critical to monitor the state of
> the
critical for what ? if computation time is an issue, why delegate
the computation in the first place (isn't that the safest assumption ?)
It looks like you are quibbling over words here ... Do you want to
replace "critical" with "useful",
is that your point ?
> path computation chain for troubleshooting and performance monitoring
> purposes:
troubleshooting of what ?
I think that I gave this explanation a few times already ... Suppose
that the head-end experiences long response times. I think we can
agree that this is potentially an issue, in which case the user may
want to troubleshoot by issuing such a request in order to determine
the location (which PCE of the chain) and the root cause.
if there is congestion/trouble, how would you ensure that the
information received back is accurate ?
The information on waiting times, CPU utilization, ... is
provided by the PCE.
The issue of accuracy is no different than for any other
information provided, for example, by a MIB.
> liveness of each element (PCE) involved in the PCE chain,
if i remember correctly, PCE follows a client-server model (a
fundamental assumption of the PCE approach); hence, why does the
client need to know the "chain" of PCE servers ?
Strange question ... Try to operate a network and you'll immediately
figure it out.
Back to the previous case, consider an inter-domain TE deployment
where the PCE chain may potentially be long. Isn't it useful during
a troubleshooting event, for example, to know the set of PCEs
involved in a PCE chain (or to check a particular PCE chain) ?
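To make the idea concrete, here is a hypothetical sketch (the names,
data structure, and metric values are mine, not the draft's) of a
monitoring request that walks a PCE chain and records, per PCE, its
identity and a locally reported metric, so the requester learns both
the chain membership and where time is being spent:

```python
# Hypothetical sketch: each PCE in the chain appends its identifier
# and a reported metric to the request's record.  The requester can
# then see the full chain and pick out the slow hop.
def monitor_chain(chain):
    record = []
    for pce in chain:
        record.append((pce["id"], pce["avg_compute_ms"]))
    return record

# illustrative chain of three PCEs with assumed metric values
chain = [
    {"id": "pce-domain-1", "avg_compute_ms": 12.0},
    {"id": "pce-domain-2", "avg_compute_ms": 95.0},  # the slow hop
    {"id": "pce-domain-3", "avg_compute_ms": 11.0},
]
slowest = max(monitor_chain(chain), key=lambda e: e[1])
print(slowest[0])  # prints "pce-domain-2", the PCE to investigate first
```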
> detection of potential resource contention states
at t[0] contention, at t[1] a message for perf. monitoring -> are the
running conditions identical ? since, probabilistically, they are
not, what is the expectation behind this mechanism ?
would it be possible to have a "curve" of the deterioration of
performance, since with an increasing number of computed paths the
number of "monitoring messages" will also increase ?
Sure, but ... this is true for all OAM tools. If you use a ping to
locate a congestion spot, by the time you get your reply the network
state may have changed 10 times ... but if you use it because of a
sustained congestion state, then the ping may help you locate the
problem. The exact same reasoning applies here. Note that you can
also retrieve historical data computed over a period of time; this
is a matter of local configuration on the PCE, in which case you
could retrieve the averaged computation time (using, for example, a
low-pass filter).
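For the record, the low-pass-filter averaging mentioned above can be
sketched as an exponentially weighted moving average (a hypothetical
illustration; the smoothing factor `alpha` and the sample values are
my assumptions, not taken from the draft):

```python
def low_pass_filter(samples, alpha=0.2):
    """Exponentially weighted moving average of per-request path
    computation times: a smoothed 'historical' metric a PCE could
    expose instead of raw, instantaneous values."""
    avg = None
    for t in samples:
        # new average = alpha * latest sample + (1 - alpha) * old average
        avg = t if avg is None else alpha * t + (1 - alpha) * avg
    return avg

# e.g. computation times in milliseconds; one transient spike
times = [12.0, 11.5, 40.0, 13.0, 12.5]
print(round(low_pass_filter(times), 2))  # -> 15.79
```

The filter damps the one-off spike at 40 ms, which is the point JP
makes about sustained versus transient conditions.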
> statistics in terms of path computation times are examples of such
> metrics of interest.
interest to whom, and for which purpose (detection is fine, but what
is the issue to be solved) ?
Again, sorry ... but strange question ... Of interest to the user,
of course. Before fixing a problem, it is usually useful to locate
its root cause. This tool may help for that purpose.
like any system, PCE requires suitable planning and dimensioning
w.r.t. performance objectives; i have the impression that these
fundamental design steps are being skipped.
let's start the discussion with this.
side note: the document states "In this document we call a "state
metric" a metric that characterizes a PCE state" -> the latter
("PCE state") needs to be defined.
Sure.
JP.
thanks,
-d.
JP Vasseur wrote:
Hi,
Just to let you know about an IPR disclosure that has been filed with
the IETF in relation to
http://tools.ietf.org/id/draft-vasseur-pce-monitoring-03.txt.
You can see the disclosure at https://datatracker.ietf.org/ipr/872/
and see the terms offered by the IPR claimant.
Thanks,
JP.
_______________________________________________
Pce mailing list
[email protected]
https://www1.ietf.org/mailman/listinfo/pce