I have been thinking through the process of evaluating the "goodness" of a
trading system using AB metrics and have become perplexed.  Can someone who
has previously unravelled this issue help?

There seem to be two general approaches to portfolio sizing while doing a
back-test.
  
The first is to back-test using only the "Initial Equity" amount.
Generally, we might start with fixed position sizes and a fixed maximum
number of positions. In later development iterations we might use
risk-based position sizing or other schemes that vary position size, up to
the maximum of the Initial Equity.  I generally refer to this evaluation
approach as "Clamped Equity".  It tends to produce an equity curve that is
roughly linear.

The second approach is to compound profits and place trades up to "Current
Equity". (In AB terms, our position size is set to a % of Current Equity.)
This is referred to as "Compounding Profits".  The equity curve can take on
an exponential appearance.

In real-life trading most people tend to do a bit of both.  However, in
back-testing mode the "Compounding Profits" model (with a notionally good
system) can quickly produce position sizes too large to be realistic.  (If
only I had this system in 2000...)

So, now to the crux of the problem.  The "Clamped Equity" approach, with a
notionally good system, produces profit that is quarantined: accumulated
profit can be used to top up draw-downs, but the amount committed to trades
never exceeds the Initial Equity.  In AmiBroker's metrics, Exposure % is
calculated bar by bar as mark-to-market holdings against current
mark-to-market equity.  In the "Clamped Equity" testing approach, however,
the quarantining of profits is intentional, so it seems to me it would be
more useful to measure Exposure as a % of the "Clamped Equity" (i.e. the
"Initial Equity").
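The difference between the two denominators is easy to see in a sketch.
The bar-by-bar holdings and equity figures below are hypothetical, chosen
only to show a profitable clamped-equity run where current equity drifts
above the initial amount:

```python
initial_equity = 100_000.0

# (mark-to-market value of open positions, total mark-to-market equity)
# per bar; holdings stay clamped near the initial amount while total
# equity grows as quarantined profit accumulates.
bars = [
    (80_000.0, 100_000.0),
    (85_000.0, 110_000.0),
    (90_000.0, 125_000.0),
    (95_000.0, 140_000.0),
]

# AmiBroker's convention: holdings divided by current equity, averaged.
exposure_current = sum(h / e for h, e in bars) / len(bars) * 100

# The "Clamped Equity" view: holdings divided by fixed Initial Equity.
exposure_clamped = sum(h / initial_equity for h, _ in bars) / len(bars) * 100

print(f"Exposure vs current equity: {exposure_current:.1f}%")
print(f"Exposure vs initial equity: {exposure_clamped:.1f}%")
```

On a profitable run the growing current-equity denominator drags the
reported Exposure % down, even though the amount actually at risk relative
to the clamped capital has not changed.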

Exposure % is also used as a divisor in other metrics such as Net Risk
Adjusted Return %, Risk Adjusted Return %, Max System % Draw-down,
CAR/MaxDD and RAR/MaxDD, so these metrics may also be less useful under
this testing approach.

I can see that comparisons between competing models over the same test
period are valid.  However, I feel less secure when doing Walk-Forward
back-testing with a complex objective function, particularly if it mixes
weighted components that contain Exposure % with others that don't.

I know that it is relatively easy to use the Custom Backtester to produce
amended statistics.  However, I am concerned that I have not found any
other discussion of this issue on this or other forums, so perhaps my
thinking is muddled and it is not a real issue.   Any discussion would be
appreciated.

Robert 
