LuKreme wrote:
On Mar 3, 2009, at 10:06, John Wilcock <j...@tradoc.fr> wrote:

On 03/03/2009 17:42, Matus UHLAR - fantomas wrote:
I have already been thinking about the possibility of combining every pair of rules
and doing a masscheck over them. Then, optionally, repeating that again,
skipping duplicates. Finally, gathering all rules that scored >= 0.5 or <= -0.5
- we could have an interesting ruleset there.
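
To make that first pass concrete, here is a rough Python sketch. It assumes per-message rule hits from a masscheck run are already available and that a hypothetical rescore_meta() helper returns the score a combined rule would be assigned after rescoring; neither is a real SpamAssassin API, and the optional second pass over the surviving combinations is left out.

# Minimal sketch only: rescore_meta() is a hypothetical helper, not part of SA.
from itertools import combinations

def candidate_meta_rules(rule_hits, rescore_meta, threshold=0.5):
    """rule_hits: dict mapping message id -> set of rule names that fired."""
    rules = sorted({r for hits in rule_hits.values() for r in hits})
    candidates = {}
    for a, b in combinations(rules, 2):          # every pair of existing rules
        name = f"META_{a}_AND_{b}"
        score = rescore_meta(rule_hits, a, b)    # masscheck-style rescoring (assumed)
        if score >= threshold or score <= -threshold:
            candidates[name] = score             # keep only strongly scored pairs
    return candidates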

But that's going to be a HUGE ruleset.

Not to mention that different combinations will suit different sites.

I wonder about the feasibility of a second Bayesian database, using the same learning mechanism as the current system, but keeping track of rule combinations instead of keywords.
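
Roughly, such a database could look like the Python sketch below: rule-hit pairs take the place of word tokens, and spam/ham counting mirrors the usual train-and-classify workflow. The class name, the token format, and the crude probability combining are all illustrative assumptions, not SA's actual Bayes implementation.

# Illustrative sketch, not SA code: Bayes-style learning over rule combinations.
from collections import defaultdict
from itertools import combinations

class RuleComboBayes:
    def __init__(self):
        self.spam = defaultdict(int)   # token -> count seen in spam
        self.ham = defaultdict(int)    # token -> count seen in ham
        self.nspam = self.nham = 0

    @staticmethod
    def tokens(rules_hit):
        # Each sorted pair of rules that fired becomes one "token".
        return [f"{a}+{b}" for a, b in combinations(sorted(rules_hit), 2)]

    def learn(self, rules_hit, is_spam):
        counts = self.spam if is_spam else self.ham
        for tok in self.tokens(rules_hit):
            counts[tok] += 1
        if is_spam:
            self.nspam += 1
        else:
            self.nham += 1

    def spamminess(self, rules_hit):
        # Naive per-token spam probabilities, combined very crudely.
        probs = []
        for tok in self.tokens(rules_hit):
            s = self.spam[tok] / max(self.nspam, 1)
            h = self.ham[tok] / max(self.nham, 1)
            if s + h:
                probs.append(s / (s + h))
        return sum(probs) / len(probs) if probs else 0.5

Training would then be learn(rules_hit, is_spam) on each classified message, and spamminess(rules_hit) at scan time, just as with the existing token-based Bayes.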

It sounds like a really good idea to me, and also like the most reasonable way to manage self-learning meta rules.

It seems to me that the consensus is that it's worth a try. I don't know whether it will work, but I think there's a good chance this could be a significant advancement in how well SA works.
