On Sat, Feb 9, 2019 at 4:22 AM Ben Goertzel <b...@goertzel.org> wrote:

>
> We are now playing with hybridizing these symbolic-ish grammar
> induction methods with neural-net language models, basically using the
> predictive models produced by networks in the BERT lineage (but more
> sophisticated than vanilla BERT) in place of simple mutual information
> values, to produce more broadly context-sensitive parse choices in
> Linas's MST parser...
>

This last sentence suggests that the near-total confusion about MST
persists in the team. I keep telling them: collect the statistics, and
then discard the MST parse **immediately**. Trying to "improve" MST is a
total waste of time.

Seriously: Instead, try skipping the MST step entirely.  Just do not even
do it, AT ALL. Rip it out. It is NOT a step that the algorithm even needs.
I'll bet you that if you skip the MST step completely, the quality of your
results will be more-or-less unchanged.  The results might even get
better!

If your results don't change when you skip MST, or if they actually get
better, then that should be a clear indicator that trying to "improve"
MST is a waste of time!

-- Linas

-- 
cassette tapes - analog TV - film cameras - you
