On Saturday, January 20, 2024 at 2:14:05 PM UTC-6 Edward K. Ream wrote:

From Figure 3:

a, We first sample a large set of random theorem premises.

b, We use the symbolic deduction engine to obtain a deduction closure.
*This returns a directed acyclic graph of statements.*
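
A minimal sketch of what such a deduction closure might look like (Python; the rule functions and the statement encoding are hypothetical placeholders, not the paper's actual engine):

def deduction_closure(premises, rules):
    """Saturate the premises under forward inference rules.

    Each rule takes the set of known statements and yields
    (conclusion, parent_statements) pairs. Returns a DAG encoded as
    {statement: list of parent statements}; premises have no parents.
    """
    dag = {p: [] for p in premises}
    changed = True
    while changed:
        changed = False
        for rule in rules:
            # Snapshot the currently known statements before applying the rule.
            for conclusion, parents in rule(set(dag)):
                if conclusion not in dag:
                    dag[conclusion] = list(parents)
                    changed = True
    return dag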


I read the caption before the introduction; the caption provides better context:

QQQ
Next we use a symbolic deduction engine on the sampled premises.

The engine quickly deduces new true statements by following forward inference rules as shown in Fig. 3b. This returns a directed acyclic graph of all reachable conclusions.

Each node in the directed acyclic graph is a reachable conclusion, with edges connecting to its parent nodes thanks to the traceback algorithm described in Methods. This allows a traceback process to run recursively starting from any node N, at the end returning its dependency subgraph G(N), with its root being N and its leaves being a subset of the sampled premises.

Denoting this subset as P, we obtained a synthetic training example

(premises, conclusion, proof) = (P, N, G(N)).
QQQ
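
The traceback step in that last paragraph is essentially a recursive walk from N back to the premise leaves. A rough sketch, reusing the hypothetical {statement: parents} DAG encoding from the sketch above:

def traceback(dag, node):
    """Collect the dependency subgraph G(node) and the premises it rests on.

    Returns (P, subgraph): subgraph maps each statement reachable from
    `node` (walking parent edges) to its parents; P is the subset of
    sampled premises appearing as leaves of that subgraph.
    """
    subgraph = {}
    premises = set()

    def visit(n):
        if n in subgraph:
            return
        parents = dag[n]
        subgraph[n] = parents
        if not parents:        # a leaf: one of the sampled premises
            premises.add(n)
        for parent in parents:
            visit(parent)

    visit(node)
    return premises, subgraph

# One synthetic training example, as in the caption:
#   P, G_N = traceback(dag, N)
#   example = (P, N, G_N)    # (premises, conclusion, proof)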

Edward
