>>>>> Mathew, Cherry G * <c...@bow.st> writes:

[...]


    > With that thought, I take leave for the weekend, and hope to put
    > my money where my mouth is, by debugging the current C
    > implementation using ideas described above.

    > $ make prog $ ./arc

    > should give you a crash dump for gdb, which, I'm getting a SEG
    > fault at delLRU() well into the caching input trace (T1 has 59 out
    > of 64 entries filled!)


Hello again,

Reporting back after a really interesting weekend. Highlights are:

- The "D3" methodology seems to work as advertised - it found a sneaky
  "ordering" bug in the "list"->qcount inc/dec relative to the TAILQ()
  ops.
- The segfault turned out to be a namespace-collision complication
  unrelated to the ARC algorithm itself.

I wrote up an "Equivalence Verification" subdirectory (see arc_queue/
below) and moved around a few files. Other than these, everything
should be self-explanatory.

A few things remain WIP. Obviously, this is a "toy" model - it has stack
based "memory" management and a directory buffer size of 64, so that
needs to be hammered on a bit more. Further, I'm keen to dip my toes
into re-entrancy next. If you noticed in the earlier patches, there were
model routines for mutex_enter()/mutex_exit(). I'll revisit these and
play around a little with concurrent entry of ARC() (ARC() is currently
entered strictly sequentially in a loop - see arc.drv). Once things
settle down, I will look at making an actual block disk driver with an
ARC(9)-managed cache using this code.

Another area of further work would be the *.inv files, where the
invariant specifications are made.

I also noticed that the arc input trace makes the extracted and
hand-coded models exclude slightly different-looking codepaths. I need
to review these to understand how/what is different. Finally, I need to
update the input trace itself to exercise all possible codepaths in
both.

TLDR, "I want to try this":

Pre-requisites: spin and modex installed in $PATH

$ make spin-gen spin-run; make clean; make modex-gen spin-run; make clean; make prog; ./arc; make clean

Ditto above after:

$ cd arc_queue

If you want to inspect the generated / hand-coded spinmodel.pml, do so
after the respective '*-gen' targets run above - they create and
overwrite the pre-existing spinmodel.pml in $PWD.

As always, looking forward to bricks and bats!

Many Thanks.
-- 
~cherry

diff -urN arc.null/arc.c arc/arc.c
--- arc.null/arc.c      1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.c   2023-09-14 13:59:20.607107308 +0000
@@ -0,0 +1,173 @@
+/* C Implementation of the Adaptive Replacement Cache algorithm. Written by cherry */
+
+/*
+ * We implement the following algorithm from page 10, Figure 4.
+ * https://www.usenix.org/legacy/events/fast03/tech/full_papers/megiddo/megiddo.pdf
+ *
+ *
+ *  ARC(c)
+ *  
+ *  INPUT: The request stream x1,x2,....,xt,....
+ *  INITIALIZATION: Set p = 0 and set the LRU lists T1, B1, T2, and B2 to 
empty.
+ *  
+ *  For every t>=1 and any xt, one and only one of the following four cases 
must occur.
+ *  Case I: xt is in T1 or T2. A cache hit has occurred in ARC(c) and DBL(2c).
+ *       Move xt to MRU position in T2.
+ *  
+ *  Case II: xt is in B1. A cache miss (resp. hit) has occurred in ARC(c) 
(resp. DBL(2c)).
+ *              ADAPTATION: Update p = min { p + d1,c }
+ *                  where d1 = { 1 if |B1| >= |B2|, |B2|/|B1| otherwise
+ *  
+ *       REPLACE(xt, p). Move xt from B1 to the MRU position in T2 (also fetch 
xt to the cache).
+ *  
+ *  Case III: xt is in B2. A cache miss (resp. hit) has occurred in ARC(c) 
(resp. DBL(2c)).
+ *              ADAPTATION: Update p = max { p - d2,0 }
+ *                  where d2 = { 1 if |B2| >= |B1|, |B1|/|B2| otherwise
+ *  
+ *       REPLACE(xt, p). Move xt from B2 to the MRU position in T2 (also fetch 
xt to the cache).
+ *       
+ *  Case IV: xt is not in T1 U B1 U T2 U B2. A cache miss has occurred in 
ARC(c) and DBL(2c).
+ *       Case A: L1 = T1 U B1 has exactly c pages.
+ *               If (|T1| < c)
+ *                     Delete LRU page in B1. REPLACE(xt,p).
+ *               else
+ *                     Here B1 is empty. Delete LRU page in T1 (also remove it 
from the cache).
+ *               endif
+ *       Case B: L1 = T1 U B1 has less than c pages.
+ *               If (|T1| + |T2| + |B1| + |B2| >= c)
+ *                  Delete LRU page in B2, if (|T1| + |T2| + |B1| + |B2| = 2c).
+ *                  REPLACE(xt, p).
+ *               endif
+ *  
+ *       Finally, fetch xt to the cache and move it to MRU position in T1.
+ *  
+ *  Subroutine REPLACE(xt,p)
+ *       If ( (|T1| is not empty) and ((|T1| exceeds the target p) or (xt is 
in B2 and |T1| = p)) )
+ *               Delete the LRU page in T1 (also remove it from the cache), 
and move it to MRU position in B1.
+ *       else
+ *               Delete the LRU page in T2 (also remove it from the cache), 
and move it to MRU position in B2.
+ *       endif
+ */
+ 
+#include "arc_queue/arc.h"
+
+static void arc_list_init(struct arc_list *_arc_list)
+{
+       TAILQ_INIT(&_arc_list->qhead);
+       _arc_list->qcount = 0;
+       
+       int i;
+       for(i = 0; i < ARCLEN; i++) {
+               init_arc_item(&_arc_list->item_list[i], IID_INVAL, false);
+       };
+}
+
+int p, d1, d2;
+struct arc_list _B1, *B1 = &_B1, _B2, *B2 = &_B2, _T1, *T1 = &_T1, _T2, *T2 = &_T2;
+
+void arc_init(void)
+{
+       p = d1 = d2 = 0;
+
+       arc_list_init(B1);
+       arc_list_init(B2);
+       arc_list_init(T1);
+       arc_list_init(T2);
+}
+
+struct arc_item _LRUitem, *LRUitem = &_LRUitem;
+
+static void
+REPLACE(struct arc_item *x_t, int p)
+{
+
+
+       init_arc_item(LRUitem, IID_INVAL, false);
+
+       if ((lengthof(T1) != 0) &&
+           ((lengthof(T1) >  p) ||
+            (memberof(B2, x_t) && (lengthof(T1) == p)))) {
+               readLRU(T1, LRUitem);
+               delLRU(T1);
+               cacheremove(LRUitem);
+               addMRU(B1, LRUitem);
+       } else {
+               readLRU(T2, LRUitem);
+               delLRU(T2);
+               cacheremove(LRUitem);
+               addMRU(B2, LRUitem);
+       }
+}
+
+void
+ARC(struct arc_item *x_t)
+{
+       /* One and only one of the four cases occurs for each request */
+       if (memberof(T1, x_t)) { /* Case I */
+               delitem(T1, x_t);
+               addMRU(T2, x_t);
+       } else if (memberof(T2, x_t)) { /* Case I */
+               delitem(T2, x_t);
+               addMRU(T2, x_t);
+       } else if (memberof(B1, x_t)) { /* Case II */
+               d1 = ((lengthof(B1) >= lengthof(B2)) ? 1 : (lengthof(B2)/lengthof(B1)));
+               p = min((p + d1), C);
+
+               REPLACE(x_t, p);
+
+               delitem(B1, x_t);
+               addMRU(T2, x_t);
+               cachefetch(x_t);
+       } else if (memberof(B2, x_t)) { /* Case III */
+               d2 = ((lengthof(B2) >= lengthof(B1)) ? 1 : (lengthof(B1)/lengthof(B2)));
+               p = max((p - d2), 0);
+
+               REPLACE(x_t, p);
+
+               delitem(B2, x_t);
+               addMRU(T2, x_t);
+               cachefetch(x_t);
+       } else { /* Case IV */
+
+               if ((lengthof(T1) + lengthof(B1)) == C) { /* Case A */
+                       if (lengthof(T1) < C) {
+                               delLRU(B1);
+                               REPLACE(x_t, p);
+                       } else {
+                               assert(lengthof(B1) == 0);
+                               readLRU(T1, LRUitem);
+                               delLRU(T1);
+                               cacheremove(LRUitem);
+                       }
+               }
+
+               if ((lengthof(T1) + lengthof(B1)) < C) {
+                       if ((lengthof(T1) +
+                            lengthof(T2) +
+                            lengthof(B1) +
+                            lengthof(B2)) >= C) {
+                               if ((lengthof(T1) +
+                                    lengthof(T2) +
+                                    lengthof(B1) +
+                                    lengthof(B2)) == (2 * C)) {
+
+                                       delLRU(B2);
+                               }
+                               
+                               REPLACE(x_t, p);
+                       }
+               }
+               cachefetch(x_t);
+               addMRU(T1, x_t);
+       }
+}
diff -urN arc.null/arc.drv arc/arc.drv
--- arc.null/arc.drv    1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.drv 2023-09-07 16:43:18.906339006 +0000
@@ -0,0 +1,52 @@
+/* Spin process model for Adaptive Replacement Cache algorithm. Written by cherry */
+
+/* Note: What we're attempting in this driver file, is to generate an
+ * input trace that would exercise all code-paths of the model specified
+ * in arc.pml
+ *
+ * Feeding a static trace to the algorithm in array _x[N_ITEMS] is an
+ * acceptable alternative.
+ */
+/* #include "arc.pmh" See Makefile , spinmodel.pml */
+
+#define N_ITEMS (2 * C) /* Number of distinct cache items to test with */
+#define ITEM_REPS (C / 4) /* Max repeat item requests */
+#define N_ITERATIONS 3
+
+hidden arc_item _x[N_ITEMS]; /* Input state is irrelevant from a verification PoV */
+hidden int _x_iid = 0;
+hidden int _item_rep = 0;
+hidden int _iterations = 0;
+
+/* Drive the procs */
+init {
+
+       atomic {
+              do
+              ::
+              _iterations < N_ITERATIONS ->
+              
+                       _x_iid = 0;
+                       do
+                       :: _x_iid < N_ITEMS ->
+                                  init_arc_item(_x[_x_iid], _x_iid, false);
+                                  _item_rep = 0;
+                                  do
+                                  :: _item_rep < (_x_iid % ITEM_REPS) ->
+                                               ARC(_x[_x_iid]);
+                                               _item_rep++;
+                                  :: _item_rep >= (_x_iid % ITEM_REPS) ->
+                                               break;
+                                  od
+                                  _x_iid++;
+                       :: _x_iid >= N_ITEMS ->
+                               break;
+                       od
+                       _iterations++;
+               ::
+               _iterations >= N_ITERATIONS ->
+                           break;
+               od
+       }
+
+}
diff -urN arc.null/arc_drv.c arc/arc_drv.c
--- arc.null/arc_drv.c  1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_drv.c       2023-09-14 14:01:04.826170703 +0000
@@ -0,0 +1,35 @@
+/* See arc.drv for design details */
+
+#include "arc_queue/arc.h"
+
+#define N_ITEMS (2 * C) /* Number of distinct cache items to test with */
+#define ITEM_REPS (C / 4) /* Max repeat item requests */
+#define N_ITERATIONS 3
+
+static struct arc_item _x[N_ITEMS]; /* Input state is irrelevant from a verification PoV */
+static int _x_iid = 0;
+static int _item_rep = 0;
+static int _iterations = 0;
+
+/* Drive ARC() with a preset input trace */
+
+int
+main(void)
+{
+       arc_init(); /* Init module state */
+
+       while (_iterations < N_ITERATIONS) {
+               _x_iid = 0;
+               while (_x_iid < N_ITEMS) {
+                       init_arc_item(&_x[_x_iid], _x_iid, false);
+                       _item_rep = 0;
+                       while(_item_rep < (_x_iid % ITEM_REPS) ) {
+                               ARC(&_x[_x_iid]);
+                               _item_rep++;
+                       } 
+                       _x_iid++;
+               }
+               _iterations++;
+       }
+}
+
diff -urN arc.null/arc.inv arc/arc.inv
--- arc.null/arc.inv    1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.inv 2023-09-14 08:39:49.817804660 +0000
@@ -0,0 +1,59 @@
+/* $NetBSD$ */
+
+/* These are Linear Temporal Logic invariants (and constraints)
+ * applied over the statespace created by the promela
+ * specification. Correctness is implied by Logical consistency.
+ */
+ltl 
+{
+       /* Liveness - all threads, except control must finally exit */
+       eventually always (_nr_pr == 1) &&
+       /* c.f Section I. B, on page 3 of paper */
+       always ((lengthof(T1) +
+                lengthof(B1) +
+                lengthof(T2) +
+                lengthof(B2)) <= (2 * C)) &&
+
+       /* Reading together Section III. A., on page 7, and
+        * Section III. B., on pages  7,8
+        */
+       always ((lengthof(T1) + lengthof(B1)) <= C) &&
+       always ((lengthof(T2) + lengthof(B2)) < (2 * C)) &&
+
+       /* Section III. B, Remark III.1 */
+       always ((lengthof(T1) + lengthof(T2)) <= C) &&
+
+       /* TODO: III B, A.1 */
+
+       /* III B, A.2 */
+       always (((lengthof(T1) +
+                 lengthof(B1) +
+                 lengthof(T2) +
+                 lengthof(B2)) < C)
+                implies ((lengthof(B1) == 0) &&
+                         (lengthof(B2) == 0))) &&
+
+       /* III B, A.3 */
+       always (((lengthof(T1) +
+                 lengthof(B1) +
+                 lengthof(T2) +
+                 lengthof(B2)) >= C)
+                implies (((lengthof(T1) +
+                           lengthof(T2)) == C))) &&
+
+       /* TODO: III B, A.4 */
+
+       /* TODO: III B, A.5 */
+
+       /* IV A. */
+       always (p <= C) &&
+
+       /* Not strictly true, but these force us to generate a "good"
+        * input trace via arc.drv
+        */
+
+       eventually /* always ? */ ((lengthof(T1) == p) && (lengthof(T2) == (C - p))) &&
+
+       eventually (p > 0)
+               
+}
diff -urN arc.null/arc.pml arc/arc.pml
--- arc.null/arc.pml    1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.pml 2023-09-14 07:06:50.440288307 +0000
@@ -0,0 +1,212 @@
+/* Spin process model for Adaptive Replacement Cache algorithm. Written by cherry */
+
+/*
+ * We implement the following algorithm from page 10, Figure 4.
+ * https://www.usenix.org/legacy/events/fast03/tech/full_papers/megiddo/megiddo.pdf
+ *
+ *
+ *  ARC(c)
+ *  
+ *  INPUT: The request stream x1,x2,....,xt,....
+ *  INITIALIZATION: Set p = 0 and set the LRU lists T1, B1, T2, and B2 to 
empty.
+ *  
+ *  For every t>=1 and any xt, one and only one of the following four cases 
must occur.
+ *  Case I: xt is in T1 or T2. A cache hit has occurred in ARC(c) and DBL(2c).
+ *       Move xt to MRU position in T2.
+ *  
+ *  Case II: xt is in B1. A cache miss (resp. hit) has occurred in ARC(c) 
(resp. DBL(2c)).
+ *              ADAPTATION: Update p = min { p + d1,c }
+ *                  where d1 = { 1 if |B1| >= |B2|, |B2|/|B1| otherwise
+ *  
+ *       REPLACE(xt, p). Move xt from B1 to the MRU position in T2 (also fetch 
xt to the cache).
+ *  
+ *  Case III: xt is in B2. A cache miss (resp. hit) has occurred in ARC(c) 
(resp. DBL(2c)).
+ *              ADAPTATION: Update p = max { p - d2,0 }
+ *                  where d2 = { 1 if |B2| >= |B1|, |B1|/|B2| otherwise
+ *  
+ *       REPLACE(xt, p). Move xt from B2 to the MRU position in T2 (also fetch 
xt to the cache).
+ *       
+ *  Case IV: xt is not in T1 U B1 U T2 U B2. A cache miss has occurred in 
ARC(c) and DBL(2c).
+ *       Case A: L1 = T1 U B1 has exactly c pages.
+ *               If (|T1| < c)
+ *                     Delete LRU page in B1. REPLACE(xt,p).
+ *               else
+ *                     Here B1 is empty. Delete LRU page in T1 (also remove it 
from the cache).
+ *               endif
+ *       Case B: L1 = T1 U B1 has less than c pages.
+ *               If (|T1| + |T2| + |B1| + |B2| >= c)
+ *                  Delete LRU page in B2, if (|T1| + |T2| + |B1| + |B2| = 2c).
+ *                  REPLACE(xt, p).
+ *               endif
+ *  
+ *       Finally, fetch xt to the cache and move it to MRU position in T1.
+ *  
+ *  Subroutine REPLACE(xt,p)
+ *       If ( (|T1| is not empty) and ((|T1| exceeds the target p) or (xt is 
in B2 and |T1| = p)) )
+ *               Delete the LRU page in T1 (also remove it from the cache), 
and move it to MRU position in B1.
+ *       else
+ *               Delete the LRU page in T2 (also remove it from the cache), 
and move it to MRU position in B2.
+ *       endif
+ */
+ 
+/* Temp variable to hold LRU item */
+hidden arc_item LRUitem;
+
+/* Adaptation "delta" variables */
+int d1, d2;
+int p = 0;
+
+/* Declare arc lists - "shadow/ghost cache directories" */
+arc_list B1, B2, T1, T2;
+
+inline REPLACE(/* arc_item */ x_t, /* int */ p)
+{
+       /*
+        * Since LRUitem is declared in scope p_ARC, we expect it to be
+        * accessible only from there and from REPLACE(),
+        * as REPLACE() is only expected to be called from p_ARC.
+        * XXX: May need to revisit due to Modex related limitations.
+        */
+       init_arc_item(LRUitem, IID_INVAL, false);
+       
+       if
+               ::
+               (lengthof(T1) != 0) &&
+               ((lengthof(T1) > p) || (memberof(B2, x_t) && (lengthof(T1) == p)))
+               ->
+               {
+                      readLRU(T1, LRUitem);
+                      delLRU(T1);
+                      cacheremove(LRUitem);
+                      addMRU(B1, LRUitem);
+               }
+
+               ::
+               else
+               ->
+               {
+                      readLRU(T2, LRUitem);
+                      delLRU(T2);
+                      cacheremove(LRUitem);
+                      addMRU(B2, LRUitem);
+               }
+       fi
+}
+
+inline ARC(/* arc_item */ x_t)
+{
+       if
+               :: /* Case I */
+               memberof(T1, x_t)
+               ->
+               {
+                      delitem(T1, x_t);
+                      addMRU(T2, x_t);
+               }
+               :: /* Case I */
+               memberof(T2, x_t)
+               ->
+               {
+                      delitem(T2, x_t);
+                      addMRU(T2, x_t);
+               }
+               :: /* Case II */
+               memberof(B1, x_t)
+               ->
+               d1 = ((lengthof(B1) >= lengthof(B2)) -> 1 : (lengthof(B2)/lengthof(B1)));
+               p = min((p + d1), C);
+
+               REPLACE(x_t, p);
+               {
+                      delitem(B1, x_t);
+                      addMRU(T2, x_t);
+                      cachefetch(x_t);
+               }
+               :: /* Case III */
+               memberof(B2, x_t)
+               ->
+               d2 = ((lengthof(B2) >= lengthof(B1)) -> 1 : (lengthof(B1)/lengthof(B2)));
+               p = max(p - d2, 0);
+               
+               REPLACE(x_t, p);
+               {
+                      delitem(B2, x_t);
+                      addMRU(T2, x_t);
+                      cachefetch(x_t);
+               }
+               :: /* Case IV */
+               !(memberof(T1, x_t) ||
+                 memberof(B1, x_t) ||
+                 memberof(T2, x_t) ||
+                 memberof(B2, x_t))
+               ->
+               if
+                       :: /* Case A */
+                       ((lengthof(T1) + lengthof(B1)) == C)
+                       ->
+                       if
+                               ::
+                               (lengthof(T1) < C)
+                               ->
+                               delLRU(B1);
+                               REPLACE(x_t, p);
+                               ::
+                               else
+                               ->
+                               assert(lengthof(B1) == 0);
+                               {
+                                      readLRU(T1, LRUitem);
+                                      delLRU(T1);
+                                      cacheremove(LRUitem);
+                               }
+                       fi
+                       :: /* Case B */
+                       ((lengthof(T1) + lengthof(B1)) < C)
+                       ->
+                       if
+                               ::
+                               ((lengthof(T1) +
+                                 lengthof(T2) +
+                                 lengthof(B1) +
+                                 lengthof(B2)) >= C)
+                               ->
+                               if
+                                       ::
+                                       ((lengthof(T1) +
+                                         lengthof(T2) +
+                                         lengthof(B1) +
+                                         lengthof(B2)) == (2 * C))
+                                       ->
+                                       delLRU(B2);
+                                       ::
+                                       else
+                                       ->
+                                       skip;
+                               fi
+                               REPLACE(x_t, p);
+                               ::
+                               else
+                               ->
+                               skip;
+                       fi
+                       ::
+                       else
+                       ->
+                       skip;
+               fi
+               cachefetch(x_t);
+               addMRU(T1, x_t);
+       fi
+
+}
+
+#if 0 /* Resolve this after modex extract foo */
+proctype p_arc(arc_item x_t)
+{
+       /* Serialise entry */   
+       mutex_enter(sc_lock);
+
+       ARC(x_t);
+
+       mutex_exit(sc_lock);
+}
+#endif
diff -urN arc.null/arc.prx arc/arc.prx
--- arc.null/arc.prx    1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.prx 2023-09-14 07:06:38.900036315 +0000
@@ -0,0 +1,80 @@
+// Spin model extractor harness written by cherry
+//
+%F arc.c
+%X -n REPLACE
+%X -n ARC
+%H
+// Disable effects of all included files and try to implement a subset of the APIs they provide.
+#define _ARC_H_
+%%
+//%C  // c_code {}
+//%%
+//%D // c_cdecl {}
+//%%
+%L
+// We use spin primitives and data objects.
+// See %P Below
+NonState       hidden  _LRUitem
+NonState       hidden  LRUitem
+NonState       hidden  _B2
+NonState       hidden  B2
+NonState       hidden  _B1
+NonState       hidden  B1
+NonState       hidden  _T2
+NonState       hidden  T2
+NonState       hidden  _T1
+NonState       hidden  T1
+NonState       hidden  x_t
+
+
+
+assert(...             keep
+REPLACE(...            keep
+init_arc_item(...      keep
+lengthof(...           keep
+memberof(...           keep
+addMRU(...             keep
+readLRU(...            keep
+delLRU(...             keep
+delitem(...            keep
+cacheremove(...                keep
+cachefetch(...         keep
+
+
+Substitute             c_expr { ((lengthof(T1)!=0)&&((lengthof(T1)>now.p)||(memberof(B2,x_t)&&(lengthof(T1)==now.p)))) }       (lengthof(T1) != 0) && ((lengthof(T1) > p) || (memberof(B2, x_t) && (lengthof(T1) == p)))
+Substitute             c_code { now.d1=((lengthof(B1)>=lengthof(B2)) ? Int 1\n : (lengthof(B2)/lengthof(B1))); }       d1 = ((lengthof(B1) >= lengthof(B2)) -> 1 : (lengthof(B2)/lengthof(B1)))
+Substitute             c_code { now.p=min((now.p+now.d1),C); }  p = min((p + d1), C)
+
+Substitute             c_code { now.d2=((lengthof(B2)>=lengthof(B1)) ? Int 1\n : (lengthof(B1)/lengthof(B2))); }       d2 = ((lengthof(B2) >= lengthof(B1)) -> 1 : (lengthof(B1)/lengthof(B2)));
+Substitute             c_code { now.p=max((now.p-now.d2),0); }  p = max(p - d2, 0);
+Substitute             c_expr { (!(((memberof(T1,x_t)||memberof(B1,x_t))||memberof(T2,x_t))||memberof(B2,x_t))) }      !(memberof(T1, x_t) || memberof(B1, x_t) || memberof(T2, x_t) || memberof(B2, x_t))
+Substitute             c_expr { ((lengthof(T1)+lengthof(B1))==C) }     ((lengthof(T1) + lengthof(B1)) == C)
+Substitute             c_expr { (lengthof(T1)<C) }     (lengthof(T1) < C)
+Substitute             c_expr { ((lengthof(T1)+lengthof(B1))<C) }      ((lengthof(T1) + lengthof(B1)) < C)
+Substitute             c_expr { ((((lengthof(T1)+lengthof(T2))+lengthof(B1))+lengthof(B2))>=C) }       ((lengthof(T1) + lengthof(T2) + lengthof(B1) + lengthof(B2)) >= C)
+Substitute             c_expr { ((((lengthof(T1)+lengthof(T2))+lengthof(B1))+lengthof(B2))==(2*C)) }   ((lengthof(T1) + lengthof(T2) + lengthof(B1) + lengthof(B2)) == (2 * C))
+%%
+
+%P
+
+/* Temp variable to hold LRU item */
+hidden arc_item LRUitem;
+
+arc_list B1, B2, T1, T2;
+
+#define p_REPLACE(_arg1, _arg2) REPLACE(_arg1, _arg2) /* Demo arbitrary Cfunc->PMLproc transformation */
+inline p_REPLACE(/* arc_item */ x_t, /* int */ p)
+{
+
+#include "_modex_REPLACE.pml"
+
+}
+
+#define p_ARC(_arg1) ARC(_arg1)
+inline p_ARC(/* arc_item */ x_t)
+{
+
+#include "_modex_ARC.pml"
+
+}
+%%
\ No newline at end of file
diff -urN arc.null/arc_queue/arc.h arc/arc_queue/arc.h
--- arc.null/arc_queue/arc.h    1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc.h 2023-09-14 13:58:22.433238237 +0000
@@ -0,0 +1,170 @@
+/*
+ * The objective of the header here is to provide a set of macros that
+ * reflect the interfaces designed in arc.pmh
+ */
+
+#ifndef _ARC_H_
+#define _ARC_H_
+
+#ifdef MODEX
+/* Glue for model extraction run */
+#else
+/* Defaults to POSIX */
+#include <assert.h>
+#include <stddef.h>
+#include <stdbool.h>
+#endif
+
+#include "queue.h" /* We use the NetBSD version as it has no
+                   * dependencies (except for -DNULL) . */
+
+#define C 64
+
+#define ARCLEN (2 * C) /* c.f. ghost cache directory length constraints in arc.inv */
+
+#define IID_INVAL -1
+
+struct arc_item {
+       TAILQ_ENTRY(arc_item) qlink;    
+       int iid;        /* Unique identifier for item */
+       bool cached;
+};
+
+struct arc_list {
+       TAILQ_HEAD(arc_qhead, arc_item) qhead;
+       int qcount;
+       struct arc_item item_list[ARCLEN]; /* We use static memory for demo purposes */
+};
+
+inline static struct arc_item * allocmember(struct arc_list *);
+inline static void freemember(struct arc_item *);
+inline static struct arc_item * findmember(struct arc_list *, struct arc_item *);
+
+#define init_arc_item(/* &struct arc_item [] */ _arc_item_addr,                
        \
+                     /* int */_iid, /*bool*/_cached)   do {                    
\
+               struct arc_item *_arc_item = _arc_item_addr;                    
\
+               assert(_arc_item != NULL);                              \
+               _arc_item->iid = _iid;                                          
\
+               _arc_item->cached = _cached;                                    
\
+       } while (/*CONSTCOND*/0)
+
+#define lengthof(/* struct arc_list* */_arc_list) (_arc_list->qcount)
+#define memberof(/* struct arc_list* */_arc_list,                              
\
+                /* struct arc_item* */_arc_item)                               
\
+       ((findmember(_arc_list,                                                 
\
+                    _arc_item) != TAILQ_END(&_arc_list->qhead)) ?              
\
+        true : false)
+
+/*
+ * We follow spin's channel rx/tx semantics here: "send" means
+ * duplicate onto queue ("_arc_list.item_list!_arc_item.iid"), and
+ * receive means "duplicate" from queue but leave the data source on
+ * queue  ("_arc_list.item_list?<_arc_item.iid>").
+ *
+ * It is an error to addMRU() on a full queue. Likewise, it is an
+ * error to readLRU() on an empty queue. The verifier is expected to
+ * have covered any case where these happen. We use assert()s to
+ * indicate the error.
+ *
+ * Note: We use spin's channel mechanism in our design, only because
+ * it's the easiest. We could have chosen another
+ * mechanism/implementation, if the semantics were specified
+ * differently due to, e.g., convention, architectural or efficiency
+ * reasons.
+ */
+#define addMRU(/* struct arc_list* */_arc_list,                                
        \
+              /* struct arc_item* */_arc_item) do {                            
\
+               assert(_arc_list->qcount < ARCLEN);                             
\
+               struct arc_item *aitmp; aitmp = allocmember(_arc_list);         
\
+               assert(aitmp != NULL);                                          
\
+               *aitmp = *_arc_item;                                            
\
+               TAILQ_INSERT_TAIL(&_arc_list->qhead, aitmp, qlink);             
\
+               _arc_list->qcount++;                                            
\
+       } while (/*CONSTCOND*/0)
+
+#define readLRU(/* struct arc_list* */_arc_list,                               
\
+               /* struct arc_item* */_arc_item) do {                           
\
+               assert(!TAILQ_EMPTY(&_arc_list->qhead));                        
\
+               assert(_arc_item != NULL);                                      
\
+               *_arc_item = *(struct arc_item *)TAILQ_FIRST(&_arc_list->qhead); \
+       } while (/*CONSTCOND*/0)
+               
+#define delLRU(/* struct arc_list* */_arc_list)                                
        \
+       if (!TAILQ_EMPTY(&_arc_list->qhead)) {                                  
\
+               struct arc_item *aitmp; aitmp = TAILQ_FIRST(&_arc_list->qhead); 
\
+               TAILQ_REMOVE(&_arc_list->qhead, aitmp, qlink);                  
\
+               freemember(aitmp);                                              
\
+               _arc_list->qcount--; assert(_arc_list->qcount >= 0);            
\
+       } else assert(false)
+
+#define delitem(/* struct arc_list* */_arc_list,                               
\
+               /* struct arc_item* */_arc_item) do {                           
\
+       struct arc_item *aitmp;                                                 
\
+       aitmp = findmember(_arc_list, _arc_item);                               
\
+       if (aitmp != TAILQ_END(&_arc_list->qhead)) {                            
\
+               TAILQ_REMOVE(&_arc_list->qhead, aitmp, qlink);                  
\
+               freemember(aitmp);                                              
\
+               _arc_list->qcount--; assert(_arc_list->qcount >= 0);            
\
+       }                                                                       
\
+       } while (/*CONSTCOND*/0)
+
+#define cachefetch(/* struct arc_item* */_arc_item) do {                       
\
+               _arc_item->cached = true; /* XXX:TODO */                        
\
+       } while (/*CONSTCOND*/0)
+
+#define cacheremove(/* struct arc_item* */_arc_item)  do {                     
\
+               _arc_item->cached = false;      /* XXX:TODO */                  
\
+       } while (/*CONSTCOND*/0)
+
+#define min(a, b) ((a < b) ? a : b)
+#define max(a, b) ((a > b) ? a : b)
+       
+/* These routines deal with our home-rolled mem management for the
+ * ghost cache directory memory embedded within statically defined
+ * struct arc_list buffers.
+ * Note that any pointers emerging from these should be treated as
+ * "opaque"/cookies - ie; they should not be assumed by other routines
+ * to have any specific properties (such as being part of any specific
+ * array etc.) They are solely for the consumption of these
+ * routines. Their contents however may be freely copied/written to.
+ */
+inline static struct arc_item *
+allocmember(struct arc_list *_arc_list)
+{
+       /* Search for the first unallocated item in given list */
+       struct arc_item *aitmp = NULL;
+       int i;
+       for (i = 0; i < ARCLEN; i++) {
+               if (_arc_list->item_list[i].iid == IID_INVAL) {
+                       assert(_arc_list->item_list[i].cached == false);
+                       aitmp = &_arc_list->item_list[i];
+                       break; /* first free slot will do */
+               }
+       }
+       return aitmp;
+}
+       
+inline static void
+freemember(struct arc_item *aip)
+{
+       assert(aip != NULL);
+       init_arc_item(aip, IID_INVAL, false);
+}      
+
+static inline struct arc_item *
+findmember(struct arc_list *_arc_list, struct arc_item *aikey)
+{
+       assert(_arc_list != NULL && aikey != NULL);
+       assert(aikey->iid != IID_INVAL);
+       struct arc_item *aitmp;
+       TAILQ_FOREACH(aitmp, &_arc_list->qhead, qlink) {
+                       if (aitmp->iid == aikey->iid) {
+                               return aitmp;
+                       }
+       }
+       return aitmp; /* returns TAILQ_END() on non-membership */
+}
+
+void ARC(struct arc_item * /* x_t */);
+void arc_init(void);
+
+#endif /* _ARC_H_ */
diff -urN arc.null/arc_queue/arc.pmh arc/arc_queue/arc.pmh
--- arc.null/arc_queue/arc.pmh  1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc.pmh       2023-09-14 08:04:48.032313330 +0000
@@ -0,0 +1,42 @@
+/* Spin process model for Adaptive Replacement Cache algorithm. Written by cherry */
+
+#ifndef _ARC_INC
+#define _ARC_INC
+
+#define C 64 /* Cache size - use judiciously - adds to statespace */
+
+#define ARCLEN (2 * C)
+
+#define IID_INVAL -1
+
+typedef arc_item {
+       int iid;    /* Unique identifier for item */
+       bool cached;
+};
+
+/* Note that we use the arc_item.iid as the member lookup handle to reduce state space */
+typedef arc_list {
+       chan item_list  = [ ARCLEN ] of { int }; /* A list of page items */
+};
+
+
+#define init_arc_item(_arc_item, _iid, _cached)                \
+       {                                       \
+               _arc_item.iid = _iid;                   \
+               _arc_item.cached = _cached;             \
+       }
+
+#define lengthof(_arc_list) len(_arc_list.item_list)
+#define memberof(_arc_list, _arc_item) _arc_list.item_list??[eval(_arc_item.iid)]
+#define addMRU(_arc_list, _arc_item) _arc_list.item_list!_arc_item.iid
+#define readLRU(_arc_list, _arc_item) _arc_list.item_list?<_arc_item.iid>
+#define delLRU(_arc_list) _arc_list.item_list?_
+#define delitem(_arc_list, _arc_item) if :: lengthof(_arc_list) > 0; _arc_list.item_list??eval(_arc_item.iid) :: else; skip; fi
+
+#define cachefetch(_arc_item) _arc_item.cached = true
+#define cacheremove(_arc_item) _arc_item.cached = false
+
+#define min(a, b) ((a < b) -> a : b)
+#define max(a, b) ((a > b) -> a : b)
+       
+#endif /* _ARC_INC */
\ No newline at end of file
diff -urN arc.null/arc_queue/arc_queue.c arc/arc_queue/arc_queue.c
--- arc.null/arc_queue/arc_queue.c      1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue.c   2023-09-11 11:38:49.468321594 +0000
@@ -0,0 +1,26 @@
+/* Mostly to pull in macros into functions, so that modex can parse them */
+
+#include "arc.h"
+
+void arc_addMRU(struct arc_list *Q,
+               struct arc_item *I)
+{
+       addMRU(Q, I);
+}
+
+void arc_readLRU(struct arc_list *Q,
+                struct arc_item *I)
+{
+       readLRU(Q, I);
+}
+
+void arc_delLRU(struct arc_list *Q)
+{
+       delLRU(Q);
+}
+
+void arc_delitem(struct arc_list *Q,
+                struct arc_item *I)
+{
+       delitem(Q, I);
+}
diff -urN arc.null/arc_queue/arc_queue.drv arc/arc_queue/arc_queue.drv
--- arc.null/arc_queue/arc_queue.drv    1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue.drv 2023-09-14 14:09:51.189956593 +0000
@@ -0,0 +1,43 @@
+/* Drive the procs */
+
+arc_item _x;
+
+init {
+
+       atomic { /* Load up Q */
+               I.iid = 0;
+               do
+               :: I.iid < ARCLEN ->
+                       p_arc_addMRU( /* Q, I */ );
+                       I.iid++;
+               :: I.iid >= ARCLEN ->
+                       break;
+               od
+       }
+
+       _x.iid = ARCLEN;
+
+       atomic { /* Read and remove from head */
+              do
+              :: _x.iid > (ARCLEN/2) ->
+                       _x.iid--;
+                       p_arc_readLRU( /* Q, I */ );
+                       assert(I.iid == (ARCLEN - (_x.iid + 1)));
+                       p_arc_delLRU( /* Q */);
+              :: _x.iid <= (ARCLEN/2) ->
+                       break;
+              od
+       }
+
+       atomic { /* Remove from tail */
+              do
+              :: _x.iid >= 0 -> /* delitem() semantics allow attempt on empty list */
+                       _x.iid--;
+                       I.iid = _x.iid + ARCLEN/2;
+                       p_arc_delitem( /* Q, I */);
+              :: _x.iid < 0 ->
+                       break;
+              od
+       }
+
+}
diff -urN arc.null/arc_queue/arc_queue_drv.c arc/arc_queue/arc_queue_drv.c
--- arc.null/arc_queue/arc_queue_drv.c  1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue_drv.c       2023-09-13 10:04:05.819212718 +0000
@@ -0,0 +1,52 @@
+#include "arc.h"
+#include "arc_queue.h"
+
+#include <stdio.h>
+
+static void arc_list_init(struct arc_list *_arc_list)
+{
+       TAILQ_INIT(&_arc_list->qhead);
+       _arc_list->qcount = 0;
+       
+       int i;
+       for(i = 0; i < ARCLEN; i++) {
+               init_arc_item(&_arc_list->item_list[i], IID_INVAL, false);
+       };
+}
+
+int main(void)
+{
+       struct arc_list Q;
+       struct arc_item I, _x;
+
+       arc_list_init(&Q);
+
+       I.iid = 0;
+
+       do {
+               printf("addMRU(): I.iid == %d\n", I.iid);
+               arc_addMRU(&Q, &I);
+               I.iid++;
+       } while(I.iid < ARCLEN);
+
+       _x.iid = ARCLEN;
+
+       do {
+               _x.iid--;
+               arc_readLRU(&Q, &I);
+               printf("readLRU(): I.iid == %d, _x.iid == %d\n", I.iid, _x.iid);
+               assert(I.iid == (ARCLEN - (_x.iid + 1)));
+               arc_delLRU(&Q);
+       } while(_x.iid > (ARCLEN/2));
+
+
+       do { /* Remove from tail */
+               _x.iid--;
+               I.iid = _x.iid + ARCLEN/2;
+               arc_delitem( &Q, &I);
+               printf("delitem(): I.iid == %d, _x.iid == %d\n", I.iid, _x.iid);
+       } while(_x.iid >= 0); /* delitem() semantics allow attempt on empty list */
+
+}
+
+
diff -urN arc.null/arc_queue/arc_queue.h arc/arc_queue/arc_queue.h
--- arc.null/arc_queue/arc_queue.h      1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue.h   2023-09-11 09:23:01.999474035 +0000
@@ -0,0 +1,16 @@
+#ifndef _ARC_QUEUE_H_
+#define _ARC_QUEUE_H_
+
+void arc_lengthof(struct arc_list *);
+
+void arc_memberof(struct arc_list *, struct arc_item *);
+
+void arc_addMRU(struct arc_list *, struct arc_item *);
+
+void arc_readLRU(struct arc_list *, struct arc_item *);
+
+void arc_delLRU(struct arc_list *);
+
+void arc_delitem(struct arc_list *, struct arc_item *);
+
+#endif /* _ARC_QUEUE_H_ */
diff -urN arc.null/arc_queue/arc_queue.inv arc/arc_queue/arc_queue.inv
--- arc.null/arc_queue/arc_queue.inv    1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue.inv 2023-09-14 08:51:59.057339095 +0000
@@ -0,0 +1,17 @@
+/* These are Linear Temporal Logic invariants (and constraints)
+ * applied over the statespace created by the promela
+ * specification. Correctness is implied by Logical consistency.
+ */
+ltl 
+{
+       /* Liveness - all threads, except control must finally exit */
+       eventually always (_nr_pr == 1) && 
+
+       eventually (len(Q.item_list) == ARCLEN) && /* We fill up Q first */
+
+       eventually always (len(Q.item_list) == 0) && /* We drain the Q in the end */
+       
+       true
+       
+
+}
\ No newline at end of file
diff -urN arc.null/arc_queue/arc_queue.pmh arc/arc_queue/arc_queue.pmh
--- arc.null/arc_queue/arc_queue.pmh    1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue.pmh 2023-09-13 08:02:28.647475795 +0000
@@ -0,0 +1,23 @@
+#define C 64
+
+#define ARCLEN (2 * C)
+
+#define IID_INVAL -1
+
+typedef arc_item {
+       int iid;
+}      
+
+/* Note that we use the arc_item.iid as the member lookup handle to reduce state space */
+typedef arc_list {
+       chan item_list  = [ ARCLEN ] of { int }; /* A list of page items */
+};
+
+#define TAILQ_INSERT_TAIL(_qh, _var, _ent) _qh ! _var.iid
+#define TAILQ_EMPTY(_qh) (len(_qh) == 0)
+#define TAILQ_REMOVE(_qh, _var, _ent) _qh ?? eval(_var.iid)
+#define TAILQ_FIRST(_qh, _var) _qh ? <_var.iid>
+#define TAILQ_END(_qh) IID_INVAL
+#define allocmember(_arc_list, _aitmp) skip; _aitmp.iid = IID_INVAL
+#define freemember(_arc_item) _arc_item.iid = IID_INVAL
+#define findmember(_arc_list, _arc_item) (TAILQ_EMPTY(_arc_list.item_list) -> TAILQ_END(_arc_list.item_list) : (_arc_list.item_list ?? [eval(_arc_item.iid)] -> _arc_item.iid : IID_INVAL))
diff -urN arc.null/arc_queue/arc_queue.pml arc/arc_queue/arc_queue.pml
--- arc.null/arc_queue/arc_queue.pml    1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue.pml 2023-09-14 07:01:26.549121004 +0000
@@ -0,0 +1,29 @@
+/* This model fronts the "equivalence" model which is exported.
+ * The idea here is to drive it identically in such a way that
+ * invariants hold from both the extracted, as well as the
+ * hand-crafted model.
+ */
+
+int qcount;
+arc_item I;
+arc_list Q;
+
+inline p_arc_delitem()
+{
+       delitem(Q, I);
+}
+
+inline p_arc_delLRU()
+{
+       delLRU(Q);
+}
+
+inline p_arc_readLRU()
+{
+       readLRU(Q, I);
+}
+
+inline p_arc_addMRU()
+{
+       addMRU(Q, I);
+}
\ No newline at end of file
diff -urN arc.null/arc_queue/arc_queue.prx arc/arc_queue/arc_queue.prx
--- arc.null/arc_queue/arc_queue.prx    1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue.prx 2023-09-13 07:51:32.144705226 +0000
@@ -0,0 +1,51 @@
+// Spin model extractor harness written by cherry
+//
+%F arc_queue.c
+%X -i arc_addMRU
+%X -i arc_readLRU
+%X -i arc_delLRU
+%X -i arc_delitem
+%H
+// arc.h glue
+#define bool int
+#define false 0
+#define _SYS_QUEUE_H_ /* Don't expand queue.h macros, as we model them */
+
+%%
+//%C  // c_code {}
+//%%
+//%D // c_cdecl {}
+//%%
+%L
+// We use spin primitives and data objects.
+// See %P Below
+NonState       hidden  Q
+NonState       hidden  I
+NonState       hidden  aitmp
+
+
+Substitute             c_code [Q] { Q->qcount--; }      qcount--
+Substitute             c_code [Q] { Q->qcount++; }      qcount++
+Substitute             c_code [(Q->qcount>=0)] { ; }    assert(qcount >= 0)
+Substitute             c_code { aitmp=findmember(Q,I); }       aitmp.iid=findmember(Q, I)
+Substitute             c_expr { (aitmp!=TAILQ_END((&(Q->qhead)))) }    (aitmp.iid != TAILQ_END(Q.item_list))
+Substitute             c_code [Q] { TAILQ_REMOVE((&(Q->qhead)),aitmp,qlink); } TAILQ_REMOVE(Q.item_list, aitmp, _)
+Substitute             c_code { freemember(aitmp); }   freemember(aitmp)
+Substitute             c_expr { (!TAILQ_EMPTY((&(Q->qhead)))) }        (!TAILQ_EMPTY(Q.item_list))
+Substitute             c_code [(!TAILQ_EMPTY((&(Q->qhead))))] { ; }    assert((!TAILQ_EMPTY(Q.item_list)))
+Substitute             c_code [(I!=NULL)] { ; }        assert(I.iid != IID_INVAL)
+Substitute             c_code [Q && (struct arc_item *)TAILQ_FIRST((&(Q->qhead))) && I] { (*I)=(*((struct arc_item *)TAILQ_FIRST((&(Q->qhead))))); }   TAILQ_FIRST(Q.item_list, I)
+Substitute             c_code [(Q->qcount<(2*64))] { ; }       assert(qcount < ARCLEN)
+Substitute             c_code [(aitmp!=NULL)] { ; }    assert(aitmp.iid == IID_INVAL)
+Substitute             c_code [I && aitmp] { (*aitmp)=(*I); }  aitmp.iid = I.iid
+Substitute             c_code [Q] { TAILQ_INSERT_TAIL((&(Q->qhead)),aitmp,qlink); }    TAILQ_INSERT_TAIL(Q.item_list, aitmp, _); aitmp.iid = IID_INVAL
+Substitute             c_code { aitmp=allocmember(Q); }        allocmember(Q.item_list, aitmp)
+Substitute             c_code [Q] { aitmp=TAILQ_FIRST((&(Q->qhead))); }        TAILQ_FIRST(Q.item_list, aitmp)
+%%
+
+%P
+int qcount;
+hidden arc_item aitmp;
+arc_item I;
+arc_list Q;
+%%
\ No newline at end of file
diff -urN arc.null/arc_queue/Makefile arc/arc_queue/Makefile
--- arc.null/arc_queue/Makefile 1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/Makefile      2023-09-14 14:07:27.103171180 +0000
@@ -0,0 +1,62 @@
+# Equivalence verification
+# We attempt to verify that the arc queue implementation in C is consistent with its model.
+# Note that the simplified model is in arc_queue/arc.pmh and the C
+# implementation equivalent model is in arc_queue/arc_queue.pmh
+#
+# Thus, spin-gen: uses arc.pmh as the model interface, whereas
+# modex-gen: uses arc_queue.pmh
+
+spin-gen: arc_queue.pml arc_queue.drv arc_queue.inv
+       cp arc_queue.pml model #mimic modex
+       cat arc.pmh model > spinmodel.pml;cat arc_queue.drv >> spinmodel.pml;cat arc_queue.inv >> spinmodel.pml;
+       spin -am spinmodel.pml
+
+spin-build: #Could be spin-gen or modex-gen
+       cc -DVECTORSZ=65536 -o pan pan.c
+
+all: spin-gen spin-build prog
+
+# Verification related targets.
+spin-run: spin-build
+       ./pan -a # generates spinmodel.pml.trail on error
+
+# You run the trace only if the spin run above failed and created a trail
+spin-trace: spinmodel.pml.trail
+       spin -t spinmodel.pml -p -g # -p (statements) -g (globals) -l (locals) -s (send) -r (recv)
+       ./pan -r spinmodel.pml.trail -g
+
+# Build the implementation
+prog: arc_queue.c arc.h
+       cc -g -o arc_queue arc_queue_drv.c arc_queue.c
+
+# Modex Extracts from C code to 'model' - see arc_queue.prx
+modex-gen: arc_queue.prx arc_queue.c
+       modex -v -w arc_queue.prx
+       cat arc_queue.pmh model > spinmodel.pml;cat arc_queue.drv >> spinmodel.pml;cat arc_queue.inv >> spinmodel.pml;
+       spin -a spinmodel.pml #Sanity check
+
+# Housekeeping
+modex-gen-clean:
+       rm -f spinmodel.pml # Our consolidated model file
+       rm -f _spin_nvr.tmp # Never claim file
+       rm -f model # modex generated intermediate "model" file
+       rm -f pan.* # Spin generated source files
+       rm -f _modex* # modex generated script files
+       rm -f  *.I *.M
+
+prog-clean:
+       rm -f arc_queue
+spin-run-clean:
+       rm -f spinmodel.pml.trail
+
+spin-build-clean:
+       rm -f pan
+
+spin-gen-clean:
+       rm -f spinmodel.pml # Our consolidated model file
+       rm -f _spin_nvr.tmp # Never claim file
+       rm -f model # Intermediate "model" file
+       rm -f pan.* # Spin generated source files
+
+clean: modex-gen-clean spin-gen-clean spin-build-clean spin-run-clean prog-clean
+       rm -f *~
+       rm -f *~
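For orientation, assuming spin and modex are installed in $PATH (the prerequisites above), the intended workflow through these targets is roughly the following; this is a usage sketch, not part of the patch:

```shell
# Hand-written model path:
make spin-gen       # arc.pmh + arc_queue.pml + driver + invariants -> spinmodel.pml
make spin-run       # build pan and run the verifier
make spin-trace     # only if a spinmodel.pml.trail was produced

# Extracted-from-C model path:
make modex-gen      # modex extracts 'model' from arc_queue.c via arc_queue.prx
make spin-build && make spin-run

# The C implementation itself:
make prog && ./arc_queue
```

Both paths feed the same arc_queue.drv and arc_queue.inv, which is what makes the run an equivalence check between the hand-written and extracted models.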
diff -urN arc.null/arc_queue/queue.h arc/arc_queue/queue.h
--- arc.null/arc_queue/queue.h  1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/queue.h       2023-09-11 04:48:17.669520444 +0000
@@ -0,0 +1,655 @@
+/*     $NetBSD: queue.h,v 1.76 2021/01/16 23:51:51 chs Exp $   */
+
+/*
+ * Copyright (c) 1991, 1993
+ *     The Regents of the University of California.  All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of the University nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ *     @(#)queue.h     8.5 (Berkeley) 8/20/94
+ */
+
+#ifndef        _SYS_QUEUE_H_
+#define        _SYS_QUEUE_H_
+
+/*
+ * This file defines five types of data structures: singly-linked lists,
+ * lists, simple queues, tail queues, and circular queues.
+ *
+ * A singly-linked list is headed by a single forward pointer. The
+ * elements are singly linked for minimum space and pointer manipulation
+ * overhead at the expense of O(n) removal for arbitrary elements. New
+ * elements can be added to the list after an existing element or at the
+ * head of the list.  Elements being removed from the head of the list
+ * should use the explicit macro for this purpose for optimum
+ * efficiency. A singly-linked list may only be traversed in the forward
+ * direction.  Singly-linked lists are ideal for applications with large
+ * datasets and few or no removals or for implementing a LIFO queue.
+ *
+ * A list is headed by a single forward pointer (or an array of forward
+ * pointers for a hash table header). The elements are doubly linked
+ * so that an arbitrary element can be removed without a need to
+ * traverse the list. New elements can be added to the list before
+ * or after an existing element or at the head of the list. A list
+ * may only be traversed in the forward direction.
+ *
+ * A simple queue is headed by a pair of pointers, one the head of the
+ * list and the other to the tail of the list. The elements are singly
+ * linked to save space, so elements can only be removed from the
+ * head of the list. New elements can be added to the list after
+ * an existing element, at the head of the list, or at the end of the
+ * list. A simple queue may only be traversed in the forward direction.
+ *
+ * A tail queue is headed by a pair of pointers, one to the head of the
+ * list and the other to the tail of the list. The elements are doubly
+ * linked so that an arbitrary element can be removed without a need to
+ * traverse the list. New elements can be added to the list before or
+ * after an existing element, at the head of the list, or at the end of
+ * the list. A tail queue may be traversed in either direction.
+ *
+ * For details on the use of these macros, see the queue(3) manual page.
+ */
+
+/*
+ * Include the definition of NULL only on NetBSD because sys/null.h
+ * is not available elsewhere.  This conditional makes the header
+ * portable and it can simply be dropped verbatim into any system.
+ * The caveat is that on other systems some other header
+ * must provide NULL before the macros can be used.
+ */
+#ifdef __NetBSD__
+#include <sys/null.h>
+#endif
+
+#if defined(_KERNEL) && defined(DIAGNOSTIC)
+#define QUEUEDEBUG     1
+#endif
+
+#if defined(QUEUEDEBUG)
+# if defined(_KERNEL)
+#  define QUEUEDEBUG_ABORT(...) panic(__VA_ARGS__)
+# else
+#  include <err.h>
+#  define QUEUEDEBUG_ABORT(...) err(1, __VA_ARGS__)
+# endif
+#endif
+
+/*
+ * Singly-linked List definitions.
+ */
+#define        SLIST_HEAD(name, type)                                          
\
+struct name {                                                          \
+       struct type *slh_first; /* first element */                     \
+}
+
+#define        SLIST_HEAD_INITIALIZER(head)                                    
\
+       { NULL }
+
+#define        SLIST_ENTRY(type)                                               
\
+struct {                                                               \
+       struct type *sle_next;  /* next element */                      \
+}
+
+/*
+ * Singly-linked List access methods.
+ */
+#define        SLIST_FIRST(head)       ((head)->slh_first)
+#define        SLIST_END(head)         NULL
+#define        SLIST_EMPTY(head)       ((head)->slh_first == NULL)
+#define        SLIST_NEXT(elm, field)  ((elm)->field.sle_next)
+
+#define        SLIST_FOREACH(var, head, field)                                 
\
+       for((var) = (head)->slh_first;                                  \
+           (var) != SLIST_END(head);                                   \
+           (var) = (var)->field.sle_next)
+
+#define        SLIST_FOREACH_SAFE(var, head, field, tvar)                      
\
+       for ((var) = SLIST_FIRST((head));                               \
+           (var) != SLIST_END(head) &&                                 \
+           ((tvar) = SLIST_NEXT((var), field), 1);                     \
+           (var) = (tvar))
+
+/*
+ * Singly-linked List functions.
+ */
+#define        SLIST_INIT(head) do {                                           
\
+       (head)->slh_first = SLIST_END(head);                            \
+} while (/*CONSTCOND*/0)
+
+#define        SLIST_INSERT_AFTER(slistelm, elm, field) do {                   
\
+       (elm)->field.sle_next = (slistelm)->field.sle_next;             \
+       (slistelm)->field.sle_next = (elm);                             \
+} while (/*CONSTCOND*/0)
+
+#define        SLIST_INSERT_HEAD(head, elm, field) do {                        
\
+       (elm)->field.sle_next = (head)->slh_first;                      \
+       (head)->slh_first = (elm);                                      \
+} while (/*CONSTCOND*/0)
+
+#define        SLIST_REMOVE_AFTER(slistelm, field) do {                        
\
+       (slistelm)->field.sle_next =                                    \
+           SLIST_NEXT(SLIST_NEXT((slistelm), field), field);           \
+} while (/*CONSTCOND*/0)
+
+#define        SLIST_REMOVE_HEAD(head, field) do {                             
\
+       (head)->slh_first = (head)->slh_first->field.sle_next;          \
+} while (/*CONSTCOND*/0)
+
+#define        SLIST_REMOVE(head, elm, type, field) do {                       
\
+       if ((head)->slh_first == (elm)) {                               \
+               SLIST_REMOVE_HEAD((head), field);                       \
+       }                                                               \
+       else {                                                          \
+               struct type *curelm = (head)->slh_first;                \
+               while(curelm->field.sle_next != (elm))                  \
+                       curelm = curelm->field.sle_next;                \
+               curelm->field.sle_next =                                \
+                   curelm->field.sle_next->field.sle_next;             \
+       }                                                               \
+} while (/*CONSTCOND*/0)
+
+
+/*
+ * List definitions.
+ */
+#define        LIST_HEAD(name, type)                                           
\
+struct name {                                                          \
+       struct type *lh_first;  /* first element */                     \
+}
+
+#define        LIST_HEAD_INITIALIZER(head)                                     
\
+       { NULL }
+
+#define        LIST_ENTRY(type)                                                
\
+struct {                                                               \
+       struct type *le_next;   /* next element */                      \
+       struct type **le_prev;  /* address of previous next element */  \
+}
+
+/*
+ * List access methods.
+ */
+#define        LIST_FIRST(head)                ((head)->lh_first)
+#define        LIST_END(head)                  NULL
+#define        LIST_EMPTY(head)                ((head)->lh_first == 
LIST_END(head))
+#define        LIST_NEXT(elm, field)           ((elm)->field.le_next)
+
+#define        LIST_FOREACH(var, head, field)                                  
\
+       for ((var) = ((head)->lh_first);                                \
+           (var) != LIST_END(head);                                    \
+           (var) = ((var)->field.le_next))
+
+#define        LIST_FOREACH_SAFE(var, head, field, tvar)                       
\
+       for ((var) = LIST_FIRST((head));                                \
+           (var) != LIST_END(head) &&                                  \
+           ((tvar) = LIST_NEXT((var), field), 1);                      \
+           (var) = (tvar))
+
+#define        LIST_MOVE(head1, head2, field) do {                             
\
+       LIST_INIT((head2));                                             \
+       if (!LIST_EMPTY((head1))) {                                     \
+               (head2)->lh_first = (head1)->lh_first;                  \
+               (head2)->lh_first->field.le_prev = &(head2)->lh_first;  \
+               LIST_INIT((head1));                                     \
+       }                                                               \
+} while (/*CONSTCOND*/0)
+
+/*
+ * List functions.
+ */
+#if defined(QUEUEDEBUG)
+#define        QUEUEDEBUG_LIST_INSERT_HEAD(head, elm, field)                   
\
+       if ((head)->lh_first &&                                         \
+           (head)->lh_first->field.le_prev != &(head)->lh_first)       \
+               QUEUEDEBUG_ABORT("LIST_INSERT_HEAD %p %s:%d", (head),   \
+                   __FILE__, __LINE__);
+#define        QUEUEDEBUG_LIST_OP(elm, field)                                  
\
+       if ((elm)->field.le_next &&                                     \
+           (elm)->field.le_next->field.le_prev !=                      \
+           &(elm)->field.le_next)                                      \
+               QUEUEDEBUG_ABORT("LIST_* forw %p %s:%d", (elm),         \
+                   __FILE__, __LINE__);                                \
+       if (*(elm)->field.le_prev != (elm))                             \
+               QUEUEDEBUG_ABORT("LIST_* back %p %s:%d", (elm),         \
+                   __FILE__, __LINE__);
+#define        QUEUEDEBUG_LIST_POSTREMOVE(elm, field)                          
\
+       (elm)->field.le_next = (void *)1L;                              \
+       (elm)->field.le_prev = (void *)1L;
+#else
+#define        QUEUEDEBUG_LIST_INSERT_HEAD(head, elm, field)
+#define        QUEUEDEBUG_LIST_OP(elm, field)
+#define        QUEUEDEBUG_LIST_POSTREMOVE(elm, field)
+#endif
+
+#define        LIST_INIT(head) do {                                            
\
+       (head)->lh_first = LIST_END(head);                              \
+} while (/*CONSTCOND*/0)
+
+#define        LIST_INSERT_AFTER(listelm, elm, field) do {                     
\
+       QUEUEDEBUG_LIST_OP((listelm), field)                            \
+       if (((elm)->field.le_next = (listelm)->field.le_next) !=        \
+           LIST_END(head))                                             \
+               (listelm)->field.le_next->field.le_prev =               \
+                   &(elm)->field.le_next;                              \
+       (listelm)->field.le_next = (elm);                               \
+       (elm)->field.le_prev = &(listelm)->field.le_next;               \
+} while (/*CONSTCOND*/0)
+
+#define        LIST_INSERT_BEFORE(listelm, elm, field) do {                    
\
+       QUEUEDEBUG_LIST_OP((listelm), field)                            \
+       (elm)->field.le_prev = (listelm)->field.le_prev;                \
+       (elm)->field.le_next = (listelm);                               \
+       *(listelm)->field.le_prev = (elm);                              \
+       (listelm)->field.le_prev = &(elm)->field.le_next;               \
+} while (/*CONSTCOND*/0)
+
+#define        LIST_INSERT_HEAD(head, elm, field) do {                         
\
+       QUEUEDEBUG_LIST_INSERT_HEAD((head), (elm), field)               \
+       if (((elm)->field.le_next = (head)->lh_first) != LIST_END(head))\
+               (head)->lh_first->field.le_prev = &(elm)->field.le_next;\
+       (head)->lh_first = (elm);                                       \
+       (elm)->field.le_prev = &(head)->lh_first;                       \
+} while (/*CONSTCOND*/0)
+
+#define        LIST_REMOVE(elm, field) do {                                    
\
+       QUEUEDEBUG_LIST_OP((elm), field)                                \
+       if ((elm)->field.le_next != NULL)                               \
+               (elm)->field.le_next->field.le_prev =                   \
+                   (elm)->field.le_prev;                               \
+       *(elm)->field.le_prev = (elm)->field.le_next;                   \
+       QUEUEDEBUG_LIST_POSTREMOVE((elm), field)                        \
+} while (/*CONSTCOND*/0)
+
+#define LIST_REPLACE(elm, elm2, field) do {                            \
+       if (((elm2)->field.le_next = (elm)->field.le_next) != NULL)     \
+               (elm2)->field.le_next->field.le_prev =                  \
+                   &(elm2)->field.le_next;                             \
+       (elm2)->field.le_prev = (elm)->field.le_prev;                   \
+       *(elm2)->field.le_prev = (elm2);                                \
+       QUEUEDEBUG_LIST_POSTREMOVE((elm), field)                        \
+} while (/*CONSTCOND*/0)
+
+/*
+ * Simple queue definitions.
+ */
+#define        SIMPLEQ_HEAD(name, type)                                        
\
+struct name {                                                          \
+       struct type *sqh_first; /* first element */                     \
+       struct type **sqh_last; /* addr of last next element */         \
+}
+
+#define        SIMPLEQ_HEAD_INITIALIZER(head)                                  
\
+       { NULL, &(head).sqh_first }
+
+#define        SIMPLEQ_ENTRY(type)                                             
\
+struct {                                                               \
+       struct type *sqe_next;  /* next element */                      \
+}
+
+/*
+ * Simple queue access methods.
+ */
+#define        SIMPLEQ_FIRST(head)             ((head)->sqh_first)
+#define        SIMPLEQ_END(head)               NULL
+#define        SIMPLEQ_EMPTY(head)             ((head)->sqh_first == 
SIMPLEQ_END(head))
+#define        SIMPLEQ_NEXT(elm, field)        ((elm)->field.sqe_next)
+
+#define        SIMPLEQ_FOREACH(var, head, field)                               
\
+       for ((var) = ((head)->sqh_first);                               \
+           (var) != SIMPLEQ_END(head);                                 \
+           (var) = ((var)->field.sqe_next))
+
+#define        SIMPLEQ_FOREACH_SAFE(var, head, field, next)                    
\
+       for ((var) = ((head)->sqh_first);                               \
+           (var) != SIMPLEQ_END(head) &&                               \
+           ((next = ((var)->field.sqe_next)), 1);                      \
+           (var) = (next))
+
+/*
+ * Simple queue functions.
+ */
+#define        SIMPLEQ_INIT(head) do {                                         
\
+       (head)->sqh_first = NULL;                                       \
+       (head)->sqh_last = &(head)->sqh_first;                          \
+} while (/*CONSTCOND*/0)
+
+#define        SIMPLEQ_INSERT_HEAD(head, elm, field) do {                      
\
+       if (((elm)->field.sqe_next = (head)->sqh_first) == NULL)        \
+               (head)->sqh_last = &(elm)->field.sqe_next;              \
+       (head)->sqh_first = (elm);                                      \
+} while (/*CONSTCOND*/0)
+
+#define        SIMPLEQ_INSERT_TAIL(head, elm, field) do {                      
\
+       (elm)->field.sqe_next = NULL;                                   \
+       *(head)->sqh_last = (elm);                                      \
+       (head)->sqh_last = &(elm)->field.sqe_next;                      \
+} while (/*CONSTCOND*/0)
+
+#define        SIMPLEQ_INSERT_AFTER(head, listelm, elm, field) do {            
\
+       if (((elm)->field.sqe_next = (listelm)->field.sqe_next) == NULL)\
+               (head)->sqh_last = &(elm)->field.sqe_next;              \
+       (listelm)->field.sqe_next = (elm);                              \
+} while (/*CONSTCOND*/0)
+
+#define        SIMPLEQ_REMOVE_HEAD(head, field) do {                   \
+       if (((head)->sqh_first = (head)->sqh_first->field.sqe_next) == NULL) \
+               (head)->sqh_last = &(head)->sqh_first;                  \
+} while (/*CONSTCOND*/0)
+
+#define SIMPLEQ_REMOVE_AFTER(head, elm, field) do {                    \
+       if (((elm)->field.sqe_next = (elm)->field.sqe_next->field.sqe_next) \
+           == NULL)                                                    \
+               (head)->sqh_last = &(elm)->field.sqe_next;              \
+} while (/*CONSTCOND*/0)
+
+#define        SIMPLEQ_REMOVE(head, elm, type, field) do {             \
+       if ((head)->sqh_first == (elm)) {                               \
+               SIMPLEQ_REMOVE_HEAD((head), field);                     \
+       } else {                                                        \
+               struct type *curelm = (head)->sqh_first;                \
+               while (curelm->field.sqe_next != (elm))                 \
+                       curelm = curelm->field.sqe_next;                \
+               if ((curelm->field.sqe_next =                           \
+                       curelm->field.sqe_next->field.sqe_next) == NULL) \
+                           (head)->sqh_last = &(curelm)->field.sqe_next; \
+       }                                                               \
+} while (/*CONSTCOND*/0)
+
+#define        SIMPLEQ_CONCAT(head1, head2) do {                       \
+       if (!SIMPLEQ_EMPTY((head2))) {                                  \
+               *(head1)->sqh_last = (head2)->sqh_first;                \
+               (head1)->sqh_last = (head2)->sqh_last;          \
+               SIMPLEQ_INIT((head2));                                  \
+       }                                                               \
+} while (/*CONSTCOND*/0)
+
+#define        SIMPLEQ_LAST(head, type, field)                         \
+       (SIMPLEQ_EMPTY((head)) ?                                        \
+               NULL :                                                  \
+               ((struct type *)(void *)                                \
+               ((char *)((head)->sqh_last) - offsetof(struct type, field))))
+
+/*
+ * Tail queue definitions.
+ */
+#define        _TAILQ_HEAD(name, type, qual)                           \
+struct name {                                                          \
+       qual type *tqh_first;           /* first element */             \
+       qual type *qual *tqh_last;      /* addr of last next element */ \
+}
+#define TAILQ_HEAD(name, type) _TAILQ_HEAD(name, struct type,)
+
+#define        TAILQ_HEAD_INITIALIZER(head)                            \
+       { TAILQ_END(head), &(head).tqh_first }
+
+#define        _TAILQ_ENTRY(type, qual)                                \
+struct {                                                               \
+       qual type *tqe_next;            /* next element */              \
+       qual type *qual *tqe_prev;      /* address of previous next element */\
+}
+#define TAILQ_ENTRY(type)      _TAILQ_ENTRY(struct type,)
+
+/*
+ * Tail queue access methods.
+ */
+#define        TAILQ_FIRST(head)               ((head)->tqh_first)
+#define        TAILQ_END(head)                 (NULL)
+#define        TAILQ_NEXT(elm, field)          ((elm)->field.tqe_next)
+#define        TAILQ_LAST(head, headname) \
+       (*(((struct headname *)(void *)((head)->tqh_last))->tqh_last))
+#define        TAILQ_PREV(elm, headname, field) \
+       (*(((struct headname *)(void *)((elm)->field.tqe_prev))->tqh_last))
+#define        TAILQ_EMPTY(head)               (TAILQ_FIRST(head) == TAILQ_END(head))
+
+
+#define        TAILQ_FOREACH(var, head, field)                         \
+       for ((var) = ((head)->tqh_first);                               \
+           (var) != TAILQ_END(head);                                   \
+           (var) = ((var)->field.tqe_next))
+
+#define        TAILQ_FOREACH_SAFE(var, head, field, next)              \
+       for ((var) = ((head)->tqh_first);                               \
+           (var) != TAILQ_END(head) &&                                 \
+           ((next) = TAILQ_NEXT(var, field), 1); (var) = (next))
+
+#define        TAILQ_FOREACH_REVERSE(var, head, headname, field)       \
+       for ((var) = TAILQ_LAST((head), headname);                      \
+           (var) != TAILQ_END(head);                                   \
+           (var) = TAILQ_PREV((var), headname, field))
+
+#define        TAILQ_FOREACH_REVERSE_SAFE(var, head, headname, field, prev)    \
+       for ((var) = TAILQ_LAST((head), headname);                      \
+           (var) != TAILQ_END(head) &&                                 \
+           ((prev) = TAILQ_PREV((var), headname, field), 1); (var) = (prev))
+
+/*
+ * Tail queue functions.
+ */
+#if defined(QUEUEDEBUG)
+#define        QUEUEDEBUG_TAILQ_INSERT_HEAD(head, elm, field)          \
+       if ((head)->tqh_first &&                                        \
+           (head)->tqh_first->field.tqe_prev != &(head)->tqh_first)    \
+               QUEUEDEBUG_ABORT("TAILQ_INSERT_HEAD %p %s:%d", (head),  \
+                   __FILE__, __LINE__);
+#define        QUEUEDEBUG_TAILQ_INSERT_TAIL(head, elm, field)          \
+       if (*(head)->tqh_last != NULL)                                  \
+               QUEUEDEBUG_ABORT("TAILQ_INSERT_TAIL %p %s:%d", (head),  \
+                   __FILE__, __LINE__);
+#define        QUEUEDEBUG_TAILQ_OP(elm, field)                         \
+       if ((elm)->field.tqe_next &&                                    \
+           (elm)->field.tqe_next->field.tqe_prev !=                    \
+           &(elm)->field.tqe_next)                                     \
+               QUEUEDEBUG_ABORT("TAILQ_* forw %p %s:%d", (elm),        \
+                   __FILE__, __LINE__);                                \
+       if (*(elm)->field.tqe_prev != (elm))                            \
+               QUEUEDEBUG_ABORT("TAILQ_* back %p %s:%d", (elm),        \
+                   __FILE__, __LINE__);
+#define        QUEUEDEBUG_TAILQ_PREREMOVE(head, elm, field)            \
+       if ((elm)->field.tqe_next == NULL &&                            \
+           (head)->tqh_last != &(elm)->field.tqe_next)                 \
+               QUEUEDEBUG_ABORT("TAILQ_PREREMOVE head %p elm %p %s:%d",\
+                   (head), (elm), __FILE__, __LINE__);
+#define        QUEUEDEBUG_TAILQ_POSTREMOVE(elm, field)                 \
+       (elm)->field.tqe_next = (void *)1L;                             \
+       (elm)->field.tqe_prev = (void *)1L;
+#else
+#define        QUEUEDEBUG_TAILQ_INSERT_HEAD(head, elm, field)
+#define        QUEUEDEBUG_TAILQ_INSERT_TAIL(head, elm, field)
+#define        QUEUEDEBUG_TAILQ_OP(elm, field)
+#define        QUEUEDEBUG_TAILQ_PREREMOVE(head, elm, field)
+#define        QUEUEDEBUG_TAILQ_POSTREMOVE(elm, field)
+#endif
+
+#define        TAILQ_INIT(head) do {                                   \
+       (head)->tqh_first = TAILQ_END(head);                            \
+       (head)->tqh_last = &(head)->tqh_first;                          \
+} while (/*CONSTCOND*/0)
+
+#define        TAILQ_INSERT_HEAD(head, elm, field) do {                \
+       QUEUEDEBUG_TAILQ_INSERT_HEAD((head), (elm), field)              \
+       if (((elm)->field.tqe_next = (head)->tqh_first) != TAILQ_END(head))\
+               (head)->tqh_first->field.tqe_prev =                     \
+                   &(elm)->field.tqe_next;                             \
+       else                                                            \
+               (head)->tqh_last = &(elm)->field.tqe_next;              \
+       (head)->tqh_first = (elm);                                      \
+       (elm)->field.tqe_prev = &(head)->tqh_first;                     \
+} while (/*CONSTCOND*/0)
+
+#define        TAILQ_INSERT_TAIL(head, elm, field) do {                \
+       QUEUEDEBUG_TAILQ_INSERT_TAIL((head), (elm), field)              \
+       (elm)->field.tqe_next = TAILQ_END(head);                        \
+       (elm)->field.tqe_prev = (head)->tqh_last;                       \
+       *(head)->tqh_last = (elm);                                      \
+       (head)->tqh_last = &(elm)->field.tqe_next;                      \
+} while (/*CONSTCOND*/0)
+
+#define        TAILQ_INSERT_AFTER(head, listelm, elm, field) do {      \
+       QUEUEDEBUG_TAILQ_OP((listelm), field)                           \
+       if (((elm)->field.tqe_next = (listelm)->field.tqe_next) !=      \
+           TAILQ_END(head))                                            \
+               (elm)->field.tqe_next->field.tqe_prev =                 \
+                   &(elm)->field.tqe_next;                             \
+       else                                                            \
+               (head)->tqh_last = &(elm)->field.tqe_next;              \
+       (listelm)->field.tqe_next = (elm);                              \
+       (elm)->field.tqe_prev = &(listelm)->field.tqe_next;             \
+} while (/*CONSTCOND*/0)
+
+#define        TAILQ_INSERT_BEFORE(listelm, elm, field) do {           \
+       QUEUEDEBUG_TAILQ_OP((listelm), field)                           \
+       (elm)->field.tqe_prev = (listelm)->field.tqe_prev;              \
+       (elm)->field.tqe_next = (listelm);                              \
+       *(listelm)->field.tqe_prev = (elm);                             \
+       (listelm)->field.tqe_prev = &(elm)->field.tqe_next;             \
+} while (/*CONSTCOND*/0)
+
+#define        TAILQ_REMOVE(head, elm, field) do {                     \
+       QUEUEDEBUG_TAILQ_PREREMOVE((head), (elm), field)                \
+       QUEUEDEBUG_TAILQ_OP((elm), field)                               \
+       if (((elm)->field.tqe_next) != TAILQ_END(head))                 \
+               (elm)->field.tqe_next->field.tqe_prev =                 \
+                   (elm)->field.tqe_prev;                              \
+       else                                                            \
+               (head)->tqh_last = (elm)->field.tqe_prev;               \
+       *(elm)->field.tqe_prev = (elm)->field.tqe_next;                 \
+       QUEUEDEBUG_TAILQ_POSTREMOVE((elm), field);                      \
+} while (/*CONSTCOND*/0)
+
+#define TAILQ_REPLACE(head, elm, elm2, field) do {                     \
+        if (((elm2)->field.tqe_next = (elm)->field.tqe_next) !=        \
+           TAILQ_END(head))                                            \
+                (elm2)->field.tqe_next->field.tqe_prev =               \
+                    &(elm2)->field.tqe_next;                           \
+        else                                                           \
+                (head)->tqh_last = &(elm2)->field.tqe_next;            \
+        (elm2)->field.tqe_prev = (elm)->field.tqe_prev;                \
+        *(elm2)->field.tqe_prev = (elm2);                              \
+       QUEUEDEBUG_TAILQ_POSTREMOVE((elm), field);                      \
+} while (/*CONSTCOND*/0)
+
+#define        TAILQ_CONCAT(head1, head2, field) do {                  \
+       if (!TAILQ_EMPTY(head2)) {                                      \
+               *(head1)->tqh_last = (head2)->tqh_first;                \
+               (head2)->tqh_first->field.tqe_prev = (head1)->tqh_last; \
+               (head1)->tqh_last = (head2)->tqh_last;                  \
+               TAILQ_INIT((head2));                                    \
+       }                                                               \
+} while (/*CONSTCOND*/0)
+
+/*
+ * Singly-linked Tail queue declarations.
+ */
+#define        STAILQ_HEAD(name, type)                                 \
+struct name {                                                          \
+       struct type *stqh_first;        /* first element */             \
+       struct type **stqh_last;        /* addr of last next element */ \
+}
+
+#define        STAILQ_HEAD_INITIALIZER(head)                           \
+       { NULL, &(head).stqh_first }
+
+#define        STAILQ_ENTRY(type)                                      \
+struct {                                                               \
+       struct type *stqe_next; /* next element */                      \
+}
+
+/*
+ * Singly-linked Tail queue access methods.
+ */
+#define        STAILQ_FIRST(head)      ((head)->stqh_first)
+#define        STAILQ_END(head)        NULL
+#define        STAILQ_NEXT(elm, field) ((elm)->field.stqe_next)
+#define        STAILQ_EMPTY(head)      (STAILQ_FIRST(head) == STAILQ_END(head))
+
+/*
+ * Singly-linked Tail queue functions.
+ */
+#define        STAILQ_INIT(head) do {                                  \
+       (head)->stqh_first = NULL;                                      \
+       (head)->stqh_last = &(head)->stqh_first;                        \
+} while (/*CONSTCOND*/0)
+
+#define        STAILQ_INSERT_HEAD(head, elm, field) do {               \
+       if (((elm)->field.stqe_next = (head)->stqh_first) == NULL)      \
+               (head)->stqh_last = &(elm)->field.stqe_next;            \
+       (head)->stqh_first = (elm);                                     \
+} while (/*CONSTCOND*/0)
+
+#define        STAILQ_INSERT_TAIL(head, elm, field) do {               \
+       (elm)->field.stqe_next = NULL;                                  \
+       *(head)->stqh_last = (elm);                                     \
+       (head)->stqh_last = &(elm)->field.stqe_next;                    \
+} while (/*CONSTCOND*/0)
+
+#define        STAILQ_INSERT_AFTER(head, listelm, elm, field) do {     \
+       if (((elm)->field.stqe_next = (listelm)->field.stqe_next) == NULL)\
+               (head)->stqh_last = &(elm)->field.stqe_next;            \
+       (listelm)->field.stqe_next = (elm);                             \
+} while (/*CONSTCOND*/0)
+
+#define        STAILQ_REMOVE_HEAD(head, field) do {                    \
+       if (((head)->stqh_first = (head)->stqh_first->field.stqe_next) == NULL) \
+               (head)->stqh_last = &(head)->stqh_first;                \
+} while (/*CONSTCOND*/0)
+
+#define        STAILQ_REMOVE(head, elm, type, field) do {              \
+       if ((head)->stqh_first == (elm)) {                              \
+               STAILQ_REMOVE_HEAD((head), field);                      \
+       } else {                                                        \
+               struct type *curelm = (head)->stqh_first;               \
+               while (curelm->field.stqe_next != (elm))                \
+                       curelm = curelm->field.stqe_next;               \
+               if ((curelm->field.stqe_next =                          \
+                       curelm->field.stqe_next->field.stqe_next) == NULL) \
+                           (head)->stqh_last = &(curelm)->field.stqe_next; \
+       }                                                               \
+} while (/*CONSTCOND*/0)
+
+#define        STAILQ_FOREACH(var, head, field)                        \
+       for ((var) = ((head)->stqh_first);                              \
+               (var);                                                  \
+               (var) = ((var)->field.stqe_next))
+
+#define        STAILQ_FOREACH_SAFE(var, head, field, tvar)             \
+       for ((var) = STAILQ_FIRST((head));                              \
+           (var) && ((tvar) = STAILQ_NEXT((var), field), 1);           \
+           (var) = (tvar))
+
+#define        STAILQ_CONCAT(head1, head2) do {                        \
+       if (!STAILQ_EMPTY((head2))) {                                   \
+               *(head1)->stqh_last = (head2)->stqh_first;              \
+               (head1)->stqh_last = (head2)->stqh_last;                \
+               STAILQ_INIT((head2));                                   \
+       }                                                               \
+} while (/*CONSTCOND*/0)
+
+#define        STAILQ_LAST(head, type, field)                          \
+       (STAILQ_EMPTY((head)) ?                                         \
+               NULL :                                                  \
+               ((struct type *)(void *)                                \
+               ((char *)((head)->stqh_last) - offsetof(struct type, field))))
+
+#endif /* !_SYS_QUEUE_H_ */
diff -urN arc.null/Makefile arc/Makefile
--- arc.null/Makefile   1970-01-01 00:00:00.000000000 +0000
+++ arc/Makefile        2023-09-14 14:07:51.871667720 +0000
@@ -0,0 +1,92 @@
+# This set of spinroot related files were written by cherry
+# <c...@bow.st> in the Gregorian Calendar year AD.2023, in the month
+# of February that year.
+#
+# We have two specification files and a properties file (".inv")
+#
+# The properties file contains "constraint" sections
+# such as ltl or never claims (either or, not both).
+# The specification is divided into two files:
+# the file with suffix '.drv' is a "driver" which
+# instantiates processes that will ultimately "drive" the
+# models under test.
+# The file with the suffix '.pml' contains the process
+# model code, which, is intended to be the formal specification
+# for the code we are interested in writing in C.
+#
+# We process these files in slightly different ways during
+# the dev cycle, but broadly speaking, the idea is to create
+# a file called 'spinmodel.pml' which contains the final
+# model file that is fed to spin.
+#
+# Note that we use the model extractor tool "modex" to extract
+# the 'specification' from C code written to implement the model
+# defined above. We use a 'harness' file (see the file with the
+# suffix '.prx' below).
+#
+# Once the harness has been run, spinmodel.pml should be
+# synthesised and processed as usual.
+# 
+# The broad idea is that software dev starts by writing the spec
+# first, validating the model, and then implementing the model in
+# C, after which we come back to extract the model from the C file
+# and cross check our implementation using spin.
+#
+# If things go well, the constraints specified in the '.inv' file
+# should hold exactly for both the handwritten model, and the
+# extracted one.
+
+spin-gen: arc.pml arc.drv arc.inv
+       cp arc.pml model #mimic modex
+       cat arc_queue/arc.pmh model > spinmodel.pml;cat arc.drv >> spinmodel.pml;cat arc.inv >> spinmodel.pml;
+       spin -am spinmodel.pml
+
+spin-build: #Could be spin-gen or modex-gen
+       cc -DVECTORSZ=65536 -o pan pan.c
+
+all: spin-gen spin-build prog
+
+# Verification related targets.
+spin-run: spin-build
+       ./pan -a # Generates spinmodel.pml.trail on error
+
+# You run the trace only if the spin run above failed and created a trail
+spin-trace: spinmodel.pml.trail
+       spin -t spinmodel.pml -p -g # -p (statements) -g (globals) -l (locals) -s (send) -r (recv)
+       ./pan -r spinmodel.pml.trail -g
+
+# Build the implementation
+prog: arc.c arc_queue/arc.h
+       cc -g -o arc arc_drv.c arc.c
+
+# Modex Extracts from C code to 'model' - see arc.prx
+modex-gen: arc.prx arc.c
+       modex -v -w arc.prx
+       cat arc_queue/arc.pmh model > spinmodel.pml;cat arc.drv >> spinmodel.pml;cat arc.inv >> spinmodel.pml;
+       spin -a spinmodel.pml #Sanity check
+
+# Housekeeping
+modex-gen-clean:
+       rm -f spinmodel.pml # Our consolidated model file
+       rm -f _spin_nvr.tmp # Never claim file
+       rm -f model # modex generated intermediate "model" file
+       rm -f pan.* # Spin generated source files
+       rm -f _modex* # modex generated script files
+       rm -f  *.I *.M
+
+prog-clean:
+       rm -f arc
+spin-run-clean:
+       rm -f spinmodel.pml.trail
+
+spin-build-clean:
+       rm -f pan
+
+spin-gen-clean:
+       rm -f spinmodel.pml # Our consolidated model file
+       rm -f _spin_nvr.tmp # Never claim file
+       rm -f model # Intermediate "model" file
+       rm -f pan.* # Spin generated source files
+
+clean: modex-gen-clean spin-gen-clean spin-build-clean spin-run-clean prog-clean
+       rm -f *~
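
[For the "I want to try this" crowd: with spin and modex in $PATH, the dev cycle
the Makefile comments describe might look like the following. This is a sketch
using only the target names defined above; it assumes you run it from the arc/
directory of the patched tree.]

```shell
# Hand-written model: stitch arc.pmh + arc.pml + arc.drv + arc.inv
# into spinmodel.pml, build the pan verifier, and run it.
make spin-gen
make spin-run          # leaves spinmodel.pml.trail on a failed run

# Only if a trail was produced: replay the counterexample.
make spin-trace

# C implementation: build and run the toy driver.
make prog
./arc

# Extracted model: re-derive the model from arc.c via the .prx
# harness and cross-check it against the same invariants.
make modex-gen
make spin-build spin-run

make clean
```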
