I'm currently writing a spec for asynchronous processing of blockchain events 
using Nim channels. This might be interesting for you: 
[https://github.com/mratsim/blocksmith/blob/master/quarantine.md](https://github.com/mratsim/blocksmith/blob/master/quarantine.md)

The focus is on the quarantine service, which accepts blocks from the network, 
enqueues verification tasks in other workers, and then either drops the blocks 
if invalid or enqueues them in a DB if valid.

Some explanation:
    
    
    type
      Quarantine* = ptr object
        ## Quarantine service
        inNetworkBlocks: ptr Channel[QuarantinedBlock]             # In from network and when calling "resolve" on the quarantineDB
        inNetworkAttestations: ptr Channel[QuarantinedAttestation] # In from network and when calling "resolve" on the quarantineDB
        quarantineDB: QuarantineDB
        slashingDetector: SlashingDetectionAndProtection
        rewinder: Rewinder
        outSlashableBlocks: ptr Channel[QuarantinedBlock]
        outClearedBlocks: ptr Channel[ClearedBlock]
        outClearedAttestations: ptr Channel[ClearedAttestation]
        shutdown: Atomic[bool]

        # Internal result channels
        areBlocksCleared: seq[tuple[blck: QuarantinedBlock, chan: ptr Channel[bool], free: bool]]
        areAttestationsCleared: seq[tuple[att: QuarantinedAttestation, chan: ptr Channel[bool], free: bool]]

        logFile: string
        logLevel: LogLevel
    
    

Other workers/services have the addresses of your input channels; they 
asynchronously enqueue tasks in them for you to process.
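
A minimal sketch of that handoff, with an illustrative payload and setup code 
(none of these names are the spec's actual procs; the built-in Channel[T] 
needs --threads:on):

    # Sketch only: a producer thread enqueues work into the service's input channel.
    # `QuarantinedBlock` is simplified and the surrounding setup is illustrative.
    # Compile with: nim c -r --threads:on producer.nim
    type QuarantinedBlock = object
      slot: uint64

    var inNetworkBlocks: Channel[QuarantinedBlock]   # owned by the quarantine service
    inNetworkBlocks.open()

    proc networkWorker(chan: ptr Channel[QuarantinedBlock]) {.thread.} =
      # Another service only needs the channel's address to hand work over.
      chan[].send(QuarantinedBlock(slot: 42))

    var worker: Thread[ptr Channel[QuarantinedBlock]]
    createThread(worker, networkWorker, addr inNetworkBlocks)

    echo inNetworkBlocks.recv().slot   # the service dequeues it in its own event loop
    joinThread(worker)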

You run an event loop that dispatches incoming tasks, either handing them off 
to another worker or processing them in the current context. When you dispatch 
to another worker, you need to send along a result channel on which to receive 
the outcome. Once you have emptied your incoming queue, you loop over the 
result channels with `tryRecv` until you have either received everything or 
only pending/incomplete tasks are left. Then you restart.
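
A condensed sketch of that loop shape, assuming a single downstream verifier 
worker (every name here is illustrative, not taken from the spec):

    # Sketch of the event loop described above; all names are illustrative.
    # Channels must be opened at service startup; compile with --threads:on.
    type
      Task = object
        id: int
      Pending = tuple[task: Task, chan: ptr Channel[bool], free: bool]

    var
      inbox: Channel[Task]                            # our input channel
      toVerifier: Channel[(Task, ptr Channel[bool])]  # a worker we dispatch to
      pending: seq[Pending]
      shutdown = false

    proc eventLoop() =
      while not shutdown:
        # 1. Drain the incoming queue, dispatching each task with a result channel.
        while true:
          let (ok, task) = inbox.tryRecv()
          if not ok: break
          let res = createShared(Channel[bool])
          res[].open()
          toVerifier.send((task, res))
          pending.add((task: task, chan: res, free: false))

        # 2. Poll the result channels until only pending/incomplete tasks remain.
        var progress = true
        while progress:
          progress = false
          for p in pending.mitems:
            if p.free: continue
            let (done, valid) = p.chan[].tryRecv()
            if done:
              p.free = true                 # slot reusable (channel cleanup elided)
              progress = true
              echo p.task.id, (if valid: " cleared" else: " dropped")
        # 3. Restart from the top of the loop.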

Some platforms support nanosecond sleep. For IO operations, asyncdispatch gives 
you abstractions to wait on them, sleep properly, and be woken up as soon as 
they are ready. To limit the latency loss, you can use exponential backoff, for 
example starting at 1 ms and going up to 16 ms.
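
For instance, a sleep-based exponential backoff around the polling loop could 
look like the following sketch (std/os sleep has millisecond granularity; the 
Task type and proc name are made up):

    # Sketch: exponential backoff for an idle polling loop. Start at 1 ms,
    # double on every empty poll, cap at 16 ms, reset as soon as work arrives.
    import std/os   # sleep(milliseconds); compile with --threads:on for Channel[T]

    type Task = object
      id: int

    proc pollWithBackoff(chan: var Channel[Task]; shutdown: var bool) =
      var backoffMs = 1
      while not shutdown:
        let (ok, task) = chan.tryRecv()
        if ok:
          backoffMs = 1                       # work arrived: reset the backoff
          echo "processing task ", task.id    # handle the task here
        else:
          sleep(backoffMs)                    # idle: sleep, then double, capped at 16 ms
          backoffMs = min(backoffMs * 2, 16)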

More complex backoff strategies such as log-log-iterated backoff are possible 
as well; see the research notes on backoff strategies in Weave: 
[https://github.com/mratsim/weave/blob/943d04ae/weave/cross_thread_com/event_notifiers_and_backoff.md](https://github.com/mratsim/weave/blob/943d04ae/weave/cross_thread_com/event_notifiers_and_backoff.md)

Note that Weave's core building blocks are channels (I have 3 kinds: 
[https://github.com/mratsim/weave/tree/943d04ae/weave/cross_thread_com](https://github.com/mratsim/weave/tree/943d04ae/weave/cross_thread_com))
 and that Future/Flowvar == Channels in Weave (though with my own custom 
implementation), i.e. you can build an asynchronous application with just 
createThread and channels.
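
As a minimal end-to-end illustration of that idea, here is a toy 
request/response pipeline built from nothing but createThread and two channels 
(illustrative only, not Weave's API):

    # A tiny "asynchronous application" from createThread + channels alone.
    # Compile with: nim c -r --threads:on pingpong.nim
    type Request = object
      x: int

    var
      requests: Channel[Request]
      results: Channel[int]

    proc worker() {.thread.} =
      # The worker blocks on its input channel and pushes results back,
      # which is essentially what a Future/Flowvar resolves to.
      while true:
        let req = requests.recv()
        if req.x < 0: break            # negative value as a shutdown sentinel
        results.send(req.x * req.x)

    requests.open()
    results.open()

    var thr: Thread[void]
    createThread(thr, worker)

    for i in 1 .. 3:
      requests.send(Request(x: i))
    for i in 1 .. 3:
      echo results.recv()              # 1, 4, 9

    requests.send(Request(x: -1))      # stop the worker
    joinThread(thr)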
