Here i have added the snippet from my other thread, without making use of
it yet. This is lengthy and hence may be hard to keep working on. Its
length should shrink once I can integrate the parts after integrating the
algorithm.
import bisect

class Chunk:
    def __init__(self, start, end, data, height=0, leaf_count=1, age=0):
Here i have renamed Chunk.Entry to Region and it still runs
import bisect

class Chunk:
    def __init__(self, start, end, data, height=0, leaf_count=1, age=0):
        self.start = start
        self.end = end
        self.data = data
        self.height = height
        self.leaf_count = leaf_count
        self.age = age
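as a quick sketch of how i read these fields (the comments and the index-node usage are my guesses, not something stated in the thread):

```python
class Chunk:
    def __init__(self, start, end, data, height=0, leaf_count=1, age=0):
        self.start = start            # first offset this chunk covers
        self.end = end                # one past the last offset covered
        self.data = data              # leaf payload, or children for an index node
        self.height = height          # 0 for leaves, >0 for index nodes
        self.leaf_count = leaf_count  # leaves reachable at or below this chunk
        self.age = age                # flush generation this chunk was written in

# guessed usage: two leaves under one index node
left = Chunk(0, 4, b'abcd')
right = Chunk(4, 8, b'efgh')
root = Chunk(0, 8, [left, right], height=1, leaf_count=2)
```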
This is what i'm thinking of using as a base so as to reuse work when
changing the structure to use index regions between writes. It makes large
and shallow indexes atm.
import bisect

class Chunk:
    def __init__(self, start, end, data, height=0, leaf_count=1, age=0):
        self.start = start
contains bugfixes regarding missed writes and branches when merging.
i started looking into append-only behavior (writing only to the last
offset) and performance seems much worse than other implementations (every
root is increasingly wide), maybe because i am rebalancing the tree by
simply attach
- I added expanding of deep branches into every add, in an accumulating way
equivalent to recursion. This keeps the tree depth within the new bounds when
the leaf count drops, and now some branches are merging, covering that code.
- I made the test a little more pathological by shortening the random
writes.
next step for me is likely journaling what conditions would cover the
untested code, or what logic error comes from having it there
from collections import namedtuple
import bisect

class Chunk:
    def __init__(self, start, end, data, height=0, leaf_count=1, age=0):
        self.start = start
stable first version:
https://lists.cpunks.org/pipermail/cypherpunks/2022-July/102291.html
attached appears to have comparable performance to first version. i tried
to implement all the same approaches (did not check for certain), except
without all the O(n) simplifications and updating the index
> last stable version at
> https://lists.cpunks.org/pipermail/cypherpunks/2022-July/102291.html
>
> next step might be to change flushing so the index can be accumulated
> during writes or mini flushes, so as to be able to limit the total size of
> an index per flush if needed, without issue.
maybe
last stable version at
https://lists.cpunks.org/pipermail/cypherpunks/2022-July/102291.html
next step might be to change flushing so the index can be accumulated
during writes or mini flushes, so as to be able to limit the total size of
an index per flush if needed, without issue.
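a minimal sketch of what that accumulation could look like (the class and names are hypothetical, not the attached code): buffer index entries as writes come in, and emit a mini flush whenever the pending index would pass a size limit:

```python
# hypothetical sketch: accumulate index entries during writes and emit
# a mini flush whenever the pending index reaches a size limit
class MiniFlusher:
    def __init__(self, max_entries_per_flush=4):
        self.max_entries = max_entries_per_flush
        self.pending = []   # (offset, data) writes since the last flush
        self.flushes = []   # each flush is the list of entries it wrote

    def write(self, offset, data):
        self.pending.append((offset, data))
        if len(self.pending) >= self.max_entries:
            self.flush()

    def flush(self):
        if self.pending:
            self.flushes.append(self.pending)
            self.pending = []

w = MiniFlusher(max_entries_per_flush=2)
for off in range(5):
    w.write(off, b'x')
w.flush()
# 5 writes with a limit of 2 -> flushes of 2, 2, and 1 entries
```

this way no single flush ever has to write more index entries than the limit, whatever the write pattern was.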
probably import
last stable version at
https://lists.cpunks.org/pipermail/cypherpunks/2022-July/102291.html
although there can be further optimization, I think it makes sense to move
a little toward using it for me
next step might be to change flushing so the index can be accumulated
during writes or mini flushes
works now
I think there's a little redundancy in the use of the Chunk structure, but
I'm not certain
would make sense to add bisection to access spots in the sorted lists
rather than linear iteration
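for example, with bisect (the `starts` list and `region_index` name are made up for illustration, not from the attached code):

```python
import bisect

# hypothetical sorted list of region start offsets, as the index keeps them
starts = [0, 10, 25, 40, 80]

def region_index(offset):
    # rightmost region whose start <= offset, found in O(log n)
    # instead of scanning the list linearly
    return bisect.bisect_right(starts, offset) - 1

region_index(0)   # first region
region_index(24)  # still the region starting at 10
region_index(40)  # exactly on a boundary -> the region starting at 40
```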
the number of children per node is staying about equal to the tree depth,
across 1024 flushes of 1-32 lengthy writes
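one way to sanity-check that balance, under the assumption that fanout equals depth, is to find the smallest d with d ** d >= leaf count (hypothetical helper, not from the attached code):

```python
def min_balanced_fanout(n_leaves):
    # smallest d such that a tree with d children per node and depth d
    # can hold n_leaves, i.e. d ** d >= n_leaves
    d = 1
    while d ** d < n_leaves:
        d += 1
    return d

# 1024 flushes of up to 32 writes -> at most 32768 leaves,
# which a fanout-6, depth-6 tree already covers (6 ** 6 = 46656)
min_balanced_fanout(1024 * 32)
```

so if fanout and depth really track each other, both should stay small even for large leaf counts.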
On Sat, Jul 9, 2022, 8:56 AM Undiscussed Horrific Abuse, One Victim of Many
wrote:
> stable version:
> https://lists.cpunks.org/pipermail/cypherpunks/2022-July/102232.html
>
# block tree structure
# blocks written to are leaves
# depth is log2 leaves
# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.
stable version:
https://lists.cpunks.org/pipermail/cypherpunks/2022-July/102232.html
i find it cool how i found a solution here of placing asserts
practically every line. i'm thinking of how that would help stability
of things in general, and with some work maybe other logical projects.
# block tree structure
i believe this was the last one that ran without failure. it turned
the writes into a linked list:
https://lists.cpunks.org/pipermail/cypherpunks/2022-July/102232.html
now to progress through the dense bugs
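the linked-list form mentioned above can be sketched roughly like this (names and layout are hypothetical, not the attached code):

```python
# sketch of the linked-list form: the append-only log is a list, each
# write record stores (offset, data, index of the previous record), and
# reads walk the chain backwards until a record covers the offset
log = []
last = -1  # index of the most recent record, -1 for none

def write(offset, data):
    global last
    log.append((offset, data, last))
    last = len(log) - 1

def read(offset):
    i = last
    while i != -1:
        start, data, prev = log[i]
        if start <= offset < start + len(data):
            return data[offset - start]
        i = prev
    return 0  # unwritten offsets read as zero

write(0, b'aaaa')
write(2, b'bb')
read(2)  # the most recent write wins
```

this is simple and append-only, but reads are O(number of writes), which is what the tree index is for.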
# block tree structure
# blocks written to are leaves
# depth is log2 leaves
# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.
# consider we have an existing tree
# with say m flushes, containing n leaves
for leaf_path, leaf_offset, leaf_data in leaves:
    # copy over the previous index
    # with:
    # 1. for areas missing, reference the previous flush for data
    # 2. for sequences of more than one entry and max depth would not
# find leaf count and leaf depths
leaves = [*self.leaves()]
max_depth = len(leaves).bit_length()
current_entry = None
for leaf_depth, leaf_offset, leaf_data in leaves:
    # leaves that have depth that is too deep can be added
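a note on the depth bound used here: int.bit_length() is floor(log2(n)) + 1 for n > 0, so max_depth = len(leaves).bit_length() tracks log2 of the leaf count:

```python
import math

# bit_length() is floor(log2(n)) + 1 for positive n, so the allowed
# depth grows logarithmically with the number of leaves
for n in (1, 2, 7, 8, 1024):
    assert n.bit_length() == math.floor(math.log2(n)) + 1
```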
subgoal met, passes test
# block tree structure
# blocks written to are leaves
# depth is log2 leaves
# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.
# consider we have an existing tree
# with say m flushes, conta
# block tree structure
# blocks written to are leaves
# depth is log2 leaves
# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.
# consider we have an existing tree
# with say m flushes, containing n leaves (or m lea
here's another bug band-aid
this one got through the 3rd flush
# block tree structure
# blocks written to are leaves
# depth is log2 leaves
# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.
# consider we have an existing tree
i added debugging statements and fixed 1 bug, i think
now it seems it continues yielding leaves after it reaches the end
# block tree structure
# blocks written to are leaves
# depth is log2 leaves
# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.
i dropped the writes to 1/flush and it failed on the 3rd flush
more logic errors engaged
the first pass works now
thinking it does too many passes to actually make a graph, which is
fine for now since flushing after the first is failing.
# block tree structure
# blocks written to are leaves
# depth is log2 leaves
# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.
i failed to consolidate the writes before flushing.
here is a version with a different crash. the previous one was because the
leaf iterator did not handle sparse data, so i just filled the
structure with 0s before starting.
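the zero-fill workaround can be sketched like this (a flat bytearray stand-in, not the real structure):

```python
# back the structure with a zero-filled bytearray, so sparse regions
# read as 0 instead of being missing when leaves are iterated
size = 16
data = bytearray(size)  # all zeros to start

def write(offset, payload):
    data[offset:offset + len(payload)] = payload

write(4, b'hi')
bytes(data[:8])  # zeros around the write, never a gap
```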
# block tree structure
# blocks written to are leaves
# depth is log2 leaves
# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.
latest topical post below. code reattached with 1 change that does not
fix the run error.
On 7/7/22, Undiscussed Horrific Abuse, One Victim of Many
wrote:
> here's what i have now
>
> its intent is to test [the correctness] of iterating leaves when the
> structure is just appending, without using a tree.
wrong thread
i am receiving a mail bomb
i don't know how many mail bombs i have received, or how i used to
handle them when my mind was more together
i imagine it is normal to filter them out.
here's what i have now
its intent is to test [the correctness] of iterating leaves when the
structure is just appending, without using a tree.
there is presently a logic error demonstrated when it is run. there
may be many, many logic errors in it.
i called the confusing class name "Chunk". seem
On 7/7/22, Undiscussed Horrific Abuse, One Victim of Many
wrote:
> often i try to implement an append-only random-access data structure.
clarification: the intention of the structure under design is to be
stored on append-only media, but support random writes as well as
random reads. similar to
often i try to implement an append-only random-access data structure.
it's a fun puzzle for my confused mind. i never finish it, but today i
thought about it a little bit and i think over the years i might be
able to put together a simplified form and maybe make it exist some
day.
here is where i