keghnfeem
Huffman coding for data compression.
    nibble system
https://www.techiedelight.com/huffman-coding/

              /\
           0 /  \ 1
                /  \
           10 /    \ 11
                   /  \
             110 /      \ 111
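A tree like the one above (codes 0, 10, 110, 111) can be produced with a standard heap-based Huffman builder. This is a generic sketch, not the code from the linked article; the example frequencies are made up:

```python
import heapq

def huffman_codes(freqs: dict) -> dict:
    """Build a prefix-free Huffman code table from symbol frequencies."""
    # Each heap entry: (total weight, tiebreak index, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)  # lightest subtree -> prefix "0"
        w2, _, c2 = heapq.heappop(heap)  # next lightest   -> prefix "1"
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

With skewed frequencies such as `{"a": 8, "b": 4, "c": 2, "d": 1}` this yields code lengths 1, 2, 3, 3, matching the tree shape above.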

DNA is a nibble-base number system.

00 could be A

So could be the following for now:
00 = A
01 = B
10 = C
11 = D

So if we had a saved file of randomly generated nibbles, and the average
amounts of A's, B's, C's, and D's are all equal, then the randomization is
perfect and there is no repeating pattern or clumping.
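A quick way to test the equal-distribution idea is to split a byte stream into 2-bit symbols under the A–D mapping above and compare their frequencies. A minimal sketch (the tolerance threshold is an arbitrary choice, and this checks only frequencies, not clumping):

```python
from collections import Counter

def two_bit_symbols(data: bytes):
    """Split each byte into four 2-bit symbols, mapped to A/B/C/D."""
    names = "ABCD"
    for byte in data:
        for shift in (6, 4, 2, 0):
            yield names[(byte >> shift) & 0b11]

def looks_random(data: bytes, tolerance: float = 0.05) -> bool:
    """True if A, B, C and D each appear in roughly equal amounts."""
    counts = Counter(two_bit_symbols(data))
    total = sum(counts.values())
    return all(abs(counts.get(s, 0) / total - 0.25) <= tolerance
               for s in "ABCD")
```

For example, a file of repeated `0b00011011` bytes contains exactly one of each symbol per byte and passes, while a file of zero bytes is all A's and fails.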

A nibble is a two-digit seed.
At 3:28 into the video:
https://www.youtube.com/watch?v=itaMNuWLzJo


A completely random nibble file would have only seed sequences of:

abcd abdc acbd acdb adbc adcb
bacd badc bcad bcda bdac bdca
cabd cadb cbad cbda cdab cdba
dabc dacb dbac dbca dcab dcba

or 24 seeds, which means complete chaos and maximum entropy. An index into
24 seeds is a five-bit value, while the seed abcd itself is eight bits in
size, so a chaos algorithm could be used to get even more compression.
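The eight-bit-to-five-bit step can be sketched by indexing into the 24 permutations. A hypothetical encoding, assuming each seed is one of the 4! orderings of abcd:

```python
from itertools import permutations

# All 4! = 24 orderings of the four symbols; an index fits in 5 bits,
# versus 8 bits (four 2-bit symbols) for the seed spelled out.
SEEDS = [''.join(p) for p in permutations("abcd")]

def seed_to_index(seed: str) -> int:
    """Encode an 8-bit seed as a 5-bit permutation index."""
    return SEEDS.index(seed)

def index_to_seed(i: int) -> str:
    """Decode the 5-bit index back to the seed."""
    return SEEDS[i]
```

The round trip `index_to_seed(seed_to_index(s))` recovers any seed, and every index is below 2**5 = 32.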

If Huffman coding is used again, the file would get bigger: the eight-bit
seed would turn into an eleven-bit seed value.


If the file were very compressible, then it would have a lot of seed fragments:

ab ac ad
ba bc bd
ca cb cd
da db dc                      12

abc acb abd adb acd adc
bac bca bad bda bcd bdc
cab cba cad cda cbd cdb
dab dba dac dca dbc dcb        24


When a steady value is bit-streamed into a detector and each sample is
subtracted from the previous one, the differences should be zero. BUT if you
look at it through a microscope, record the differences into a file, and they
bounce around within a two-bit value, and if there is an equal amount of
recorded a, b, c, and d values and no clumping of them, then perfect noise has
been found.
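The subtract-from-previous detector could look like this. A sketch under the assumption that samples are integers and perfect noise means exactly four distinct difference values in roughly equal amounts (clumping would need a further run-length check not shown here):

```python
from collections import Counter

def difference_stream(samples):
    """Subtract each sample from the previous one; a steady signal
    yields all zeros."""
    return [b - a for a, b in zip(samples, samples[1:])]

def is_perfect_noise(samples, tolerance: float = 0.1) -> bool:
    """True when the differences take exactly four distinct values
    (a two-bit range) in roughly equal amounts."""
    diffs = difference_stream(samples)
    counts = Counter(diffs)
    if not diffs or len(counts) != 4:
        return False
    total = len(diffs)
    return all(abs(c / total - 0.25) <= tolerance
               for c in counts.values())
```

A constant stream fails (its differences are all zero, one value only), while a stream whose differences cycle evenly through four values passes.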


Brownian noise. Ya, try to compress that.
Distribution analysis can be done with a nested traveling-window algorithm.
One window is your seed size: bit, nibble, byte, word, or other. The other is
the size of the window, or the amount of data taken in at a time. This will
give a model of many levels, but only dwell on the sweet spot that gives
meaningful information in P versus NP time.
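The nested traveling-window idea might be sketched as an outer window sliding over the data, with an inner pass counting seed-sized symbols inside each window. The seed size, window size, and step are free parameters, not values fixed by the post:

```python
from collections import Counter

def window_models(data: bytes, seed_bits: int = 2,
                  window: int = 64, step: int = 32):
    """Slide a window over the data; inside each window, count
    seed-sized symbols to model the local distribution."""
    mask = (1 << seed_bits) - 1
    models = []
    for start in range(0, max(len(data) - window + 1, 1), step):
        chunk = data[start:start + window]
        # Inner window: break each byte into seed_bits-wide symbols.
        symbols = [(byte >> shift) & mask
                   for byte in chunk
                   for shift in range(8 - seed_bits, -1, -seed_bits)]
        models.append((start, Counter(symbols)))
    return models
```

Scanning the per-window distributions for the window size where they stop changing would be one way to locate the "sweet spot" the post describes.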


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T07349206c4d4db02-M0afbc340faa767be3b2696fa