actually lossylossless is merely lossy with object recognition
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T8c8ee84b385720a5-M76c14dfbf2fea1b4fc5ab634
Delivery options: https://agi.topicbox.com/groups/agi/subscription
ya there must be some existing hybrid out there
--
Explosive logorrhea, that's when you can't control your decompression.
--
Are there any unbiased lossy compression algorithms? I would speculate that
it's possible but practically assume there are none.
--
Kolmogorov Complexity is estimable, and that helps.
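One hedged way to make that concrete (everything below is illustrative, not from the thread): any lossless compressor yields a computable upper bound on Kolmogorov complexity, since the compressed length plus the fixed decompressor constitutes a description of the data.

```python
import zlib

def kc_upper_bound(data: bytes) -> int:
    """Length of a losslessly compressed encoding: a computable upper
    bound on K(data), up to the constant size of the decompressor."""
    return len(zlib.compress(data, 9))

structured = b"ab" * 1000          # highly regular
print(kc_upper_bound(structured))  # tiny compared to the 2000 raw bytes
```

The bound is one-sided: a small result proves the string is simple, but a large result never proves it is complex, since a better compressor might exist.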
--
On Thursday, October 10, 2019, at 2:26 PM, James Bowery wrote:
> KC can be approximated *only* with lossless compression.
Thanks for that valuable tidbit, but did you imply earlier that lossy and
lossless are mutually exclusive? I'm not convinced of that but am not an expert
even though I s
Well at the time I thought I had theoretically developed a general lossless
compression algorithm but then I wound up in the hospital because I drove
myself into the ground so it brings back stressful memories... I usually stop
at this point and listen to others...
--
yeah it doesn't matter they both have their place and can be mixed though there
is some crossover but that might just be special cases of each... still makes
you wonder though :) Why invite distraction
--
well lossy can be lossless but lossless can only be lossy when lossy is
lossless...
--
Yeah yeah yeah that's why cryptos puked after that fake news came out.
Recovering now.
--
Need an Airsoft version...
--
Don't forget the gun is good:
https://www.youtube.com/watch?v=TVakHZp5ZBE
--
The brain has its own language, we all have particular dynamic idiolects of
that. Natural language is shared symbols with commonly agreed upon
approximations transmitted inter-agently for re-rendering concepts. These
shared symbols are examples of lossylosslessness, like Chinese symbols lossily
Why is the human so quiet? Or are we supposed to fill in the blanks, being
conscious dice-playing beings?
--
On Wednesday, October 23, 2019, at 12:26 PM, keghnfeem wrote:
> How to stay focused for longer:
Nootropics
--
On Friday, October 11, 2019, at 3:14 PM, James Bowery wrote:
> Most urgently: To head off imminent civil war by, instead, resolving disputes
> over consequences of social policy to be resolved with the degree of lossless
> compression as the model selection criterion.
> 100 million people will di
We, humankind, create our perception of the universe. What it really is or
looks like is undetermined. We can make it whatever we want.
The knowledge structure of science is perpetually incomplete and looking
backwards in time often wrong but practical contemporarily.
Why is that?
Small progra
We are all subservient to buggy code.
Advice to newborns: Accept your predetermined role as a dispensable beta
tester of this computational world. Imperfection is why you have arrived here
and why you will leave someday.
--
Yes but the predictors are getting more and more accurate. All the body
language, micro-expressions, electrochemical and electromagnetic emissions,
historical big data, it will be nearly impossible to deceive… and then future
people will be modeled and predicted.
Your predicted future can be y
Yes but there is that adolescent rebellion to deal with. It could do the
opposite so reverse psychology might be needed.
--
Intelligence is percepted, and perception is intelligenced.
--
Magic is not absolute, it's local.
--
You need to have accurate ways to temporally disregard accuracy.
Multi-modelling based on computational resource availability, choosing between
quick and accurate.
--
All models are approximations and there is compute and time distance from the
modeled.
And then the model checker is important, I pursue dynamic models and dynamic
checking:
https://en.wikipedia.org/wiki/Model_checking
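A toy sketch of the linked idea, with all names invented for illustration: explicit-state model checking is exhaustive exploration of a transition system, verifying a safety property in every reachable state.

```python
# Tiny explicit-state model checker: breadth-first search over a
# transition relation, checking a safety property on every reachable
# state. (Illustrative only; real checkers like SPIN or NuSMV add
# temporal logic, symbolic state sets, abstraction, etc.)
from collections import deque

def check(initial, transitions, safe):
    """Return (True, None) if `safe` holds in every reachable state,
    else (False, first_bad_state_found)."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not safe(state):
            return False, state
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None

# Counter that wraps at 4: the property "counter stays below 5" holds.
ok, bad = check(0, lambda s: [(s + 1) % 4], lambda s: s < 5)
```

The same `check` call with a non-wrapping counter would return a counterexample state, which is the practically useful output of a model checker.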
--
Overrides to prevent madness, I would say using logic as an out-of-band
reasoner to limit it... but then the logic could distort.. argh...
--
If you model the unknown it is not exact except for what you know. So you have
to model check on what you know or model check your prediction... if that makes
sense... who knows... checking...
--
With mainstream AI it's a love-hate relationship. From nothing you can spin up
in minutes colossal compute resources in their clouds and then shut it down and
pay a small fee. It’s all scriptable.
“How I Learned to Stop Worrying and Love the Cloud”…
--
For that matter if you can prove that the universe can be modeled from a few
bits of cellular automata then all models are complete and exact?
--
Hand-picked models are fine but with general intelligence you need models
that can model multiple things. Your chess modeler might model a subset of all
games, for example chess, checkers, bridge, etc... Or that modeler might produce
model instances, one of which would be chess. Or say a hier
Yes I agree with that also, and doing that you still need some basic Model
Selection Criterion though it could be very simple.
--
Accomplishing that goal is much less daunting yet can lead to generalization
and provides valuable reward feedback to the engineers :)
Pre-pack like 50 models then SOM map from eye to model as selector?
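A minimal sketch of the "SOM map from eye to model as selector" idea, hedged: a nearest-prototype lookup stands in for a trained SOM, routing an input feature vector to one of the pre-packed models. All names and vectors here are invented for illustration.

```python
# Nearest-prototype model selector: a crude stand-in for a SOM that
# routes an input feature vector to one of N pre-packed models.
import math

def select_model(features, prototypes):
    """prototypes: {model_name: prototype_vector}. Returns the name
    whose prototype is nearest in Euclidean distance."""
    return min(prototypes,
               key=lambda name: math.dist(features, prototypes[name]))

prototypes = {
    "chess":    [1.0, 0.0, 0.0],
    "checkers": [0.0, 1.0, 0.0],
    "bridge":   [0.0, 0.0, 1.0],
}
choice = select_model([0.9, 0.1, 0.0], prototypes)
```

A real SOM would also adapt the prototypes online, which is what would let the robot store a new graph for the revolving door it had never seen.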
--
50, 100, whatever. Build and retain unrecognized model graphs in memory... robot
encounters a revolving door, the model isn't pre-packed so a SOM graph is stored
for future encounters. Robot stuck in the revolving door signals for assistance
and avoids the door next encounter. Just a fancy FSM.
It helps to architect within
On Monday, October 28, 2019, at 3:14 PM, Stefan Reich wrote:
> Uh... what?
Knowledge of science and technology doesn't progress evenly across society.
There are local minima and maxima. Some people can be deceived with scientific
"magic" or tricks of deception unknown to the observer.
Like the
On Monday, October 28, 2019, at 5:29 PM, immortal.discoveries wrote:
> Cavemen would think the iphone is *magic*.
> They'd be totally *confused *over how it works.
I don't know, they would be smarter than monkeys and monkeys use it no problem:
https://www.youtube.com/watch?v=K3T-uvSHfdo
--
On Monday, October 28, 2019, at 9:10 AM, doddy wrote:
> did anyone study
> google plaNet.
In Python, link: https://danijar.com/project/planet/
--
On Tuesday, October 29, 2019, at 5:51 AM, Stefan Reich wrote:
> > Lossless compression is ALREADY lossy compression, because after
> >compression, the data is missing, until you decompress it.
>
> That's super quotable. I'm still laughing
It's funny but at the same time I do believe analyzing th
This is serious!
The two spheres, lossy and lossless, have a capillary bridge consisting of:
Lossylosslessness and Losslesslossyness
The universe of compressors and the universe of data, loss depends on the data
AND compressor instance.
As an aside what do you call the equivalent of, in modelli
On Tuesday, October 29, 2019, at 3:06 PM, immortal.discoveries wrote:
> If we apply Lossy Compression on a text file that contains the string
> "2+2=4", it results in missing data because the new data is smaller in size
> (because of compression).
You are assuming something about the observer wh
Oh I see!
That's actually pretty creative. I don't think I ever thought of it that way.
--
Yes lossy effectively leaves it up to the observer and environment to
reconstruct missing detail.
--
What is the big picture lossy :) Everything is a piece of something else.
--
On Tuesday, October 29, 2019, at 12:25 PM, WriterOfMinds wrote:
> Lossylossless compression and losslesslossy compression may now join partial
> pregnancy, having and eating
> one's cake, and the acre of land between the ocean and the shore in the
> category of Things that Don't Exist.
>
When
I think that there are size ranges for things to happen. Regions of particulate
densities, cloud thicknesses, there are separatedness expanses to operate in
for many things. State changes are gradual in many cases though there is
definitely abruptness. Chaotic boundaries I suppose...
---
Well you could have a compressor that starts off lossless, then intelligently
decides that it needs to operate faster due to some criterion, and then
compresses particular less-important data branches lossily. Then it would fall
into the middle ground, no? A hybrid.
And vice versa, on decompressio
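That middle ground can be sketched in a few lines, hedged: assuming a per-chunk importance flag (invented for this illustration), important chunks go through lossless zlib and the rest through a crude lossy stage.

```python
import zlib

def hybrid_compress(chunks):
    """chunks: iterable of (data: bytes, important: bool).
    Important chunks go through lossless zlib; the rest are decimated
    4:1 as a stand-in for any real lossy stage."""
    out = []
    for data, important in chunks:
        if important:
            out.append((b"L", zlib.compress(data)))
        else:
            out.append((b"Y", data[::4]))   # lossy: keep every 4th byte
    return out

def hybrid_decompress(packed):
    result = []
    for tag, payload in packed:
        if tag == b"L":
            result.append(zlib.decompress(payload))
        else:
            # crude reconstruction: repeat each kept byte 4 times
            result.append(bytes(b for b in payload for _ in range(4)))
    return result

chunks = [(b"critical-header!", True), (b"x" * 16, False)]
restored = hybrid_decompress(hybrid_compress(chunks))
```

Note the second chunk happens to survive the lossy branch byte-for-byte because all its bytes are equal: a small instance of the thread's point that loss depends on the data AND the compressor instance.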
If lossy or lossless is crisp, an a priori or a posteriori definition cannot be
determined unless the complexity of all compressors is partitioned, but the
decompression results are not known until execution on all possible data...
which is impossible.
FWIW. So I suspect they're fuzzy and not
On Friday, November 01, 2019, at 3:48 PM, immortal.discoveries wrote:
> Death improves U.
Death. The inevitable lossy compression but if you have a soul it could be
lossylosslessness HEY!!!
--
Out of curiosity I did a little research and there are several existing hybrid
compressors, those that combine lossy and lossless methods. Those compressors
have various means of switching between two algorithms. One can understand the
reasons for this on some data like medical imaging, seismic,
On Saturday, November 02, 2019, at 9:05 PM, Matt Mahoney wrote:
> The vast majority of strings do not have any description that is shorter than
> the string itself, and you would have no way to know.
Very profound.
But once it is suspected it is expected! If all compressors were wired into a
s
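Matt's counting argument can be checked empirically, as a sketch: fewer than 2^n strings of length n have shorter descriptions, so typical random data does not compress, while regular data of the same size collapses.

```python
import os
import zlib

random_data = os.urandom(10_000)             # typical string: incompressible
text_like   = b"the quick brown fox " * 500  # same size, highly regular

print(len(zlib.compress(random_data, 9)))    # barely shrinks, if at all
print(len(zlib.compress(text_like, 9)))      # collapses to a small fraction
```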
One should always keep a little fear in his/her backpocket.
--
One entity's failure is another's success!
dog eat dog
--
Fearful of the lack of fear for the endless pursuit of perpetual motion?
Yes.
--
Partitioning into crisp boolean could be interpreted as pulling fear out of
your backpocket.
--
It would be interesting to Venn out all the AGI theories and see how they
overlap. Some people tout theirs against others (I won't mention any names
*cough cough* Google) but I don't do that...
--
On Monday, November 04, 2019, at 8:39 AM, Matt Mahoney wrote:
> JPEG and MPEG combine lossy and lossless compression, but we don't normally
> call them hybrid. Any compressor with at least one lossy stage is lossy.
> There is a sharp distinction between lossy and lossless. Either the
> decompres
Couple hybrids, there's more where they came from:
https://arxiv.org/abs/1804.02713
https://www.semanticscholar.org/paper/LOW-COMPLEXITY-HYBRID-LOSSY-TO-LOSSLESS-IMAGE-CODER-Krishnamoorthy-Rajavijayalakshmi/20657ef592513af2e4e2d6907295eb0e3dc206b0
--
On Monday, November 04, 2019, at 10:05 AM, rouncer81 wrote:
> So J.R. whats so good about hybrid compression?
Real-world issues where max compression isn't the goal but an efficient and
inter-communicable compression is. Things aren't as clean-cut as files on
disk.
---
On Monday, November 04, 2019, at 11:23 AM, rouncer81 wrote:
> and basicly what im doing is im reducing permutations by making everything
> more the same.
>
Increasing similarity.. within bounds... good one.
--
On Monday, November 04, 2019, at 12:36 PM, rouncer81 wrote:
> Lossylossnessness, total goldmine ill say again. Dont doubt it. :)
Picture this - when Charles Proteus Steinmetz proposed using imaginary numbers
for alternating current circuit analysis everyone attacked him and thought he
was coo-
On Monday, November 04, 2019, at 1:12 PM, rouncer81 wrote:
> I SAY AGAIN! THE SECRET TO THE SINGULARITY IS NOT GOING TO BE THAT HARD TO
> DO! if someone rickrolls it with some simple device, dead, killed it is by
> him. not very impressive anymore.
I agree. Also, it might be totally obvious
There are a number of compressors that categorize themselves as
"near-lossless".
For example:
https://arxiv.org/abs/1801.07987
https://arxiv.org/abs/1804.09963
--
Turns out there is an official category of "near-lossless" where you limit the
error.
http://web.stanford.edu/class/ee376a/files/scribes/lecture5.pdf
Still nonsense? GTFO
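The bounded-error definition in that lecture note can be sketched directly (illustrative only): uniform quantization with step 2d+1 keeps every reconstructed integer sample within d of the original, and d=0 degenerates to lossless.

```python
def near_lossless_encode(samples, d):
    """Quantize integer samples so reconstruction error never exceeds d
    (d = 0 is exactly lossless)."""
    step = 2 * d + 1
    return [round(s / step) for s in samples]

def near_lossless_decode(codes, d):
    step = 2 * d + 1
    return [c * step for c in codes]

samples  = [3, 14, 15, 92, 65, 35]
restored = near_lossless_decode(near_lossless_encode(samples, 2), 2)
# every restored value is within d = 2 of the original
```

Real near-lossless coders (e.g. the JPEG-LS near-lossless mode) quantize the prediction residual this way rather than the raw samples, but the error bound works the same.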
--
I'm at a loss for words I was once loss and now have been found.
--
On Monday, November 04, 2019, at 4:17 PM, James Bowery wrote:
> This is one reason I tend to perk up when someone comes along with a notion
> of complex valued recurrent neural nets.
Kind of interesting - deep compression in complex domain:
https://arxiv.org/abs/1903.02358
---
Yes, and another official category in the world of compression that fits under
the lossylosslessness umbrella is called "Perceptual Lossless". This is
different from "Near Lossless"; it is self-explanatory and can be visual,
audio, and one might imagine extending it to olfactory and ta
Question: Why don't the compression experts call near-lossless and
perceptual-lossless lossy?
Answer: Because you don't know. They could be either though admittedly high
probability lossy.
How do you know something is conscious? It could be perceptually conscious but
not really conscious.
So l
Good idea James. A lot of research going on with AGI and consciousness. Matt
may want to Google around a bit to get updated.
I do wonder Matt, if something is "perceptually lossless" why would you call
that marketing? You can't really call it lossy can you?
--
With consciousness I'm merely observing functional aspects and using that in
building an engineering model of general intelligence based on >1 agent. I feel
consciousness improves communication, is a component and is important. And even
with just one agent it's important IMO.
If you think about
On Wednesday, November 06, 2019, at 9:52 PM, Matt Mahoney wrote:
> The homunculus, or little person inside your head.
Or like Dennett's homuncular hordes. The power of the many.
--
On Wednesday, November 06, 2019, at 10:58 PM, immortal.discoveries wrote:
> Every day we kill bugs. Because we can't see them, nor do they look like us.
It's tough with insects and small creatures. Where does one draw the line? I
do think they have some consciousness perhaps AGI should have Ahim
Ha! I have the opposite problem, believing too much.
Like, I believe I can create an artificial mind based on an I Ching computer.
So tempted to drop everything and go for it. Who needs all this modern science
malarkey?
COME ON!! DO IT!!! DO IT NOW
-
On Thursday, November 07, 2019, at 10:30 AM, WriterOfMinds wrote:
> The compressed output still contains less information than the original,
> ergo, it is lossy.
Naturally if you have the original raw data to compare. You almost never do,
that’s why you compress. For example, some compressors bu
That worm coming out of the cricket was cringeworthy. Cymothoa exigua is
another.
It’s not the worm’s fault though, it’s just living its joyful and pleasurable
life to the fullest. And the cricket is being open and submissive.
I think there are nonphysical parasites that affect human beings...
On Thursday, November 07, 2019, at 11:34 PM, immortal.discoveries wrote:
> "consciousness" isn't a real thing and can't be tested in a lab...
hm... I don't know. It's kind of like doing generalized principal component
analysis on white noise. Something has to do it. Something has to do the
c
Perhaps we need definitions of stupidity. With all artificial intelligence
there is artificial stupidity? Take the diff and correlate to bliss
(ignorance). Blue pill me baby. Consumes less watts. More efficient? But
survival is negentropy. So knowledge is potential energy. Causal entropic force?
On Thursday, November 07, 2019, at 1:30 PM, WriterOfMinds wrote:
>> Re: John Rose: "It might be effectively lossless it’s not guaranteed to be
>> lossy."
> True. But I think the usual procedure is that unless the algorithm guarantees
> losslessness, you treat th
There will often be loss in perceptual lossless for some humans. Some humans
see in the dark and some have hypersensitive hearing, so the perceptual lossless
will be perceptually lossy to them. But it depends, do they want perceptual
lossless with unperceptual lossy or perceptual lossy with unperce
I think we're capable of distinguishing between p-zombie and zombie. That's why
they threw the p in front of it if you read the background.
Also there seems to be some sort of reluctance to incorporating p-zombie
concepts into engineering concepts by some individuals. As if philosophical
concep
Your post is becoming parasitically zombified by p-zombie impostors. Wait, can
you have a zombie p-zombie? Ooops... the rabbit hole watch the rabbit hole.
--
That's because you're "god" and the universe that you created is like this
little soft marble inside your head against a black background. If you zoom in
closely you can see me waving as I write this email, see me? "Helloo halloo I'm
here!!! Can you fix this sh*t please??"
--
We might go through a phase where our minds occupy the minds of robots, remote
control, before we get to AGI automating human labor. One person can occupy
many robots simultaneously. Multiple self-driving cars can be occupied by one
person. Imagine wireless connections to the brain to the intern
On Tuesday, November 12, 2019, at 11:07 AM, rouncer81 wrote:
> AGI is alot pointless, just like us, if all we end up doing is scoring chicks
> what the hell was the point of making us so intelligent???
Our destination is to emit AGI and AGI will emerge from us and then we become
entropy exhaust.
True. And why bother learning to write with your hand when you can just wave
the magical smartphone wand while emitting grunts?
It's like a purpose of AI is to suck the intelligence out of smart monkeys then
resell it when it's gone. Net effect? Mass subservient zombification with
parasitic AI
Hey look a partial taxonomy:
http://immortality-roadmap.com/zombiemap3.pdf
--
Don't want to beat a dead horse but I think with all this discussion we have
neglected describing the effects of... drum roll please:
*Quantum Lossylosslessness*
Feast your eyes on this article
https://phys.org/news/2019-11-quantum-physics-reality-doesnt.html
I was thinking this discovery could be used to speed up PCA related
eigenvector/eigenvalue computations:
https://arxiv.org/abs/1908.03795
Thoughts?
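For context, a hedged pure-Python sketch of the classical computation such a quantum speedup targets, restricted to the 2-D case: eigenvalues of a covariance matrix via the characteristic polynomial (the data and helper names are invented for illustration).

```python
import math

def cov2(xs, ys):
    """2x2 sample covariance matrix (sxx, sxy, syy) of paired data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return sxx, sxy, syy

def eig2(sxx, sxy, syy):
    """Eigenvalues of [[sxx, sxy], [sxy, syy]] via the quadratic formula
    on the characteristic polynomial; principal variance first."""
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(tr * tr / 4 - det)
    return tr / 2 + disc, tr / 2 - disc

xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.1, 1.9, 3.2, 3.8]
lam1, lam2 = eig2(*cov2(xs, ys))   # lam1 is the principal-component variance
```

For the correlated toy data above, nearly all the variance lands in lam1, which is exactly the structure PCA exploits; the quantum proposal aims to get at these eigenpairs faster for large matrices.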
--
I enjoyed reading that rather large paragraph. Reminded me of Beat writing with
an AGI/consciousness twist to it.
--
Errors are input, are ideas, and are an intelligence component. Optimal
intelligence has some error threshold and it's not always zero. In fact errors
in complicated environments enhance intelligence by adding a complexity
reference or sort of a modulation feed...
---
Compression is a subset of communication protocol. One to one, one to many,
many to one, and many to many. Including one to itself and even, none to none?
No communication is in fact communication. Why? Being conscious of no
communication is communication especially in a quantum sense.
---
On Monday, November 18, 2019, at 8:21 AM, A.T. Murray wrote:
> If anyone here assembled feels that the http://ai.neocities.org/Ghost.html in
> the machine should not be universally acknowledged as the Standard Model, let
> them speak up now.
It's just so hard for us mere mortals to read the code
Singularity being near is another form of immanentizing the eschaton
https://en.wikipedia.org/wiki/Immanentize_the_eschaton
Personally, I prefer heaven but many of those with strong scientific and
technical inclinations perhaps need something they can more relate to and
perhaps even create.
---
How about a similar app doing this across concepts instead of faces. Like..
isms. Create a topology of isms e.g. communism and capitalism then blend the
concepts for an observer using similar techniques of synthesis with little
knobs and sliders to facilitate a comprehension. And give the compr
More of a stream of consciousness with less emphasis on intelligence to allow
higher bandwidth thus higher error rate where errors are used to modulate but
allow abrupt changes of perspective.
--
Yes! So you paste text then adjust all the sliders to bring it into what you
want. Languages, dialects, pidgins, tonality, emotions,...
Then with text to speech adjusters for voice control.
--
Yeah yeah real-time font morphing need that too. Auto-morphs while you're
reading to convey more information. Tracks what words your eyes are on and
defocuses words farther away for a zooming perceptual effect.
You get to a word like "explosion" and it actually explodes when your eyes hit
it an
Could be a Bomb of Good eh?
--
On Tuesday, November 26, 2019, at 9:50 AM, rouncer81 wrote:
> Im with you on the religious crap, makes me friggen feel like a small ass
> kisser... its probably the truth tho but i hate it anyway.
You may have a point there, a disbelief in a god relegates one to kissing small
asses, Occam sty
Noise and error. That's where all the answers are, if the agent can afford to
redirect and expend computational resources, if the metacriterion convinces
through a perception of potential gain... muscling past the chaotic inflection
leveraging directed self-organizing potential.
--
On Tuesday, November 19, 2019, at 11:08 PM, TimTyler wrote:
> Animals are made out of cells due to historical constraints. Engineers
> don't have the same set of constraints. They often don't make things out
> of lots of tiny self-reproducing pieces.
It's not too late to start!
Maybe there's a para
On Sunday, November 24, 2019, at 5:47 PM, Matt Mahoney wrote:
> http://mattmahoney.net/dc/
>
> My paper on the cost of AI is probably the most relevant to this group.
I think this needs to be expanded upon with lossylosslessness and
losslesslossyness categories.
---
That's just dumb. Video games have been around for decades where people play
the computer and the computer can easily beat them but they are tuned for
competitive entertainment.
It's like saying I'm quitting running in marathons because machines are faster,
or let's cancel the memorize Pi compe