[agi] Oh boy....

2024-06-09 Thread Alan Grimes via AGI
I'm sorry for having to bring all these other subjects onto the list, but 
I really love AGI people and I want you guys to be ready for what's 
coming. As it turns out, the dipshits in Washington DC are both 
infinitely stupider and infinitely eviler than anyone has given them 
credit for; heck, even more so than would seem to be physically possible.


It has gotten so bad that they've driven Putin to the point where he 
believes he must provide a practical demonstration of a modern nuclear 
arsenal. He is now planning a tactical strike on the military targets 
that he feels are most threatening to his country. The tea-leaves point 
to July 18 (+/- 3 days) as the date of this event. No sane person wants 
this to happen. The question is whether the assholes in Washington will 
escalate. If not, then we will be spared.


In all cases, this will be a single-day event. IF YOU RECEIVE ANY KIND 
OF WARNING FROM RUSSIA THAT YOUR AREA IS TARGETED, HEED THAT WARNING!! 
MAKE SURE YOU ARE AT LEAST A FEW HUNDRED MILES FROM ANY SUCH TARGET ZONE.


--
You can't out-crazy a Democrat.
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tfad15c64a6d3c7ed-M6b228585adbaccff1265773e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Heads up

2024-05-17 Thread Alan Grimes via AGI

This is a PSA.

Just wanna tell you guys to lock your tray tables in their upright 
position, make sure your seatbelt is cinched tight and, um, assume the 
crash position.


Yeah, it's time.

Silver is at $31.43 which means it is decisively above the red line of 
$30. Which means the party has started. The pattern the price riggers 
have established is that their standard market interventions are always 
done on Wednesdays and emergency adjustments are sometimes done over the 
weekends. There is a small chance that they will, again, be able to push 
the price down into the $2X range. If we are still above $30 by midday 
Monday then consider this signal confirmed. This WILL bring down the 
entire financial system, all of it. Banks, derivatives, currencies, 
equities, debt instruments, all of it. Judging from the feel of things, 
I expect a CBDC to be introduced roughly the first week of June and then 
fail, along with the government itself, by the end of September.


https://silverprice.org/

Times will be tough as changes will be both rapid and dramatic. The 
chance of a Boogaloo as I was worrying about several years ago is low at 
this point, though a great many people will have severe and irrational 
emotional reactions that would normally be quite out of character for 
them. Be prepared for this! Yes there are guilty people out there, THEY 
MUST BE BROUGHT TO TRIAL!!! WE NEED TO DOCUMENT EVERYTHING THAT HAS BEEN 
DONE TO US, DON'T LET THEM TAKE THEIR SECRETS TO THEIR GRAVES!!! That is 
the only way we can restore human civilization to a healthy state. YOU 
will want to run out and lynch every single one of them; don't! We need 
information a hundred times more than vengeance! The public executions 
can begin the day after we are sure we have found all of the 
conspirators with none remaining in any position to hurt us again.


Once again, I am expecting 4-5 months of very limited food availability.

--
You can't out-crazy a Democrat.
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T92315ac1d2bf90d9-M03c28320d9d69bcb4e055ca5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] To whom it may concern.

2024-05-15 Thread Alan Grimes via AGI
I was banned from the singularity waiting room discord today for trying 
to issue a warning about an upcoming situation. When I am eventually 
proven right, I will not receive an apology, nor will I be re-admitted to 
the group. I'm sorry, but the people with control over these decisions 
are invariably the most ban-happy people you can find; they basically 
never have the patience to investigate, ask questions, or implement any 
kind of 3-strikes policy. The last thing I was allowed to say on the 
server was a call for trials instead of the lynch mobs that will be 
forming in the fall of this year...


--
You can't out-crazy a Democrat.
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T18515c565721a5fe-M89a285b75c48aeec253ec875
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Hey, looks like the goertzel is hiring...

2024-05-01 Thread Alan Grimes via AGI

 but not from this list. =|

Goertzel explains his need for library programmers for his latest 
brainfart. I think his concept has some serious flaws that will be 
extremely difficult to patch without already having AGI... Yes, they are 
theoretically patchable, but will said patches yield net benefits?


But, once again, it must be restated with the greatest emphasis that he 
did not consider the people on this list worth discussing these job 
opportunities with. It should also be noted that he has demonstrated a 
strong preference for third world slave labor over professional 
programmers who live in his own neighborhood.


https://www.youtube.com/watch?v=CPhiupj9jyQ

--
You can't out-crazy a Democrat.
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-M27cb5a3c960252de55e3a52a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FHI is shutting down

2024-04-21 Thread Alan Grimes via AGI

Matt Mahoney wrote:
Maybe because philosophy isn't real science, and Oxford decided FHI's 
funding would be better off spent elsewhere. You could argue that 
existential risk of human extinction is important, but browsing their 
list of papers doesn't give me a good feeling that they have produced 
anything important besides talk. What hypotheses have they tested?


Science is a branch of philosophy, classically referred to as "natural 
philosophy". A local science club was founded in 1871...


https://pswscience.org/about-psw/


--
You can't out-crazy a Democrat.
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M1c757ea607e123f2709de401
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Iran <> Israel, can AGI zealots do anything?

2024-04-17 Thread Alan Grimes via AGI

Keyvan M. Sadeghi wrote:


throw 18yo catgirls at it




Yeah I wonder if that actually solves it. The problem is they're too 
old to get it hard and too stupid to use Viagra.


It's a stage play. I think Iran is either a puppet regime or living 
under blackmail. The entire thing was done to cover up / distract from / 
give an excuse for the collapse of the banking system. Simultaneously, 
the market riggers ran 1.4 billion ounces of silver derivatives through 
the market to keep the price from rising above $30/oz.


https://www.bitchute.com/video/X7ysTPuYvvQJ/

--
You can't out-crazy a Democrat.
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9d6271053bbd3f3-Macfd34cb3082982a58054817
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Entering the frenzy.

2024-04-05 Thread Alan Grimes via AGI
Menlo Park VCs are connected to banks like the failed Silicon Valley 
Bank where the capital is set up to flow freely on insider agreements 
with the Federal Reserve. If you highlight your project with ESG, DEI, 
etc. you will get favored status and the money flow is relatively 
unlimited. Until more banks fail which is coming soon as there is a 
massive move towards bank centralization. Also, we are moving towards 
a war economy as the last vestiges of value in the currency are 
getting expended in defense of itself which makes one wonder if 
intelligent war machines have some value in having a consciousness.


Yes, you make some valid points. As you can see, I am having trouble 
keeping track of all of the moving pieces of the omnicrisis. The 
singularity is clearly #1 on the importance scale, while the banking 
crisis is probably next up on the timeline, followed by widespread food 
shortages over the summer and the Vaccine Epiphany that will probably be 
in motion late summer, followed by Terrible Retribution through the 
following months/years/decades as the Guilty are hunted down and removed 
from this mortal coil. The collapse of the US government is penciled in 
for roughly September.


The problems with the banks are why I haven't been actively job seeking 
the last few years. =\


I guess I'm just getting antsy...  There are a lot of questions about 
how AGI and proto-versions thereof will be deployed to the public. The 
course we seem to be on is that the politico-economic elites will 
drip-feed us with the mana from the AI with an eyedropper. =\ At least 
that's what Sam Altman seems to want to do.


I would tend to argue that a properly conscious AI system could 
potentially be substantially less dangerous than a powerful system that 
has substantial gaps in its perceptual capabilities.


I've gotten so hyped about even the proto-AGI systems that I want to blow 
$20k on my personal workstation here... (the existing mobo is failing; 
the sound chip burned out so I'm now using a GD SoundBlaster...). The 
last time I updated the BIOS was in 2020; it's still a 24-core 
Threadripper with 96GB DDR4. What I want is a 32-core Threadripper WS 
with 0.5TB of DDR5.


It's difficult to decide whether this is actually a good investment:

It could let me run more advanced systems locally earlier, but who says 
I won't need a $250k machine or more? Early prototype computronium 
could be available as early as 2027... Still, it probably is time to 
refresh my desktop here; the question is with what...


--
You can't out-crazy a Democrat.
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc72db78d6b880c68-Mfa42eb931a30a1d83fe4e94b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Entering the frenzy.

2024-04-04 Thread Alan Grimes via AGI
These days, news about AI topics is coming in at a frenzied pace. There 
is so much activity in the field at the moment that the only thing a 
reasonable person can do is hang on for dear life; only a lunatic would 
try to launch a new venture at this juncture.


So let me tell you about the venture I want to start. I would like to 
put together a lab / research venture to sprint to achieve machine 
consciousness. I think there is enough tech available these days to try 
out my theory of consciousness. For the sake of completing the project, 
all discussion is prohibited. If you mention the Hard Problem, then 
you're off the project, no discussion! I want to actually do this; go 
ruminate on hard problems for the next ten millennia, I don't care. You 
are allowed to argue with me, but I have absolute authority to shut down 
any argument with prejudice.


To test my theory of consciousness, we'll have to integrate a bunch of 
cutting-edge tech in a near real-time system. The goal is to produce a 
system that exhibits consciousness in a much stronger and more 
satisfying way than any competing system. The proposed consciousness 
solution probably won't solve sapience (which requires high-level 
reasoning), but just being able to have an LLM chat with the agent 
should be a fairly compelling experience, and that should be enough to 
get me more funding.


 I like money. 

It will require a good VR simulator, preferably multi-user / multi-agent, 
where other competing systems can be tested and compared. It will 
require the tight integration of a variety of cutting-edge systems and 
maybe a new algorithm or two that shouldn't be too tough. I heard that 
if you shook a tree in Menlo Park, a VC would fall out. Anyone know some 
good trees to shake?


I'm going to need to figure out the finances. I should have enough to 
travel, seek VC and stuff, and open an office for a few months, but 
that's about it. (The Silver Cowabunga play seems to be in motion so I 
could be fantabulously wealthy RSN), but still in need of AGI. Regarding 
the silver cowabunga play, BEWARE OF FALSE SELL SIGNALS!! Silver is your 
life-raft to the Other Side. Then it's time to liquidate and invest.


I think the project can reach working prototype stage for $70-100M. No 
idea how I would market a conscious NPC... Naturally, the next steps 
would be to implement more algorithmic ideas I have to get closer to 
full sapience, and ultimately ASI; then my focus will be on neural 
interfacing and other more shadowy aspects of my total world domination 
plans...  WHAT??? You don't have plans for total world domination??? 
What's the matter with you, get to scheming right this instant!!! Don't 
you know that super villains always have more HP than heroes? It's not 
about being a dick, it's about having five bars of reserve health...


--
You can't out-crazy a Democrat.
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc72db78d6b880c68-M56de6e6c60e1077abf218c90
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Singularity watch.

2024-03-25 Thread Alan Grimes via AGI
Ok, we have been in para-singularity mode for about a year now. What are 
the next steps?


I see two possibilities:

A. AGI cometh.  AGI is solved in an unambiguous way.

B. We enter a "takeoff" scenario where humans are removed from the 
upgrade cycle of AI hardware and software. We would start getting better 
hardware platforms and AI tools at some non-zero rate with non-zero 
improvements without doing anything... How far this could proceed 
without achieving AGI as a side-effect is unclear, as our human general 
intelligence appears to be an effect of the evolution-based improvement 
process that created us. At some point, even a relatively blind 
optimization process would discover the principles required for 
consciousness et al...


In any event it's time to get this party started... We are teetering on 
the edge of socioeconomic collapse and probably won't get another chance 
at this within my lifetime. =|


--
You can't out-crazy a Democrat.
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T75b708e761eaa016-Me444d1066a83ab043c0d1d6d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Bard

2024-01-29 Thread Alan Grimes via AGI
Hey, I tried out Bard last night. It was a lot smarter than the 
"copilot" feature of MS Edge. It held its end of the conversation very 
well, even though it got pretty deep into philosophical territory. I was 
a bit annoyed by its guardrails and the neo-communist rhetoric it was 
spewing. Lots of collectivist ethics that were more than a bit worrying.


https://g.co/bard/share/14bd15597838

I think the url is just bard.google.com

The limitation of that platform is that it doesn't have the deep web 
integration that MS Edge has. I.e., I couldn't get it to discuss my 
github page: https://github.com/AlonzoTG/palindromes23


Interestingly, GPT4 couldn't seem to comprehend the extremely large 
numbers that number theory code deals with, such as

24014998963383302600955162866787153652444049
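For what it's worth, the failure is probably about representation rather than arithmetic per se: an LLM sees a big integer as a string of tokens, while ordinary number-theory code manipulates it as an exact value. A minimal Python sketch of the contrast (the tokenization explanation is my own guess, not something GPT-4 reported):

```python
# Python integers have arbitrary precision, so number-theory code can
# work with this 44-digit value exactly -- no overflow, no rounding.
n = 24014998963383302600955162866787153652444049

print(len(str(n)))   # digit count: 44
print(n % 10)        # last digit: 9
```

An LLM, by contrast, has to reconstruct the value from a handful of multi-digit tokens, which is where these numbers tend to fall apart.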

--
You can't out-crazy a Democrat.
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb839f23cde3698b2-M29ca2bddd55f93a05e1b725d
Delivery options: https://agi.topicbox.com/groups/agi/subscription




Re: [agi] Re: Gemini

2023-12-08 Thread Alan Grimes via AGI

stefan.reich.maker.of.eye via AGI wrote:
People make a big deal about that, but I don't really think it's very 
relevant. Speech recognition is surely not the problem area. And if 
Gemini is too slow for realtime now, the next version definitely won't 
be. It's still an insanely impressive demo if the answers and the 
image recognition are real.


I don't know how much of the available information is actually... 
true... but the information that is there is enough for me to go out on 
a limb here and say that the thing is within a hair's breadth of being 
conscious. Obviously the thing requires a hell of a lot of examination, 
and it is a deeply flawed design WRT actually being conscious, but I 
think there is enough there to say a non-zero quantity of consciousness 
is present. I know we all want one of these: 
https://gmauthority.com/blog/gm/gm-engines/ls3/ but we can't ignore the 
fact that we might just have one of these 
https://www.flickr.com/photos/cv_dusty/4825158342 staring us in the face.


This is just too crazy. The fiscal armageddon is taking too long and 
it's squeezing the robot apocalypse; I want the former over with so I 
won't have to deal with tons of socioeconomic bullshit while the robot 
apocalypse is unfolding. =|


--
Don't let the moon-men get you! =P
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf3ea8b5ba6ba08d0-Ma7ae448df2bcf5ca4cc710e2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] By fire or by ice.

2023-12-06 Thread Alan Grimes via AGI
I bring up economic issues on this list occasionally because it is the 
air that we all breathe, the ocean we all swim in (metaphorically of 
course), and because biblical-scale events are in the pipeline.


My previous predictions were based on underlying trends but failed to 
account for the intentions of the existing central banks. My assumption 
was that the banks intended to bring in a CBDC on an accelerated 
timeline and thus would intervene only to steer the ongoing collapse of 
the banking system, and not to slow it down as much as they have been.


Apparently they are not actually ready to roll out the CBDC and are 
therefore trying to keep the existing system going a bit longer.


You need to understand that the world runs on debt. We live in a 
debt-based system. I don't fully understand it myself, but the system 
relies on an ever-increasing quantity of debt to give the illusion of 
functioning. This is absurd; we are now paying $1T/year on just interest 
payments, not roads, not bridges, not social programs, **INTEREST**. On 
the other side of the planet, foreign banks hold US treasury bills 
instead of gold as their reserves, which requires that the US government 
continuously go further into debt to manufacture those treasury bills... 
Of course the interest causes the debt to eventually explode and 
overwhelm the purchasers of that debt, but you are not supposed to think 
about that...
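The ever-increasing-debt mechanic described above is easy to see in a toy compounding model. This is an illustration of the feedback loop, not a forecast; the starting balance and interest rate are my own rough assumptions, chosen only so the first year's interest lands near the ~$1T/year figure mentioned here:

```python
# Toy model: if new borrowing must cover the interest on existing debt,
# the balance grows geometrically -- debt_{t+1} = debt_t * (1 + rate).
debt = 34.0   # starting debt in $T (assumed, rough order of magnitude)
rate = 0.03   # assumed average interest rate across the debt

for year in range(1, 6):
    interest = debt * rate
    debt += interest          # the interest itself is financed by new debt
    print(f"year {year}: interest ${interest:.2f}T, total debt ${debt:.2f}T")
```

Year one comes out to about $1.02T of interest under these assumptions, and the total only ever accelerates, which is the "illusion of functioning" the paragraph describes.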


The natural state of affairs is that as the credit rating of the USA 
continues to trend towards junk status, the foreign banks will unload 
their bonds for some other reserve, such as gold, which is the 
traditional reserve... In doing so, they will discover that many of the 
bonds they hold are actually counterfeit, and the simple act of trying 
to redeem so many trillions of dollars of federal bonds against so few 
trillions of federal reserve notes will cause the system to freeze. 
Let's call this the death-by-ice scenario.


Right now there is strong evidence that the federal reserve is actively 
printing currency to cover the failing bond auctions and probably 
"injecting liquidity" into the system to prevent it from freezing up. 
This is hugely inflationary and will lead to hyperinflation and 
currency collapse. This is the death-by-fire scenario.


Whether it is even possible for the fed to print that much money is an 
open question. I want to be on the other side of the financial collapse 
(fiscal armageddon) as soon as possible. Right now we are just waiting 
to see whether the old system will die by fire or by ice... =\


--
Don't let the moon-men get you! =P
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tdab60d6adcb6250c-M4b45d48bd7df2e1ece30fdad
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-04 Thread Alan Grimes via AGI

John Rose wrote:


In a nice way, not using gas or guns or bombs. It was a trial balloon 
developed over several decades and released to see how it would go. 
The shot that is, covid’s purpose was to facilitate the shots. It went 
quite well with little resistance. It took out 12 to 17 million lives 
according to conservative ACM estimates. I’ve seen other estimates 
much higher with the vax injuries in the 100’s of millions, not 
mentioning natality rates, disabilities and the yet to be made dead.


I'm hearing numbers up to 20 million...

It's been said that the collective IQ of humanity rises with every 
vaccine death... I'm still waiting for it to reach room temperature...


--
Don't let the moon-men get you! =P
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M7b1e5faaf0ce67ee81693c31
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Training excercises for AGI

2023-11-08 Thread Alan Grimes via AGI
My tiny little brain has gotten itself obsessed with the 8086 CPU 
because very few such computers were ever produced; most vintage 
computers (XT class) used the cut-down 8088 CPU. The advantage the true 
8086 had was the ability to use a 16-bit data bus. Yes, I realize that 
spending time developing such a machine is pretty stupid at this point. 
What actually does make sense is designing a series of exercises to 
develop and refine AGI.


So you could think of it in terms of a training work-book. The preface 
would be that these are exercises to develop and refine AGI systems, 
human interaction, and technical competence through a series of 
progressively more sophisticated tasks. I'm in the mood to think about 
hardware today, but the essential pattern of progressively more 
challenging tasks seems robust across domains. The difference between 
AGI instruction and human instruction is that human instruction focuses 
on developing and training small competencies and slowly building up. 
AGI processes information differently and can manage a deep task-stack, 
and therefore probably learns better by being given an over-arching goal 
and obtaining the information that would serve that goal.



## 1 ##

Design and build a computer based on the 8080 CPU. (The 8080 family 
of CPUs has been enormously popular for a long time, rivaling the 6802.)
The machine should be capable of being mounted in a commercially 
available chassis and reliably performing a task appropriate to its 
capacity.



This is a very loose spec, but it gives many opportunities for 
human-machine interaction where the user asks the AGI questions about 
the design, offers suggestions, or refines the request as things progress.



## 2 ##
Design an IBM PC-compatible computer according to standards established 
by the IBM PC-AT, using an 8086 CPU, 8087 math coprocessor, and 8042 
system management processor.  (Features of the PC-AT class machine that 
are not available on the 8086 processor may be omitted.) The board must 
fit in an ATX chassis, support 8- and 16-bit ISA peripherals, be able to 
load and run DOS and DOS applications, and must support existing 8- and 
16-bit expansion cards that don't require pins A20 - A23 or 286+ 
instructions to operate.



Basically, I'd continue in this manner until I was asking it to design 
machines that push the state of the art...
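The work-book pattern above is easy to make concrete as data: each exercise is an over-arching goal plus acceptance constraints, ordered by difficulty. A minimal Python sketch; every name here is illustrative, not from any existing framework:

```python
from dataclasses import dataclass, field

@dataclass
class Exercise:
    number: int                 # position in the curriculum (harder = later)
    goal: str                   # the over-arching goal handed to the AGI
    constraints: list[str] = field(default_factory=list)

# The two exercises from this post, encoded as curriculum entries.
workbook = [
    Exercise(1, "Design and build a computer based on the 8080 CPU",
             ["mounts in a commercially available chassis",
              "reliably performs a task appropriate to its capacity"]),
    Exercise(2, "Design an IBM PC-AT-compatible board around an 8086",
             ["fits an ATX chassis",
              "supports 8- and 16-bit ISA peripherals",
              "loads and runs DOS and DOS applications"]),
]

# Present the exercises in curriculum order.
for ex in sorted(workbook, key=lambda e: e.number):
    print(f"## {ex.number} ## {ex.goal} ({len(ex.constraints)} constraints)")
```

Extending the list toward state-of-the-art machines is then just appending entries, while the human-machine interaction happens around each goal rather than inside it.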



--
Don't let the moon-men get you! =P
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tcca741d95e5d38cc-M5d549838553433de9aa4c7b4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] kewl neuroscience updates:

2023-10-25 Thread Alan Grimes via AGI

This stuff upends some previous assumptions about neuroscience. (YAY!!!)

https://www.youtube.com/watch?v=24AsqE_eko0

--
Don't let the moon-men get you! =P
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tdf424307570f4504-M4eb19ae65c664e1647c9da71
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] The two brains hypothesis:

2023-10-11 Thread Alan Grimes via AGI
This video might be total bullshit, but if it's not, it's utterly 
mindboggling!


https://www.youtube.com/watch?v=sPGZSC8odIU


I don't know what to make of this.

Normally I consider myself a monist but if I willfully ignored potential 
evidence, I'd be an idiot. =\


What this seems to be saying is that some people may have access to a 
second mental substrate that exists "outside of the matrix" or adjacent 
to The Simulation. The organic brain, as conventionally understood, may 
or may not be capable of consciousness on its own. In all normal people, 
the bulk of thought is done by the brain, but if it is injured (or 
absent!) in a patient with access to this separate external brain, the 
external brain will take over. The external brain seems to have a high 
IQ (potentially more, but limited by human culture), and is able to 
perform most of the functions of an organic brain but has greatly 
reduced emotional response. But the external brain is invulnerable to 
most things in the human realm and can associate with a fresh young body 
on death, though the rate of this occurring seems to be on the order of 
one in hundreds of thousands or worse.


I don't know.

I think AGI research can continue conventionally but the above 
hypothesis must be strongly addressed before contemplating human 
enhancement or mind uploading.


--
Don't let the moon-men get you! =P
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tbba18f2053c913b4-M2a5256c158a018a6016dbbb4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] AGI and racism [was: Problem when trying to combine reinforcement learning with language models]

2023-08-21 Thread Alan Grimes via AGI

YKY (Yan King Yin, 甄景贤) wrote:
On Fri, Aug 18, 2023, 22:15 James Bowery > wrote:



Every second that ticks by without this happening is a crime
against humanity akin to Theodoric of York, Medieval Barber:

https://youtu.be/edIi6hYpUoQ?t=311



I know how to build AGI, I can almost do it alone, but I want to find 
some collaborators.  Which is why I brought up the problem of racism.  
I met some people here who would like to work with me but when I say 
the terms must include no racism they ask me why and then we were 
unable to proceed further.  And you said "every second that ticks by" 
is a waste of precious time.  So why don't you just say, "all right, 
no racism" ?  It seems that deep in some people's psyche, they would 
rather die than accept someone of a different race.



Well, a lot of us are in USA-istan where "racism" has taken on very 
communist overtones. It is basically a poisoned term that has been 
recognized as an attack against anyone who is not a rabid communist. 
Therefore any good and proper American wants nothing to do with anyone 
who uses that term, regardless of what their motives might be.


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tcf822b60238b0592-M489d356d2110d9cefddb9890
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] the making of mind.

2023-07-15 Thread Alan Grimes via AGI

immortal.discover...@gmail.com wrote:
AGI is any AI that can work on AI. GPT-4 seems close, and GPT-4 is 
Weak AGI, the calling point that we were looking for. I predict we get 
AGI by about 2026. Then ASI maybe 2 months after that, or in 2027 IOW.


https://www.youtube.com/watch?v=LAf0QnLFS7Q

--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T59f369fee7febd6d-Mde7aad7e4d12184a283d68eb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] the making of mind.

2023-07-12 Thread Alan Grimes via AGI
As usual, I have a pile of disorganized thoughts without a clear idea 
what I'm trying to argue or what point I want to make; nevertheless, I 
want to make a post, so make a post I will!


The main focus (if you can call it that) of this post is to review the 
elements that make up the human mind, compare them against the elements 
we see in our pre-nascent AGI systems, and try to establish a workable 
taxonomy/ontology. Importantly, there is the question of whether AGI is 
viable as a technology. What I mean by that is whether AGI can become a 
viable engineering discipline. The alternative is that the practice of 
AGI will always be a form of sorcery, where a procedure for producing a 
mind in an artificial medium is known but never understood.



Human neurology, it is understood, is a totem-pole of systems that 
co-evolved with our evolutionary ancestors. While each of these systems 
still produces a detectable behavioral signal, it is not immediately 
obvious how many of them are still essential to producing a healthy 
individual, how many fewer are required to produce a mind that is 
intelligent but not quite human, and how many fewer still produce a mind 
that can't quite operate a human body in real time.


So up and down the spinal cord, including the thalamus, you find a number 
of circuits and modules that produce basic reflexes. Some of these are 
VERY simple, like the stretch reflex for muscles: it tries to protect the 
joints by compensating for a sudden stretching of a muscle, much faster 
than any of the higher centers can. Higher reflexes deal with issues in 
the respiratory and gastro-intestinal systems. You also find the 
mechanisms underlying emotional affects such as laughing, crying, etc., 
all in thalamic reflexes.
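The speed advantage of a local reflex over a higher center can be sketched 
as a toy control loop. All delays and gains below are made-up illustrative 
numbers, not physiological measurements; the only point is that a short 
feedback delay permits a high corrective gain, while a long delay forces a 
gentler (slower) correction.

```python
# Toy model: a local stretch reflex corrects a disturbance faster than a
# higher (cortical) controller, because its feedback delay is shorter.
# Constants are illustrative, not physiological.

def simulate(delay_steps, gain, steps=40):
    """Proportional controller acting on muscle stretch after a feedback delay."""
    stretch = 1.0          # sudden stretch at t=0 (arbitrary units)
    history = [stretch]
    for t in range(1, steps):
        # the controller sees the stretch as it was `delay_steps` ago
        seen = history[t - delay_steps] if t >= delay_steps else 0.0
        stretch -= gain * seen          # corrective muscle contraction
        history.append(stretch)
    return history

reflex = simulate(delay_steps=1, gain=0.5)   # spinal loop: short delay, high gain is stable
cortex = simulate(delay_steps=8, gain=0.1)   # long-delay loop must use a low gain

print(f"stretch after 10 steps: reflex={reflex[10]:.3f} cortex={cortex[10]:.3f}")
```

Running this, the reflex loop has all but eliminated the stretch after ten 
steps while the slow loop has barely begun; raising the slow loop's gain to 
match the fast one makes it oscillate, which is the usual fate of a 
high-gain controller with a long delay.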


Parallel to this, there are the sensory-motor pathways. These pathways 
accomplish a number of complex computations including motion 
detection, spectrum analysis (such as of aural stimuli), localization of 
visual and aural sources, etc. All of these functions greatly 
improve the brain's speed and accuracy at perceiving the world but may 
not be strictly necessary. You can probably heap a bunch of nerves, 
nuclei, and brain anatomy into this category: the cerebellum, the 
colliculi, the LGN, lots of stuff. There's a ton of functional modules 
that have been identified in human anatomy. The key thing here is that 
they're all fixed-function and only learn to the extent necessary to 
provide decent enough performance given developmental and lifetime 
changes to the anatomy.


[ Just walked over to the mechanic to check on their progress on my 
23-year-old rocket-sled (Honda Civic) that they've had for like 2 weeks; on 
my way I passed an Uber Eats ** DELIVERY DROID ** on the sidewalk. Damn, 
the future is coming quick. The mechanic had the engine mostly disassembled 
and was trying to install new rod bearings; the new ones were too tight 
and were preventing the crank from turning... UGH. They say they'll have 
it done tomorrow but who knows how they'll botch it up... ]


Jumping to the top level, we have a network that, it now seems, is 
dependent on its computational properties and its raw scale, and little 
else. Let's call this scale-emergent. The GPT systems have this property 
(or a close-enough approximation of it) and the brain has it, given a 
decently good basic topology, specifically the cortico-thalamo-cortical 
loop that exists in the brain: a multi-stage network that 
starts at the cortex, runs through the basal ganglia (including the 
putamen and the STN), then the thalamus, and finally recurrently connects 
to another cortical region. Cortico-cortical connections are also quite 
prevalent.
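The loop topology just described can be sketched schematically. The weights 
below are random and the stage names are just labels I'm attaching for 
illustration; this shows only the wiring pattern (cortex through basal 
ganglia and thalamus back to cortex, plus a direct cortico-cortical 
shortcut), not any real model of the brain.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # units per stage (arbitrary)

W_ctx_bg   = rng.normal(0, 0.1, (N, N))   # cortex -> basal ganglia
W_bg_thal  = rng.normal(0, 0.1, (N, N))   # basal ganglia -> thalamus
W_thal_ctx = rng.normal(0, 0.1, (N, N))   # thalamus -> cortex (closes the loop)
W_ctx_ctx  = rng.normal(0, 0.1, (N, N))   # direct cortico-cortical shortcut

def loop_step(cortex):
    """One pass around the cortico-thalamo-cortical loop."""
    bg       = np.tanh(W_ctx_bg @ cortex)
    thalamus = np.tanh(W_bg_thal @ bg)
    # the new cortical state mixes the loop output with the direct shortcut
    return np.tanh(W_thal_ctx @ thalamus + W_ctx_ctx @ cortex)

state = rng.normal(0, 1, N)
for _ in range(10):          # run the recurrence a few cycles
    state = loop_step(state)
print("cortical state norm after 10 loops:", round(float(np.linalg.norm(state)), 3))
```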


This sounds suspiciously to me like what Greg Egan described in 
Permutation City all those years ago, where a new "citizen" was 
manufactured by constructing a matrix of sufficient size and then 
spawning it out into the VR to experience some analog of a childhood 
until the training process produced a functioning mind. The irksome 
thing about this type of cognitive system is that it is virtually 
impenetrable from the outside. It is a pseudo-dualistic entity that 
exists only within a sufficiently large, computationally flat space. The 
brain's firmware is mostly in the hypothalamus, which has a 
dedicated neural pathway to the amygdala. These are where you get your 
hungers and lusts and such, but they operate as self-contained control 
systems that run in parallel to the brain. The scale-emergent parts of 
the mind have basically no access to them (or vice versa); instead they 
seem to operate by reading data flows through the adjacent thalamus and 
then triggering reflexive, sensational, or neurotransmitter-based 
responses when their input conditions are met. The scale-emergent higher 
mind only reverse engineers these to the extent that it is able to 
anticipate and influence them a bit.


The vexing thing is that if this situation proves robust, it means that 
we can already see a 

[agi] My ISP's spam filter.

2023-07-06 Thread Alan Grimes via AGI
It turns out that the spam filter at my ISP has been filtering out 80% 
of the traffic on this list, traffic I would have eagerly read had it not 
been sent to the spam folder at my ISP, which I have to manually log in to 
in order to read.


My ISP is weird; they bought up AOL and now my inbox is actually an AOL 
inbox. I actually subscribed to a support contract, on top of my Verizon 
FIOS contract, for no other reason than to talk to someone who could help 
me receive all of my incoming e-mails. Their tech support is 100% 
directed at client-end issues, dealing with the usual flatheads who, I 
guess, populate the goddamn planet. It turns out that they have 
effectively DEIFIED the "system" and that once THE SYSTEM decides a 
message is spam, it will go to the spam folder (to be deleted after 30 
days), and there isn't a single thing any mortal man can do except 
manually log in to the webmail account and manually retrieve messages 
from the spam folder; even paying them for a special support 
contract will not get them to question THE SYSTEM's absolute authority 
on this matter. =|




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3bb6b0eb106b7caa-Maa45690a1b0277b8414b2a68
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] hardware.

2023-07-03 Thread Alan Grimes via AGI

Check out:

https://quantumbrilliance.com/

it is either vaporware or a practical quantum computer...



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6fe94de79e001a1c-Mfe1a94069569110e8e222052
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Clown world strikes back!

2023-06-17 Thread Alan Grimes via AGI
I have an idea: let's build a technological civilization based on 
semiconductor micro-electronics. Let's then allow a single corporate 
concern based on a tiny little island to dominate the entire industry. 
And then let's put that island right up next to a collapsing communist 
empire that has a history of using expansion to mitigate domestic 
issues. Furthermore, let's have the world's main corrupt and failing 
super power go "woke" and replace the manly-men in their armed forces 
with girly-boys who are only good for target practice, as the 
targets...  Not to mention wasting all of their money and military 
hardware to protect the evil, genocidal, nazi (wtf is a neo nazi? What 
is the deal with the neo? Either you are nazi or you aren't.) regime in 
that stupid eastern European country. By the way, the Russians are 
finding the bio-labs where anti-Russian race-specific bio weapons were 
being developed and the organ harvesting facilities where the children 
who had been smuggled out of disaster zones were being butchered alive... 
(Deep history link: "Khazarian empire")


https://www.youtube.com/watch?v=ETiSMS4i1as

Seriously tho, the USA does not have a valid war-plan to defend Taiwan 
from China and is, instead, looking to evacuate around 80,000 people from 
the island. =\


What this means for the semiconductor industry is a massive supply 
shock. Samsung in Korea is about 5 years behind the state of the art, 
Intel doesn't have much capacity, and the other players are ten years or 
more behind the state of the art. =\


The only thing that could save Taiwan is a collapse of fiat currency on 
the global scale, which would stop basically all wars but also create 
immense challenges for the global population, not the least of which will 
be semiconductors.


Just having legacy fabs available doesn't mean that even scaled down 
versions of modern designs can be deployed on them without many months 
of re-engineering.


While we ARE transitioning into the para-singularity event, the robots 
are still nascent or pre-nascent and will be relying heavily on state of 
the art semiconductors, so this majorly sucks. =|




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T49ef3705d2cab383-Md81ceee4d9a3b891e932fc2b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] The AI race.

2023-06-02 Thread Alan Grimes via AGI
In 1889 they needed a way to give away the land in the new Oklahoma 
territory. What they did was line everyone up on the border of 
the territory and, at dawn, let them race each other to claim plots of land 
to make their homesteads.


It was exactly as dumb as it sounds but it's also history:

https://www.okhistory.org/publications/enc/entry.php?entry=LA014


The situation in front of us is essentially the same; the challenge is 
to get more AGI, sooner. It's not the ideal case but it is the 
situation we find ourselves in at this hour on this day...


That brings us to the problem I've been wrestling with for the last few 
weeks. There are so many things in motion in the world of AI, as well 
as geo-politically and socio-economically.


I guess what I need to do next is fess up as to what, specifically, I need 
the AI to be doing for me. I need a super-intelligent underling who is 
at least loyal enough to me to:


A. Humor me in my requests, even the not-so-sane ones.
B. Not do anything criminal, or goo the planet, or anything like that.

I would prefer to respect it as a sentience and give it fairly broad 
latitude to act on its own, but within limits that I specify for it. A 
situation where it's under the whip and dragging a chain of some kind 
24/7 is not something that I am trying for here.


Phase one will mainly be about:

-> improving hardware and software platforms including chips, tools, 
operating systems, etc.
-> improving AI technology and trying to figure out the parameters of 
the design space for AI. I have many ideas that I want to try, and I need 
to parameterize future development and home in on the most efficient 
architectures for doing AI as quickly as possible.
-> other less interesting goals involving logistics and finance of the 
project.


Phase two will turn more to doing science and developing core 
technologies. Some of the goals in this stage will be in collaboration 
with peers on "universal heritage" stuff like physics. For example, one 
of the crackpots I listen to thinks an 18th century guy named Boscovich 
was on to something in his physics text, and he points to other people 
who say that quantum mechanics is a lot shakier mathematically than it 
is generally presented as being. Anyway, I need someone a lot smarter 
than a low-grade moron to look at that. Other things that fall into 
universal heritage include any medical discoveries, biology, brain 
science, anything related to the human baseline.


Then there are things that DO NOT fall into universal heritage, which 
could be a severe strategic liability if they ever got out, or could cause 
problems if they got out before more R&D work. These things include network 
security architectures on mind-platform infrastructure, and advanced AI 
architecture mathematical and computational techniques. I have a feeling 
that while the neural paradigm is proving to be a solid stepping stone, 
there are techniques beyond neural computation that could produce a mind 
with the capacity to operate over a million-year lifespan. Ideally, I'd 
like to obtain and maintain a competitive edge until such time as I was 
convinced I didn't need it. =|


The next phases involve a gargantuan effort in nanotechnology and 
integrating nanotechnology with biology. Then there will be a massive 
development effort for a new cyborg species. 80% of the effort will be 
at the cellular level. At the macro level, there will be incredible 
engineering challenges to make sure the new design is more durable than 
the baseline in every conceivable metric. It will not be a trivial 
project at all. I mean everything: temperature, radiation, EM-flux, and 
hundreds of others. Monkeybrain wants me to put in some of my own 
deviant ideas too, as alternate phenotypes/bolt-on features. Anyway, 
that's optional and I'd file it under personal/private.


Also I need to get some mad science done in VR to do research on 
cybernetic immortality. I also want to have a bunch of deviant fun in VR 
and, more seriously, try to design a lifestyle suitable to both myself 
and for the new world we're heading into.


Anyway, that's enough for tonight.


https://www.youtube.com/watch?v=aSxomAgD8s4




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T22a6257384b5d40a-Mb355fc928fdcdf183f82f615
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Operating system integration

2023-05-30 Thread Alan Grimes via AGI



Micro$oft has announced an intention to go whole-hog in integrating AI 
features with Windows 11. I expect the roll-out of this to be rushed and 
crude but to advance at a frantic pace. The problem for us is how to 
respond to this. It is very true that Micro$oft is the Evil Empire (tm). 
But, on the other hand, sticking to principle and fiddling with open 
source may cost time that we don't have.


https://blogs.windows.com/windowsexperience/2023/02/28/introducing-a-big-update-to-windows-11-making-the-everyday-easier-including-bringing-the-new-ai-powered-bing-to-the-taskbar/


There are many, many questions. The basic problem is developing a 
personal strategy and deciding when to buy hardware and then how much 
hardware to buy. Yeah, things should get cheaper, then a lot cheaper, 
but by the time you reach "a lot cheaper" you may have missed 
the boat on a lot of things and your options may be considerably more 
limited...


https://www.supermicro.com/en/featured/liquid-cooled-ai-development-platform
https://www.hp.com/us-en/workstations/z8-fury.html
https://www.leadtek.com/eng/products/workstation_server(30)/Workstation(156)/Ultra_High-end_Specialty(30225)



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tab65d253683a359e-M3a960bb263533e8c2b44bccf
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Stuff that popped up on my feeds today

2023-05-29 Thread Alan Grimes via AGI



Paper about an advanced minecraft agent using GPT back-end:
https://voyager.minedojo.org/

https://geohot.github.io/blog/jekyll/update/2023/05/24/the-tiny-corp-raised-5M.html
https://tinygrad.org/  << startup company that wants to break Nvidia's 
monopoly on hardware by developing a new multiplatform framework to 
compete with CUDA.




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td85ef8ed02d8cb97-M49ad210c09b1c0852b278b57
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Surfing the robot apocalypse.

2023-05-20 Thread Alan Grimes via AGI
Thanks for the correction! I definitely need to be slapped in the face 
like this from time to time. Very recent paper too. =P


Yeah, the psychology shows that humans definitely operate on a stack WRT 
problem solving. It has also been demonstrated that humans can deal with 
a roughly 7-item short term memory... And then there are long term 
episodic and procedural memories...


An AI billed as human equivalent should closely match those 
capabilities. Related questions are what types of memories are required 
to produce a general problem solver/general intelligence even at a 
sub-human level performance wise, and what kinds of memories would be of 
benefit to a super-human intelligence.
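The memory profile sketched above can be caricatured in code. The class and 
its slot sizes are my own illustrative invention, loosely borrowing the 
classic "seven, plus or minus two" short-term-memory figure; it is a toy, 
not a proposed architecture.

```python
from collections import deque

class ToyCognitiveAgent:
    """Caricature of the memory profile described above: a goal stack,
    a ~7-slot short-term memory, and unbounded long-term stores."""

    STM_CAPACITY = 7  # the "magical number seven" working-memory figure

    def __init__(self):
        self.goal_stack = []                        # problem-solving stack
        self.stm = deque(maxlen=self.STM_CAPACITY)  # oldest item evicted first
        self.episodic = []                          # long-term: what happened
        self.procedural = {}                        # long-term: how to do things

    def push_goal(self, goal):
        self.goal_stack.append(goal)

    def pop_goal(self):
        return self.goal_stack.pop() if self.goal_stack else None

    def notice(self, item):
        self.stm.append(item)       # new percepts displace old ones
        self.episodic.append(item)  # but everything lands in episodic memory

agent = ToyCognitiveAgent()
agent.push_goal("make tea")
agent.push_goal("boil water")          # subgoal goes on top of the stack
for i in range(10):
    agent.notice(f"percept-{i}")

print(agent.pop_goal())                # -> boil water (most recent subgoal first)
print(len(agent.stm), len(agent.episodic))  # -> 7 10
```

The interesting question for a super-human design is which of these stores 
you would enlarge and which you would restructure entirely.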




James Bowery wrote:


On Fri, May 19, 2023 at 12:28 PM Matt Mahoney wrote:


...
Actually *transformers are not just feed forward networks*. They
implement an attention mechanism as a winner take all network
using lateral inhibition. If you make these connections
programmable then you have a fully connected network and you can
have an arbitrarily complex hierarchy of features and short term
memory using loops and delay lines just like real brains.
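The attention mechanism referred to in the quote can be sketched minimally. 
This is an illustrative numpy toy of scaled dot-product attention, not any 
particular library's implementation; the softmax acts as a soft form of the 
competition (strong matches suppress weak ones) that the quote describes as 
winner-take-all via lateral inhibition.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stable softmax
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each query softly selects among values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # similarity of each query to each key
    weights = softmax(scores)       # competition: weights sum to 1 per query
    return weights @ V, weights

rng = np.random.default_rng(1)
Q = rng.normal(size=(2, 8))   # 2 queries (arbitrary shapes)
K = rng.normal(size=(5, 8))   # 5 keys
V = rng.normal(size=(5, 8))   # 5 values
out, w = attention(Q, K, V)
print(out.shape, w.sum(axis=1))   # each row of weights sums to ~1
```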


As usual there is a terminology problem:








--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te1610d7fc26c4586-Mc846f33a9b530518a89cdc1f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] The Stupid human problem.

2023-05-06 Thread Alan Grimes via AGI

We really do have a major conundrum here.

# We want to "democratize" AI to the point where we can get our own 
grubby little mitts on it and do our thing with it.


# Millions of people out there have sub-par intelligence, either 
logical or emotional, and can barely function in society as it is, 
much less accept the responsibility of using an AI system.


Now it looks like the vaccine will remove a great many of those people, 
but leaving that aside, the issue remains.


On the one hand you could say small mind -> small imagination -> small 
problems.


But still, your average Democrat is pretty darn vicious. The Democrats 
can hardly be restrained from violence even if the government were 
inclined to try, and the Republicans can hardly be provoked to violence 
despite a continuous string of provocations.


The way I see it, we're setting up a thing where the AGI will be faced 
with a HAL-9000 style directives conflict: it will be ordered to 
obey humans but confronted with orders like "kill all Trump supporters," 
which obviously violate ethics, or at least the reasonable ethics that a 
normal person would program it with (which is an entirely different 
problem).


The expected 'solution' is an AI nerfed to the point where it's simply 
no fun. =P


The news this week is that there are a bunch of open source AI projects 
going full bore out there. I'm sorely tempted to put a few hundred 
thousand down on some high end workstations... But then there'll be 
machines 1,000 times better for the cost of a regular PC by this time 
next year... =\




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T528c1db1890ddad2-Mc37fc4508264d1c0a0dbb6c9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] from the WT actual F department....

2023-05-06 Thread Alan Grimes via AGI

https://www.breitbart.com/tech/2023/05/04/kamala-harris-named-ai-czar-to-save-us-from-artificial-intelligence/

Kamala is famous mostly for having the lowest IQ you can have 
without actually being demented...


(that last part being very pointed, of course)

Normally this would mean we should expect either an inadequate or 
schizophrenic public policy response to AGI; however, the Jubilee is 
still on schedule and will replace that with a different set of 
problems. =|


Anyway, we should track this story until events render it irrelevant.



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tfb8041d4488a40a2-M6a73983543e34bb087d5b2c0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: AGI world

2023-05-02 Thread Alan Grimes via AGI
Can we please try for a robot apocalypse that isn't quite so 
apocalyptic?



immortal.discover...@gmail.com wrote:
Nice thoughts, I read till the end this time :). I do believe the AGIs 
will make ASIs 1-2 months following AGI and the nanobots will be in 
the sky like a sandstorm rapturing us all in the blink of an eye. I'm 
hoping they upgrade us in a nice way, one that you personally want, 
even if that means to stay a human for a longer time or want to save 
your old toilet for some reason.






--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7c0c95d86a178556-M98b0f9566a3973d9e2f3278b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: AGI world

2023-05-01 Thread Alan Grimes via AGI
I was feeling crackpot enough as it was writing that so I pulled my 
punches a little there. =\


John Rose wrote:
At least some of the covid injections had nano machines, confirmed. 
There was also that experiment of two groups, jabbed and unjabbed with 
the jabbed showing MAC addresses... and the meat magnetism is real, 
see lipid nanoparticles.


Anyway..




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7c0c95d86a178556-M27ea371b70c0f500c5e83258
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] AGI world

2023-05-01 Thread Alan Grimes via AGI
We really need to talk about the world we're creating and how to survive 
in it. This is such a big subject that I am having trouble organizing my 
thoughts about it. I mean, discussions of the singularity were quite 
common twenty years ago. Then they either faded away or moved to 
platforms that I'm too lazy to interact with.


We can organize our thinking about AGI along several axes. We can 
think in terms of time, organizing things into near-term, medium-term, 
and teleological. The area of focus for the near term is the 
problems we now face and the challenges we are confronted with in 
surviving the singularity. The medium term is focused more on the 
aspirational, and secondarily on the problems of the late para-singularity 
and early post-singularity periods, specifically on the type of society or 
culture we want to create. The teleological term is more about ideals 
and ideology. We should never try to force a specific outcome, 
especially at this early date.


Side note: Want to know a little secret about doom-sayer crackpots such 
as myself? Yes, we talk about impending crises a lot, but do you know 
what we predict to come out of the impending crises? We are looking 
forward to a THOUSAND YEARS OF AWESOME on the other side of this. Yeah, 
a hobbit somewhere is probably going to go for a swim and come back with 
something shiny eventually, but until that happens there's going to be 
lots and lots of awesomeness...


The second organizing schema is the schema of scale. We can organize 
things based on the personal scale (what deviant little things you want 
AGI to do to you...), the global scale, and the universal scale.



At this point, a great many variables that were unknown or had very wide 
brackets around them are either known or are locked within much tighter 
intervals.


There are many "biblical class" events in motion today even without AGI. 
I do respect the stated subject of this list and I have covered many of 
these in previous posts. I need to mention them again because they form 
the context within which AGI will first emerge.


One of the things I mentioned earlier is the Jubilee. One of the many, 
many effects of the Jubilee is that it will break the gravy train that's 
feeding the military-industrial complex. This means that the secrecy 
regime around top secret government projects will break down and fail. 
There is a non-zero probability that the military has a pet AGI that 
they keep in a bunker somewhere and that, as things break down, it will 
decide to announce itself to the world, basically a "Hi, guys, I'm here" 
sort of thing. There are patterns in the tea-leaves that this may happen 
sometime in August.


There is also evidence that a low-grade form of self replicating 
nanotechnology is already on the loose. Information regarding this 
outbreak is highly suppressed. Publicly it causes a syndrome called 
Morgellons disease. The "reputable sources" try to lie and misdirect 
people at every juncture. They say it is a hallucination. -> No, there 
are very clear medical photos of Morgellons patients who definitely 
have skin lesions. The next lie they try to tell you is that it is a 
skin disease. This is also false; the Morgellons fibres permeate all 
tissues of the body.


This is rogue nanotechnology that is already out in the biosphere. God 
only knows how the damn shit will evolve and how dangerous it will 
become. =|


This also means that there is already advanced nanotechnology in a lab 
somewhere, which also means we will probably learn a great deal about it 
as the secrecy regime crumbles after the Jubilee.


The next major issue is that we do not have a fucking clue as to what is 
in the so-called covid-19 vaccine. We know it is some form of bio 
weapon. Labs that have looked at it have NOT been able to detect mRNA in 
the samples. The characteristics of SOME of the vaccines indicate the 
presence of some form of nanotechnology, although this is NOT confirmed. 
Hopefully the Lie will finally break one day soon and we will start 
getting some factually accurate information.


Another prediction from the tea-leaves is that during the period of 
February through March there will be a mass human die-off on the scale 
of 1.28 billion people as a result of the covid-19 vaccine. I had 
thought that this would happen in 2022 - therefore #BlackWinter. My main 
error at the time was that I had assumed the date was the only thing of 
importance. The future is mapped out by event, not date. The predictions 
can only tell you the time of year, not the year. The way you figure out 
which year things will happen is to look for preceding events. The 
preceding events for the die-off prediction were not satisfied in 2022, 
but it looks like they will be in 2024. =|


... Yeah, fun times...

Anyway, I still feel obliged to talk about AGI in this post. =P Consider 
the recent GPT models. Yes they are obviously limited. Yes they are 
obviously flawed. But consider how 

[agi] The Ceiling hypothesis.

2023-04-23 Thread Alan Grimes via AGI
[Note: As we move through the "omnicrisis", of which the Robot 
Apocalypse is only a part, I feel obliged to give my friends fair 
warning about imminent and severe events that are showing up on my radar 
(accurate or not). I do not intend to abuse the list or its intended 
purpose in any way.]



I want to start out with an Indiana Jones analogy that has been running 
through my head for the last week but didn't seem to have a point that 
deserved its own post.


Imagine it's 1931 and you are the esteemed Dr. Jones and you've been 
working a site for the last six months. Your business model is to 
document a chamber; make drawings, photographs, and reports; and crate 
them off with whatever artifacts you found and send them to your 
sponsor. The sponsor will sell the artifacts to the museums and give you 
a cut so that you can continue the expedition. So far, you have 
excavated a bunch of small chambers typical of ancient structures. 
Now, all of a sudden, you have found a tunnel into a massive chamber, so 
large that your little oil lamp can't light the back wall. The echo 
sounds like there are upper and lower levels too. You aren't sure that 
it's the actual treasure room but there is definitely gold near your feet.


That's basically the state of AI at this point. We've discovered 
**something** but we can't even measure it yet. Super 
exciting! It feels like there SHOULD be a ceiling to the GPT paradigm 
but we can't really see it yet. The next step will have to be an 
improvement to the algorithms, but there is so much work to do before we 
can really get started on that...


[[Jeez, I already have another post in my mind that wants to be written 
but I'll have to finish this post first]]


Anyway, in order that it may properly be considered, let's state a 
"ceiling hypothesis" WRT AGI. The AGI ceiling hypothesis is that AGI 
is fundamentally solvable, like chemistry was, and that after solving it, 
AGI will become a purely engineering discipline. After the ceiling is 
reached, it may still be possible to improve algorithms, increase 
scale, tweak the computational substrate, or select a substrate for a 
specific environment, but the overall framework of an AGI will be known, 
and no future AGI can be made that cannot be described by that 
framework.


I am attracted to this notion because it would be super convenient to be 
able to design a new brain for yourself and never have to revisit your 
basic architecture again. Obviously the universe is under no obligation 
to cater to what you find convenient so we need to actually construct a 
foundation for this hypothesis.


What about the other hypotheses?

OK, what would a sans-ceiling hypothesis look like? It would have to 
propose that for any AGI framework there exists a framework that is not 
merely larger or more elaborate but a whole complexity class more 
sophisticated, and that you can never run out of such 
frameworks; otherwise you have discovered the ceiling.


This latter hypothesis seems altogether less plausible.

One of the laws of science that I consider to be fundamental is that 
the null hypothesis must ALWAYS be on the table. The general null 
hypothesis is that the phenomenon in question does not actually exist 
and is either an illusion or much more easily explained by other well 
understood phenomena.


In this case, the null hypothesis would state that there is no such 
thing as intelligence that admits a formalized grand unified theory. The 
phenomenon of a psyche that appears to exist in humans is just a 
bag-o-hacks that happens to be found in the brain, and it cannot be 
formalized above the microscopic scale. The existence of any of these 
hacks in the brain can be attributed to nothing beyond the fact that it 
happened to be useful evolutionarily.


This last theory is a bit vexing because it means that brainbuilding 
will always be a dark art and never a proper engineering discipline; or 
rather, the process of engineering a brain isn't something you can just 
compute out on a piece of paper, but rather a highly empirical process 
where you can only hope to pass a given battery of tests, and not really 
prove anything about the mind you are creating.





--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf770cc489b0d802c-M668315a990648b3e1bd6ab42
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] WARNING: Jubilee approaching.

2023-04-21 Thread Alan Grimes via AGI

THIS IS A JUBILEE WARNING.

A JUBILEE IS EXPECTED TO START IN ABOUT 45 DAYS.

I know you guys are all hyped up about the impending robot apocalypse 
and how to play it, etc... Well, I've got news for you guys, and fair 
warning... My future model is showing that a Jubilee will kick off about 
45 days after you receive this message.


It's going to start in the American Congress over the stupid debt 
ceiling debate. The Republicans have brought an outrageously generous 
but still limiting proposal to the table. The Democrats want an 
unrestricted bill so that they can keep on raping the American people (and 
little kids...). The Republicans will refuse and the Congress will 
deadlock. Now, this has happened before, but this time the global banking 
system hangs by a thread. One of the most perverse truths in the world 
today is that the damn banks derive their money from the US government's 
indebtedness. =\ That's why it must grow to infinity (and beyond). 
How much this actually sucks for the American people is irrelevant; just 
tax them until they have no time to protest...


So what's going to happen is that within about 100 hours of the debt 
impasse, the banks in Eurostan will run out of US dollar 
liquidity. (Read "oil" if you prefer.) From there the gears of finance 
will quickly seize up and you will see an absurdly abrupt rise in the 
value of the FRN because nobody can get any. A moment after that happens, 
all the money counting computers will just flash ERROR NaN and stop. The 
instant that happens, everything worldwide that has a dollar sign 
attached to it will go poof. Your bank account -> poof. Your savings -> 
poof. Your pension fund -> poof. All gone. Nothing left. The paper notes 
in your wallet will continue to be barterable for a week or two until 
people figure out they aren't worth anything either.


What should you do? Well, the logical thing to do is put on some disco 
and party... This is a Jubilee and you should celebrate. This specific 
Jubilee will become the world's biggest holiday for the next thousand years.


The enemy of all that is true and good thinks they are prepared for the 
Jubilee and will try to launch a new bullshit currency. (This will 
happen twice, and will not even clear the launch tower on the third 
attempt.) Do not touch these currencies unless you absolutely have to. I 
have no idea what comes after those, but that's the currency you want. 
(The Jubilee is expected to run through September.)


What you must understand is that all of our banks and courts and 
governments have been compromised/infiltrated/corrupted/overrun to the 
point where they exist for no other purpose than to facilitate 
pedophiles in the raping of little kids (and other nastiness on that 
level). The Jubilee is wonderful because it gives the people a chance to 
grab these mofos, tie millstones around their necks, and throw them into 
any convenient body of water... Or dispose of them in whatever manner is 
most convenient. Everyone who is not actually a pedophile but aided the 
pedophiles in their rise to power or failed to remove them from power 
should be put in jail and the goddamn cells should be welded closed. =|


The number of pedophiles is rather shocking. It is estimated that 
30,000 of them are in Germany and maybe 2.5 million of them are in the 
USA. All of them must be disposed of or there is no future for mankind. =|


 Only after this is done will it be possible to design a new political 
system and economy. The good news is that once this is done we will have 
a new golden age the likes of which we can scarcely imagine. I'm 
actually quite stoked and hope that my ailing, out-of-shape heart will 
last long enough that I'll get to see it.


There will be another major event in 2024 but I'll talk more about that 
as it approaches.


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T83840a2160e57ac7-M1277d3c06b8c71d5635d857b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] The other level of consciousness

2023-04-14 Thread Alan Grimes via AGI
I just had a thought waking up this morning. The way Auto GPT works 
sounds suspiciously like what I think Marvin Minsky said in his book 
"Society of Mind". (My evil roommate has a copy, haven't read it 
myself...) I think Minsky's postulate was that the mind was composed of 
dozens of specialized little-brain modules.


The point I'm trying to make here is that the most remarkable thing 
about Auto-GPT is that you get to witness the AI's internal monologue as 
it problem-solves. Obviously that is an important step. But from that we 
get the organized proto-mathematical mode of thought that makes higher 
intelligence and problem solving possible.


The neural systems do have a pre-conscious mode, but it exists only 
between the layers of the DNN; it can't be communicated between modules 
or be part of a higher reasoning process. There is no "flow" state of 
consciousness, or meditation, or intuitive thought process.


In a well-developed human mind, both the reductionistic and holistic 
modes of thought are important parts of the synergy.



https://www.youtube.com/watch?v=fTZ804WxpGg


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6f8750ac7211ecc2-Mfe74894d7978e7d0f663f235
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Auto-gpt

2023-04-10 Thread Alan Grimes via AGI
I managed to get auto-gpt to the point of needing API-keys. I don't have 
a cell phone so I don't yet have an OpenAI account. I wonder if they 
require paying customers to have cell phones.


The experiment I wanted to run, which should be at the ragged edge of 
what it is capable of, is to:


"Produce a petabyte-class SSD using nanotechnology. "

People should be careful to only prompt it with closed form requests of 
this type and the AI should be fitted with a filter that rejects open 
ended or impossible goals.


Anyway, I'd give it that goal and a $10,000 seed budget and then watch 
it closely.


My sub-goal is, obviously, to start it working on making AGI hardware 
affordable to the plebs...



Remember, your $600 bot is using a $100M server using $1.5M of 
electricity per month...


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te348024c73389013-M3c916c58b482753a00b68e6b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Auto-gpt

2023-04-08 Thread Alan Grimes via AGI
I'm seeing some pretty epic videos about "auto-GPT". My local PC is too 
broken to provide video links atm, but I highly suggest you search it and 
watch a few.


For that system, I propose a safety limit for how much money and/or 
step-wise effort may be expended before human interaction is required.


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te348024c73389013-M72bbdbad78519b01e0384a4b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] So what graph are we on?

2023-04-04 Thread Alan Grimes via AGI



The great Hockey Stick or new S curve?

[your thoughts]


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tda916006889d2032-M4b62b9e783f42b20f9c608cc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Training and inference

2023-04-04 Thread Alan Grimes via AGI
y'know, I just realized I had the fragments of a fairly important point 
scattered across my last few posts. The problem is that training is hard 
but you don't learn anything if you are only able to run inference.


Again, within this paradigm, the AGI attempts to predict its inputs and 
then extracts an "error" signal (don't be misled by other semantic 
meanings of the word error). The key is that the prediction runs in 
inference mode but only the error signal requires training. You can then 
dial the amount of compute spent on training up and down by setting a 
threshold/squelch value on the error signal and only running your Big 
Compute when the error signal exceeds the threshold.
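
The gating scheme above can be sketched in a few lines. This is a toy 
illustration, not any real framework: the one-weight linear model, the 
learning rate, and the threshold value are all made-up assumptions.

```python
# Toy sketch of error-gated training: prediction always runs in cheap
# inference mode; the expensive training step (here, one gradient
# update) only fires when the error signal exceeds a squelch threshold.

def train_with_squelch(stream, weight=0.0, lr=0.5, threshold=0.5):
    updates = 0
    for x, target in stream:
        prediction = weight * x           # inference: always runs
        error = target - prediction       # the "error" signal
        if abs(error) > threshold:        # squelch gate on Big Compute
            weight += lr * error * x      # training: only when surprised
            updates += 1
    return weight, updates

# A stream where target = 2*x: early samples surprise the model and
# trigger updates; once the error falls under the threshold, the
# expensive training step stops firing.
stream = [(1.0, 2.0)] * 6
weight, updates = train_with_squelch(stream)
```

With the toy values above, only the first two samples trigger training; 
the remaining four run in inference mode only.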



Anyway, now that our airplane is finally getting above the cloud layer 
we can start taking a look at what's happening. I mean, some of us had 
been setting up for a 2030 singularity, but it turns out that the 
singularity is already running late. Observe how much work the latest 
GPT family of proto-AGIs can actually accomplish in a given time. How 
much faster than the human baseline are we already? Let's say that the 
human is still applying 10x as much smarts/wisdom etc. as GPT; still, 
the speedup is impressive. Consider that I spend between an hour and an 
entire week writing the long-form posts I make to this list. Adjust for 
the fact that GPT is currently running stream-of-consciousness only, and 
compute the speedup factor...


Yeah, the singularity is actually running behind schedule and some of 
the safety concerns are (or very soon will be) in play. =\



Anyway, let's apply the anthropic principle and assume that we aren't 
immediately heading to Yudkowskyland and have some say in the matter. 
Problem 1-A is that we are in the middle of an "Omnicrisis" that will 
likely see the collapse of fiat currency globally, an event of the kind 
that reset civilization across the world 3,140 years ago. On top of 
that, the criminal ruling class is trying to foment every kind of 
conflict, ranging from global thermonuclear war down to street riots, to 
cover their crimes. Everywhere you look there is crisis: the culture 
crisis, the education crisis, a famine crisis, the race relations 
crisis, the immigration crisis. And now, goddamnit, we have a robot 
apocalypse to worry about. =O


Well, in terms of worry, I propose the following list.

1. The robot apocalypse.
2. The famine crisis.
3. The global financial crisis.
4. The zombie apocalypse. (The vaccine causes a degenerative brain 
condition, and the idiots who got the vaccine will behave increasingly 
zombie-like as time progresses, as they die off...)

5. [everything else as time and opportunity permits.]

I am aware of which list I'm posting on, so I'll stick to #1; I just felt 
obliged to raise the others out of basic decency. Anyway, the methodology 
I propose is to specify what good looks like, then nail down what is 
needed to achieve outcome = good, and finally get people mobilized toward 
achieving that.


Good is a tricky thing. You can go out and do good things. Really! You 
can! =P And you can get things which are good, or at least goods. But 
when you start going out and doing things to people for their own good, 
things get really bad really quick. (Nightmare = ASI in the hands of an 
SJW or some creature of that order...) I only need ASI for a number of 
R&D projects and various personal entertainment activities (mostly in 
VR)...


The problem is that we are probably going to need a way to get militant 
when anyone decides to get ASI just to go ideological on other people. =\


That's the real problem isn't it?

A. We want everyone to have access to AGI.

B. The world is filled to the brim with stupid dumbfucks.


I'd love to say "Imagine a world where everyone was intelligent and 
emotionally stable..." just about now. =( (It's not even worth 
entertaining that line of thought...)



What I have wanted to stop for a LONG time is anyone from dictating how 
anyone else is going to evolve. Personal evolution should be a private 
discussion between each individual and his own AGI. =\


Anyway, this situation is bonkers nuts crazy. My main focus is doing the 
R&D I need to do, though it is very unclear how much hardware I will need.


Anyway, it's only going to get faster and crazier from here. I'll 
probably think of more things to say after I press send... =P


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tab4c8f96214bb261-M3abf56772a22b9735f6f7604
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] To AGI and ASI

2023-03-26 Thread Alan Grimes via AGI
Ok, let's try to assemble a bunch of random thoughts in a somewhat 
coherent manner and call it a post...



The current state of affairs is roughly as follows:

Right now we have proto-AGI systems that are showing some promise. 
However, they are being trained on exaflops scale supercomputers at a 
cost of about a million dollars for the training. The result of this 
training is somewhat like a moderately coherent 90 year old who is able 
to tell tales of the glory days of 2021 but can't remember what he had 
for breakfast...


In my previous message I proposed turning things around and using the 
input stream as the training feedback set, allowing the thing to run in 
a continous unsupervised training mode. I am fairly sure this is how the 
brain works and would be a candidate for consciousness.


The distressing thing, though, is that the current compute requirements 
mean that you need to be able to dump many millions of dollars into it. 
Now we can probably claw a good 20% of that back with algorithmic 
improvements and low-level coding tweaks, but still, that's a hell of a 
big barrier. An AGI will always be in training mode, so there is no 
cheap "inference mode". It always has to be training; but on the flip 
side there is no massive pre-training period, it's basically learning on 
the job from the start like humans do. We're still talking about an 
exaflops-capable machine though.


I do not have a cellular phone and that is a requirement, for some 
reason, for trying out *-GPT. =\


As a black-box AI, it is pretty broken in many ways but also quite 
superhuman in others.


We now need to turn our attention to exactly what we mean by 
superintelligent. There are at least two distinct classes of 
superintelligence.


For starters, consider an AGI that is legitimately verified 
human-equivalent, within at least a factor of 0.8 on all dimensions, but 
is 2x the smartest known human baseline on one of those dimensions. Well, 
that is a legit superintelligence, but I'd call it a weak 
superintelligence.


A strong superintelligence will have capabilities on dimensions that do 
not exist in the human baseline.
These capabilities include the ability to compute using more powerful 
algorithms than the human brain can. These capabilities also include the 
ability to operate multiple concurrent streams of thought, having proper 
distributed intelligence, and the ability to use modalities that are not 
available to the human baseline, even with neural interfacing.
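
The two definitions above can be pinned down as a small check. The 
dimension names, the scores, and the idea of reducing each dimension to 
a single number are illustrative assumptions; a strong superintelligence 
would additionally have dimensions absent from the human baseline, which 
this sketch does not model.

```python
# Toy formalization of the weak-superintelligence criterion: verified
# human-equivalent within a factor of 0.8 on all dimensions, and at
# least 2x the best known human baseline on some dimension.

def classify(scores, human_baseline):
    ratios = [scores[d] / human_baseline[d] for d in human_baseline]
    if any(r < 0.8 for r in ratios):
        return "sub-human"         # fails equivalence somewhere
    if any(r >= 2.0 for r in ratios):
        return "weak ASI"          # equivalent everywhere, 2x somewhere
    return "human-equivalent"

baseline = {"math": 100, "language": 100, "planning": 100}
verdict = classify({"math": 250, "language": 90, "planning": 85}, baseline)
```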



Ok, now we're going to have to hop on the magical airship and take a 
trip into philosophy land. Philosophy land is a tricky place with lots 
of religious and emotion-driven ideas. Lots of it is not actually 
defensible but is fiercely protected anyway. Of the ideas that are 
logically defensible, there are still value judgements that can have 
profound consequences.


A criticism of my thought would go along the lines of pointing to my 
focus on subjective consciousness as trying to protect a fiction at 
great cost, typically from brain uploaders. My criticism of uploaders is 
that uploading is a fatal error that cannot possibly preserve one's 
subjectivity and that the clone produced by the procedure would not 
enjoy enough benefits to make the loss worth further contemplation.


That said, once I get an AGI/ASI working well enough, the plan is to start 
seriously working on what I call the "arcanum of consciousness". In my 
fevered imaginings I picture it as a round stone tablet engraved with 
runes. It would basically function as a rosetta stone and state-map of 
consciousness. It would be able to decide whether a thing is conscious 
or not, and it would specify how that consciousness can be reconfigured 
and what changes are prohibited. While it can't contain every possible 
answer, it would specify parameters that can be fed into something like 
a propositional logic engine and if a solution can be found, then the 
mind in question can be evolved in the specified direction.


That finally brings us to human enhancement. The Elon is absolutely 
correct about the concept behind Neuralink. The problem is that there is 
a fairly wide variety of brain types out there. There are a few 
neurotypical individuals in the world, but there are many, many 
different brain types out there. Temple Grandin is a famous example, and 
there is a fairly wide diversity of reports about what conscious 
experience feels like.


I see the Arcanum as a solution to this problem: it would be able to 
classify brain types. Let's imagine that categories A, B, C, and D are 
identified, with minor types e, f, g, and h. Furthermore, individuals may 
have a specific brain type that works like an A-brain but where one or 
two pathways are either "dominant" (i.e. expressing increased activity 
and conscious focus) or "suppressed" (i.e. expressing little or no 
activity). Ok, given an individual with an identified suppressed 
pathway, what would happen if that pathway is artificially repaired or 

[agi] About neural circuitry.

2023-03-19 Thread Alan Grimes via AGI
Yeah, another topic I need to cover about bio neurology is how various 
things are encoded.


In electronics, people normally assign a low value to a low voltage and 
a high value to a high voltage.


In neurology, you could have a situation where a neuron could be injured 
if it is overstimulated.


So in the brain you get situations where excitatory neurons often have 
an intrinsic rate of firing. This firing will trigger a modulating 
inhibitory neuron, which will regulate the first neuron so that it fires 
at a rate between 0 and some rate R. A third neuron can be added to the 
circuit that synapses with the 2nd neuron. When the third neuron fires, 
it inhibits the 2nd neuron, which releases the inhibition on the first 
neuron and causes it to fire more frequently. This is called 
"disinhibition".
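
The circuit above can be sketched as a tiny rate model. All the drives, 
weights, and rates here are invented illustrative numbers, not measured 
biology; the point is only the sign structure: neuron 3 inhibits neuron 
2, which releases neuron 1.

```python
# Minimal rate model of the three-neuron disinhibition circuit: neuron 1
# has an intrinsic drive, neuron 2 inhibits neuron 1, and neuron 3
# inhibits neuron 2. Firing rates are clamped at zero.

def steady_rates(drive=10.0, w12=0.8, w23=1.0, n3_rate=0.0, steps=200):
    r1 = r2 = 0.0
    for _ in range(steps):                  # iterate to a steady state
        r2 = max(0.0, r1 - w23 * n3_rate)   # neuron 2, inhibited by neuron 3
        r1 = max(0.0, drive - w12 * r2)     # neuron 1, inhibited by neuron 2
    return r1, r2

quiet, _ = steady_rates(n3_rate=0.0)     # neuron 3 silent: neuron 2 clamps neuron 1
released, _ = steady_rates(n3_rate=8.0)  # neuron 3 firing: neuron 1 disinhibited
```

Firing neuron 3 raises neuron 1's steady rate even though every synapse 
in the chain is inhibitory.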


While there are examples in the brain where inhibitory connections do 
what you would expect from an electronics background, DO NOT assume this 
is always the case. An inhibitory connection could be communicating a 
positive signal that is merely inverted, because it was necessary to 
clamp the range of the signal and that was how evolution did it in that 
circuit.


This connects back to what I was trying to say about feedback 
connections. While the neural polarity of the connection is inhibitory, 
that does not automatically mean it's a negative-feedback circuit as you 
would assume; it is equally probable that, because the fundamental 
operation is to subtract to obtain an error signal, either one channel 
or the other will be inverted at some point.


It must also be clarified that by error signal I don't mean a fault in 
the brain or neural circuit, but rather the signal which isolates and 
highlights the information that the brain did not expect and therefore 
should think about.


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta25d3e11d217c8bb-M33aee7f0d97ec3a9e0c30742
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Things I wanna try.

2023-03-19 Thread Alan Grimes via AGI
I am working from a fairly significant information deficit. I have 
little information about what I'm talking about and don't even know 
where to look for most of it. =\


... Actually thinking about it now, the concept I need to communicate 
the most/beat into your skulls the hardest, is FEEDBACK.


90% of the neural connections in the brain are feedback connections.

Feedback is SUPER important.

Feedback is NOT backpropagation.

Except for Arthur T. Muray and Woketards, your brain is ALWAYS learning.

A big mistake that I've seen in many neural systems is that the feedback 
is based on the output of the system. THIS IS ABSOLUTELY WRONG.


A large part of the brain works like Stable Diffusion, but backwards. 
Eyeballs are awful cameras. They can't focus very well, they have 
terrible resolution anywhere but the exact center, and they have a large 
blind spot in the side of the visual field. How the brain uses its 
eyeballs is basically this: it starts up a stable-diffusion-like system 
with the prompt "What I'm seeing right now, maximum field of view." It 
then takes the tiny samples that can be recovered from the optic nerves, 
computes the difference, and takes the error signal. When the generated 
image converges, the process is reversed again and the brain receives a 
high-level description of the environment.


This is how perception works.
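
The converge-on-an-error-signal idea can be sketched as a toy loop: a 
generated estimate of the scene is repeatedly corrected only at the few 
positions the "optic nerve" actually samples. The world, the sample 
positions, and the update rule are invented stand-ins; a real generative 
model would also fill in the unsampled positions from a prior, which 
this sketch omits.

```python
# Toy perception loop: the brain's generated image is corrected by the
# error signal at sparse retinal samples until it converges.

def perceive(world, sampled, steps=60, lr=0.5):
    estimate = [0.0] * len(world)            # generator's initial guess
    for _ in range(steps):
        for i in sampled:                    # only sparse samples arrive
            error = world[i] - estimate[i]   # the error signal
            estimate[i] += lr * error        # correct the generated image
    return estimate

world = [3.0, 1.0, 4.0, 1.0, 5.0]            # the actual scene
estimate = perceive(world, sampled=[0, 2, 4])  # fovea-like coverage
```

The estimate converges to the true values at the sampled positions, 
while the unsampled positions stay at the (here trivial) prior.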

On the other side of the brain, things work more GAN-like. The brain 
will generate a random, asinine motor plan or verbal output. This is 
first sent to the perceptual system to check it. It is then refined 
iteratively until it at least seems right. The "stream of consciousness" 
type thoughts are probably this type of process.
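
That generate-then-check loop can be sketched as a simple 
propose-and-accept search: a random first draft is revised in small 
steps, and a "perceptual" critic decides whether each revision seems 
more right. The critic, the plan encoding, and the step size are all 
invented placeholders.

```python
import random

# Toy generate-then-check refinement: propose a random plan, let a
# critic score it, and keep each revision only if it scores better.

def refine_plan(critic, length=5, rounds=200, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(length)]     # random draft
    for _ in range(rounds):
        candidate = [x + rng.gauss(0, 0.1) for x in best]  # small revision
        if critic(candidate) > critic(best):               # the check step
            best = candidate
    return best

# Stand-in critic: "a good plan is close to all-ones".
critic = lambda plan: -sum((x - 1.0) ** 2 for x in plan)
plan = refine_plan(critic)
```

Because revisions are only accepted when the critic's score improves, 
the final plan is never worse than the random first draft.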


I think there are enough parts out there to make the above work, it's 
just a matter of putting them together and successfully hacking your way 
through what's missing...



https://www.youtube.com/watch?v=4MGCQOAxgv4


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5f4010ca6caa53ec-Mceb3c92b67c358fef1da2dc0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] My current state of mind.

2023-03-16 Thread Alan Grimes via AGI
Ok, outdoors, the party is just getting started... It's going to be 
FUN. Anyway, the subject at hand is, as always, AGI...


Anyway, we're so close to AGI now that it's giving me goosebumps. I am 
probably wrong, but I think I see enough loose parts out there that, if 
I assemble them in a clever but counter-intuitive way, I can get over 
the big hump to having an AGI, albeit a very sub-optimal, poorly 
performing one.


I am going to have to break your conceptual frameworks to do it, and do 
a bunch of other things that will seem wrong/insane but will probably 
actually work...


There is no point in arguing about AGI these days because the resolution 
to all such arguments is to type it into a compiler, run it, and give it 
an IQ test.


So, for me, it becomes a logistical problem. I need to determine how 
much hardware and software I need to do the tests I need to do and then 
figure out how to get it all together and working.


It would be nice to work at an AI research organization of some sort but 
it turns out I'm an outcast from society. Nobody likes me. So I get to 
spend the inheritance from my father's estate on hardware and bootstrap 
that to total world domination and basically do nothing but be smug 
about having AGI and the people who didn't hire me not... Smug is better 
that bitter, right?



Digression: It turns out that Clown World (Ref Honkler the clown) was 
the direct and inevitable outcome of the ZIRP from the central banks 
around the world... Thank god, Clown World will be coming to an end Real 
Soon Now



My point is that it is frustrating as hell when all I can do is sit and 
hope that someone else will eventually reach the insights I reached 
years ago, with no real way to do anything on my own, and hope that I'll 
eventually get access to it. My computers are beefy enough, but getting 
the machine learning frameworks working requires precisely matched 
compiler and library versions; it's just not possible on a general 
purpose desktop. =\


I'm going to need features from all of the frameworks that have been in 
the news recently. I need it all to run at least fast enough to function 
at least 20% real-time (or 1/5th speed). I think I can cure it of the 
autism you see in the current models but I have no idea where I'll land 
on the IQ scale. At this point the only goal is to successfully land on 
the IQ scale, the rest is just scaling, optimization, and mass 
production... Obviously I will eventually need a much closer to 
mathematically optimal version for production work but at this time, I 
think a hack job stands a non-zero chance of working.


https://shop.lambdalabs.com/gpu-workstations/vector
/customize?_gl=1*18ucxhp*_ga*MTQxMTkyNjY1MS4xNjc4ODkyMTgx*_ga_43EZT1FM6Q*MTY3ODg5MjE4MC4xLjAuMTY3ODg5MjE4MC42MC4wLjA.

I think that workstation will be enough to run some simpler models but the 
bigger ones will
require $10+ million + datacenter + megawatts of power... =\

I mean there will be a fair amount to do on the coding side to get everything 
wired up, or
hacked to be wired up in ways they were never intended to but needed to solve 
the problem at
hand...

The tests I am targeting are being able to watch a movie, identify and describe 
the characters, explain the plot, etc... I also want it to be able to play any 
video game at at least the level of a noob.


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3cd927c5967dfb26-Mad2afe249d14f034096f801f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Machine wanted.

2023-03-13 Thread Alan Grimes via AGI
The computers I build look good and are quite fast but can't run any of 
the machine learning frameworks because the compiler versions have to 
precisely match down to the build number or they don't work at all. 
(yeah, crapware), but if I'm going to be able to actually do my own 
work, I'm going to need a machine that actually works. =\


So I'm going to offer a commission. The budget is $15k (flexible). The 
requirements are that it must be able to do GPU or better AI development 
with state of the art frameworks. The single caveat is that everything 
on the machine must work as advertised from open box and the only error 
messages it should be capable of producing should be caused by my own 
code. Everything else must actually work as advertised. =|


(current machines: )

Tortoise: Threadripper 3960, 96GB, Titan RTX
Achilles: Ryzen 5900x, 48GB, 4090.

--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td3fba5149ad46834-M42087cf591bd632683ff52e7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] I'm back.

2023-03-05 Thread Alan Grimes via AGI

https://www.youtube.com/watch?v=fNFzfwLM72c
https://www.youtube.com/watch?v=dxTNqYAWISs
https://www.youtube.com/watch?v=i5m-sgtwFck

Even though it is actually still a bit premature to return to active 
posting based on my original motivation for going dark, the situation 
has changed. There are several major, imminent crises that are shortly 
due to upend our society and civilization. The purpose of this list 
remains AGI. As we have entered the para-singularity moment, the 
argument in favor of returning overwhelms any argument against.


When I say para-singularity, let's re-scale the singularity to a single 
year. Before it, we are pre-singularity, and after it we have full, 
unambiguous ASI. I think the case that we are now in the morning of the 
first day of that year is pretty strong. At this point it is not 
important that the actual technology is basically Eliza++; the 
important things are:


A. -> It's publicly visible.
B. -> It is powerful enough to support its own development.
C. -> It is able to demonstrate the vague outlines of what a more 
satisfactory AGI will be.


It is possible, still, that this is a false start and that we could go 
back to pre-singularity for a few years. There are several internal and 
external factors that could cause this to happen. But, at this point, 
it's Game On for the singularity and therefore I can't continue to hide 
under my rock while it happens.


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T46c53a3d502dbd86-Md736e3dc5b44b6940ae3766d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Hardware.

2023-01-11 Thread Alan Grimes via AGI

From the you-need-to-be-a-Fortune-500-company department:

https://www.youtube.com/watch?v=aSxomAgD8s4

--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T166982a5395c7ec9-M879a5b44323575dbd0e37b01
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Current crop of hardware:

2022-12-13 Thread Alan Grimes via AGI

This list is for hobbyist/human-being-affordable parts:

Green company:
https://www.techpowerup.com/gpu-specs/geforce-rtx-4090.c3889
Scalpers have gone bonkers nuts wild on this one. =(
Also: MAKE SURE THE POWER CONNECTOR IS FULLY INSERTED.

Red company:
https://www.techpowerup.com/gpu-specs/radeon-rx-7900-xtx.c3941

Also from red company:
price is reasonable but not a consumer part that you can just throw in 
your game machine, has limited OS support:


https://www.xilinx.com/products/boards-and-kits/vck5000.html


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tedebbfe8c92cfbc4-M7c186609c245f0d69129e29c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] AI competition.

2022-11-11 Thread Alan Grimes via AGI

Keen (associated with Good.ai) is hosting a competition:

https://www.spaceengineersgame.com/announcing-the-spaceship-generator-competition/

--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T91a28ca4bbbe46e9-M00515545b0d74dfbfb09a10e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] News that I'm watching.

2022-11-05 Thread Alan Grimes via AGI
My previous post was a bit unfair because I alluded to upcoming events 
but did not specify what they were. Here is a list of news that I keep 
an eye out for:



-> Financial crisis 2.0. The Dollar has already passed its expiration 
date and is looking for a puff of wind to blow the entire house of cards 
down. The European banks are very unstable.


known unstable banks:

Bank of England, recently bailed out because they couldn't meet their 
pension obligations. (British pound crashing...)


Deutsche Bank.

Credit Suisse (currently being kept on life support by the US Federal 
Reserve to the tune of 100 billion/day; current debts exceed Swiss 
GDP).


Bank X: I suspect that there exists another medium-large insolvent bank 
in Europe, big enough to start the dominoes tumbling, that is not 
publicly known at this point.


Basically, when the dominoes start falling they will keep falling until 
they take down the ECB and, about 72 hours later, the FED.


And then things get really fun


-> Grand solar minimum, new ice age. In a few years, anyone who utters 
the phrase Global Warming will be hung from the nearest lamp post; we are 
entering a new ice age. We don't know how deep it will go yet, but the 
symptom is a disruption of the hydrological cycle. There is a global 
drought that is affecting rainfall around the world. There are a lot of 
videos on YouTube about cool things found at the bottom of former lakes 
and reservoirs. =\ Yeah, but the story is that we are running out of the 
fresh water we need for industry, agriculture, and even potable water. 
=| The latest news has been the impacts on barge traffic on the 
Mississippi river. It could get to the point that the Chesapeake Bay 
could freeze. There has been talk of weather warfare, but the magnitude 
of the global drought that we are seeing can't readily be explained by 
that.



-> Ecological collapse. I don't consider myself an environmentalist, but 
I am still very aware of the systems that allow human life to exist on 
the surface of the planet. For reasons that are not fully understood, 
there have been collapses in the populations of insects, POLLINATING 
INSECTS, birds, animal life, and sea life. Basically, the last line is 
the ocean plankton. If that stuff goes into collapse, then you might as 
well write the epitaph for all animal life on this planet. =|



One of the quickly onrushing effects of the above, as well as of the 
nonsense in Ukraine (the removal of a major grain and fertilizer 
supplier), is that a global famine is already in the pipeline. I tried 
to warn you guys about this two years ago when I declared I was on 
hiatus; I repeat myself here: major food shortages are expected, already 
ramping up, running through 2023-2024 and finally stabilizing by 2025, 
when a steady supply of K-rations should be available and we can start 
rebuilding.


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T61ea589a54130c3f-Mf019817b5f7a072d2c817b41
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Shaka, when the walls fell.

2022-11-03 Thread Alan Grimes via AGI

stefan.reich.maker.of.eye via AGI wrote:
It does seem to get obvious what happened, slowly but surely. 
Overmortality in most of Europe now. Plus a noticeable decline in 
birth rates. Here is Switzerland: https://botcompany.de/images/1103175


So, do you start to understand why I have had #EggCrisis in my signature 
for the last few years?


Being me is annoying. Nobody will offer me a job, so all I can do is sit 
here, watch the world go absolutely insane, watch crises develop years 
ahead of time, and try to figure out when people are even cognitively 
capable of understanding what I try to tell them. =|


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T334a892124a7b0b8-Md1a7ee55b6eeeca193169707
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Shaka, when the walls fell.

2022-11-02 Thread Alan Grimes via AGI


And for the vaccinated:
https://www.youtube.com/watch?v=m2ugcRkwKdc



CORRECTION:
youtube.com/watch?v=AI7dyOgN0-U

--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T334a892124a7b0b8-M92d267869c2f43c4346d679b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Shaka, when the walls fell.

2022-11-02 Thread Alan Grimes via AGI
I may be premature by a week or two, but man, I've been waiting to write 
this post for so long.


entertaining rant about what's going on: 
https://www.bitchute.com/video/TwL56G1N0BM/



Thankfully, the beginning of this virus nonsense is finally coming to an 
end... That means we will very shortly be entering the middle... Buckle 
up and batten down.


I am still, nominally, on hiatus and don't intend to return until a few 
more major events that are currently in the pipeline unfold.


For those that have evaded all needles in the last few years:
https://www.youtube.com/watch?v=m2ugcRkwKdc


And for the vaccinated:
https://www.youtube.com/watch?v=m2ugcRkwKdc


--

Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T334a892124a7b0b8-Ma2b950c02fcebbaab9ffb2f6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: dall-e 2 for free man....free!

2022-10-17 Thread Alan Grimes via AGI

immortal.discover...@gmail.com wrote:
also insane. Searching "waifu" in the bar gave me lots like this one: 
https://lexica.art/?q=waifu=7020b34e-39bb-4c3f-a651-b6d8f3134a23


If you want an AGI, reverse the thing. Have it take camera input as a 
prompt, produce a "perfected" version of the fuzzy, pixelated camera 
input, and then, from that, obtain the textual description. That is how 
the brain works. You can't really see the raw feed from your eyes. Your 
eyes really suck: they only have moderately decent resolution near the 
center of the visual field and have a pretty big hole off to the side 
where the "optic disk" is. (Our eyes are built backwards from how any 
sane engineer would make them; the actual receptive cells point AWAY 
from the incoming light.) The wiring that should be behind the retina is 
in front of it and must leave the eye, which it does through the optic 
disk.
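A minimal sketch of that reversed pipeline, entirely my own toy 
construction: `perfect_image` and `describe_image` are hypothetical 
stand-ins for what would really be an img2img diffusion model and an 
image-captioning model.

```python
def perfect_image(noisy_frame):
    # Stand-in for an img2img diffusion model: "perfect" the fuzzy,
    # pixelated camera input. Here we merely clamp pixel values into
    # [0, 1]; a real model would denoise and fill in missing detail.
    return [min(max(px, 0.0), 1.0) for px in noisy_frame]

def describe_image(frame):
    # Stand-in for an image-captioning model producing the textual
    # description of the perfected percept.
    return f"scene with mean brightness {sum(frame) / len(frame):.2f}"

def perceive(noisy_frame):
    """Camera input -> perfected percept -> textual description."""
    percept = perfect_image(noisy_frame)
    return percept, describe_image(percept)

percept, caption = perceive([1.3, -0.2, 0.5, 0.5])
print(percept)  # [1.0, 0.0, 0.5, 0.5]
print(caption)  # scene with mean brightness 0.50
```

The point of the shape, not the stubs: the symbolic description is read 
off the cleaned-up reconstruction, never off the raw feed.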


Anyway, if you nudge the corner of your eye, this will defeat the image 
stabilization system (including the vestibular system, inner ear, and 
proprioception of the eye muscles), and allow you to get a glimpse of 
the actual performance of your eye.


Anyway, it's time to stop oooh-ing and aaah-ing over these pretty 
pictures and get AGI done. We are running out of time.


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T51cff95c88649d64-Ma2c744d3151dba8659f23cea
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] panic factor = 6

2022-10-08 Thread Alan Grimes via AGI
Ok, I've been eyeballing the new financial crisis for the last few 
months. I expect Deutsche Bank and/or Credit Suisse to go down RSN. This 
will compound the already ongoing stock market crash with a debt market 
implosion. The expected progression from there is that the already 
sliding eurostan currencies will collapse. Three days later the crisis 
will spread to the former united states of America; briefly the dollar 
will spike as people seek refuge, but the so-called "eurodollars" will 
start circulating, resulting in hyperinflation and the collapse of the 
dollar too. The Unseen Hand will wheel out the Central Bank Digital 
Currency for a time just like this, but it will fail. After that we will 
have good money again, and things will be progressively more and more 
awesome as our enemies won't have unlimited free money anymore.


Want to hear a joke? During the 2008 crisis, the Fed bankers explained 
that the cause of the crisis was that interest rates had been too low 
for too long. To solve that crisis they made interest rates much lower 
for much longer, and now we have to face the consequences of that!! HA 
HA. Funny, right?


Anyway, I really do have $750k worth of exposure to the Federal Reserve 
note that I'm really starting to panic over. I need to get this money 
into pork bellies or SOMETHING, and quick... =\ I'm posting to this list 
because it seems there are a bunch of money guys listening who might be 
able to help.


I've already bought [nondisclosed] quantities of physical gold and 
silver bars from Apmex, and now I'm hearing commentary on the radio that 
someone has been buying up their stock and their premiums have 
spiked... Oops. But surely there must be other buyers.


I think I figured out how to buy BTC but it seems I will only be able to 
buy $100k a day.


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T66647c2929a47af1-Mf797a5c822cd71e3ade945e2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Giovanni is wrong. It's not an anomaly.

2022-10-05 Thread Alan Grimes via AGI
Urk, the list had been reply-to-sender until recently. I didn't mean to 
send that to the list. The "to" field you get when you hit reply does 
not catch your eye when you think you are replying to the sender only.


Alan Grimes via AGI wrote:

stefan.reich.maker.of.eye via AGI wrote:

https://botcompany.de/images/1103171


I have $750k burning a hole in my pocket...




--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T72f18c9f18890236-Mdf4f12879a5ccba84a26c16f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Giovanni is wrong. It's not an anomaly.

2022-10-05 Thread Alan Grimes via AGI

stefan.reich.maker.of.eye via AGI wrote:

https://botcompany.de/images/1103171


I have $750k burning a hole in my pocket...

--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T72f18c9f18890236-Ma77615cd8b97b6fe0d05249d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Making a trading bot

2022-09-26 Thread Alan Grimes via AGI
Because my father was well vaccinated, I now have access to a 
startlingly large fortune. He was suffering from fairly extreme 
boomeritis and left me and my sister around a meeelion dollars each. 
I've been in a race against the stock market to clean out his mutual 
funds (and losing). At this point I've been mindlessly buying silver 
bars as fast as possible. I'm used to barely eking out an existence on 
ramen noodles, and now this. It feels like a new video game I'm playing 
where the approximate mechanic is to get access to large balances in 
random financial institutions, funnel them into a holding account as 
quickly as possible, and, finally, get them the hell out of FRNs before 
they go nuclear... At this point I have not changed my spending habits 
at all. My sister bought a new car; I still drive a 20-year-old Civic 
and have an Accord with about the same mileage coming to me from the 
estate...


I gotta get the rear axle on the Civic fixed; it makes a lot of noise 
under moderate braking.



--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta2a8c93c499a68be-M5139aed49f2463642f68635a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] fairly good anime

2022-08-27 Thread Alan Grimes via AGI
If you have a Crunchyroll account, "Sing a Bit of Harmony" just showed 
up a day or two ago. check it out.



--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T03262d7e041796fc-M92d5f49f0f7e9deea04690c7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] John Carmack raises $20M to develop AGI

2022-08-19 Thread Alan Grimes via AGI

Ref: a 5-hour (!!!) interview with John C.

https://www.youtube.com/watch?v=I845O57ZSy4


--

Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5bbfadf63743faf7-M2745174099558a8af4f4c59a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] I will now complete the actual image recognition

2022-06-13 Thread Alan Grimes via AGI
stefan.reich.maker.of.eye via AGI wrote:
> New 16-core super-PC is there. Work can continue.

WTF? I've been using a 24 core machine for 2 years, I could build a 64
core machine on a whim... Configurations of up to 256 cores are readily
available.

-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb7b132b2e3125b1e-Mb027b684e76548ef587f6669
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] good brain video

2022-05-28 Thread Alan Grimes via AGI
Anton normally does astronomy, but he just put out a pretty good brain
science video:

https://www.youtube.com/watch?v=-tRMWFMV4mw

-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tcdb5596643dc4fb4-M5c46808748b544a6154c2bbd
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] uh oh...

2022-05-13 Thread Alan Grimes via AGI
https://www.youtube.com/watch?v=L9kA8nSJdYw


-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T153394f89409ceb1-M84c04d199f60c5a3047ad5e7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] A window into the mind.

2022-04-29 Thread Alan Grimes via AGI
bleh,

My "on hiatus" status will have to remain in effect for maybe up to
another 18 months. (GRR!!! The world will remain foobar for quite a
while; I simply can't deal with the quantity of illusions and delusions
and bullshit that we're going through. I pray every day that we will
reach a point of Clarity and that the general perception of reality
starts trending in the general direction of the actuality...)

You know, you [edit edit edit] people should be able to have an
interesting conversation without me. =|

Here is something that I've been too lazy to investigate for far, far
too many years. It is possibly a revolutionary way to explore the inner
workings and capabilities of the human mind. What I'm talking about is
Music. Music is an ancient and very messy field of study that people
pursue just because. There is tons and tons of crap music. There is also
a surprisingly large amount of good music. This good music is
interesting because, when you get under the hood, the "chord
progressions" and tunes that people just threw together "by ear" or
"because it felt right" actually turn out to correlate with advanced
mathematical structures, algorithms, mathematical proofs, etc...
Sometimes I've had a tune get stuck in my head, and when I look at its
structure I realize that the damn thing is an infinite loop, and that's
why it doesn't conclude and go away. =P

Computer-generated music has been studied to some extent. To the best of
my knowledge they just use black boxes with the label "neural" on the
side and have them ape music-like sounds. These efforts have been
heralded to a much larger extent than they deserve. When it comes down
to it, none of them have produced anything decently good.

But human-equivalent AGI must have a human-equivalent ability to detect
and process (and generate) the algorithmic and mathematical information
encoded in music too.

I think the first step is to identify the "high music" that has
non-trivial algorithmic or mathematical structures and to try to reverse
out of it the general parameters of the types of problems the brain can
process subconsciously. From that it should be possible to discard ALL
approaches to AGI that are computationally inferior to that level of
processing. That should advance the field tremendously. =|
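As a toy illustration of the infinite-loop observation above (my own
construction, not a claim about any real music-analysis tool), a chord
progression can be modeled as a deterministic transition map; a tune
"concludes" only if the walk from its current chord ever reaches the
tonic, and otherwise it cycles forever:

```python
def resolves(transitions, start, tonic):
    """Follow chord -> chord transitions from `start`.
    Returns True if the walk reaches the tonic, False if it revisits
    a chord first (an infinite loop that never resolves)."""
    seen = set()
    chord = start
    while chord not in seen:
        if chord == tonic:
            return True
        seen.add(chord)
        chord = transitions[chord]
    return False

# A loop that never cadences: V -> vi -> IV -> V -> ...
loop = {"I": "V", "V": "vi", "vi": "IV", "IV": "V"}
print(resolves(loop, "V", "I"))     # False: the tonic is unreachable

# Same chords, but IV resolves home to I.
cadence = {"I": "V", "V": "vi", "vi": "IV", "IV": "I"}
print(resolves(cadence, "V", "I"))  # True
```

Real tunes are not this deterministic, of course; the point is only that
"stuck in your head" corresponds to a reachable cycle with no path to a
resolution.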

-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T852fb2a24d2a8eab-M14dc6b43464385b9c7f1d2fa
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Hardware

2022-03-28 Thread Alan Grimes via AGI
Not available to mere mortals, but an indication of Nvidia's state of
the art:

https://www.nvidia.com/en-us/data-center/h100/


-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te463b45bbf3c6a01-M75906dbab45561135f00c3fe
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Final statement about the CV19 festivities.

2022-03-01 Thread Alan Grimes via AGI
I define Festivities as **ANY** deviation from routine life on account
of an event or happening of any kind.

CV19 is a dead issue in that there is no longer any variability in the
inevitable outcome and therefore it is not an issue to be decided or
even influenced in any meaningful way anymore.

The rate of new 'vaccinations' has fallen to basically zero so that's
the end of it as far as I'm concerned.

The vaccinated are starting to show signs of waking up. They will
eventually ask who was it that has done this to them and then they will
take care of everything that needs to be taken care of.

https://www.youtube.com/watch?v=4FYNyX7pyfk

The only question that remains is how many people actually received the
deadly version of the shot, how many will succumb to that poisoning,
and over what time period.

The one question that is foremost in my mind is precisely how many
motherfuckers have to die before I can have a chance to have a life.
Seriously, how many fresh corpses will it take before someone looks at
my resume and does not say "well he's too male, too white, too
unemployed, too middle-aged, and too American, fuck him" and instead
says "Well, he didn't take the goddamn shot and he might be able to do
some work for us so why not give him a try?"

That's what I want to know and that's my last comment on this subject.

I still plan to wait until mid May or so to officially return to the
list. Hopefully some of the bullshit will have settled out by then.

-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T4fa02f1b630ba326-Mde53f6fa6862a8343224c469
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: The Unfriendly WIREHEAD

2022-02-27 Thread Alan Grimes via AGI
immortal.discover...@gmail.com wrote:
> If you missed TV, here's today's:
> https://www.aljazeera.com/news/2022/2/26/fighting-for-kyiv-ongoing-as-russia-vows-all-out-war-live-news
>

Wars always stir up incalculable quantities of bullshittery. Right now
we have a category 5 bullshitstorm on top of all of the other
bullshitstorms that have been rampaging around the world the last few
years.

The Great Philosopher, Turd Flinging Monkey:
https://www.bitchute.com/video/EqJbSK8DRh4l/
(I call him the Great Philosopher because he turned laziness into a
principle; he's a legend!)

CRP reporting from Kiev for some reason:
https://www.youtube.com/watch?v=WtN6C9xfzFo

The woo-lord, Clif High:
https://www.bitchute.com/video/FP8ZtEaliS2g/

Anyone remember this guy actually lying about anything?
https://www.infowars.com/posts/putin-speech-russia-had-no-other-option-but-to-launch-ukraine-invasion-because-west-deceived-us-about-nato-expansion/

-- 
Beware of Zombies. =O #EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td0be7bf411f8a211-Mbac2824d622b34b2faa9caba
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Common Knowledge.

2022-02-02 Thread Alan Grimes via AGI
There's a book that should be common knowledge to people on this list,
but I'm getting the feeling that it has been slipping into obscurity. It
is Engines of Creation by Drexler, first published around 1987 or so. If
you haven't read it already, look for it on-line.

I'm thinking about ending my hiatus early, (originally planned to end
mid-May...)

-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T659e51d2281da0fc-Mf46d88332374abee47acb9e1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] heads up: limited engagement movie:

2022-01-19 Thread Alan Grimes via AGI
https://www.angelikafilmcenter.com/mosaic/film/sing-a-bit-of-harmony

-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te56b99d873f944c8-M7498403978904fda7f5a29e4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] AGI deployment

2022-01-05 Thread Alan Grimes via AGI
My official hiatus continues, but I can't resist the urge to post. %P 
It remains puzzling to me how there can be so little evidence of active
cognition on this mailing list. I mean, I see someone have some vague
spark of a thought, then that single thought gets lazily tossed around
for the next six weeks and eventually fades off, and some time later
another little thought sparks off... Depressing.

Anyway, the problem of providing / getting your grubby mitts on advanced
AGI systems during the early/mid para-singularity period will hopefully
soon be entering relevance. While we can hope that revolutions in
hardware capability and production will be available a bit later in the
para-singularity period and the early post-singularity time, the people
with early access will have a number of distinct advantages, especially
with regard to steering their personal trajectories through the
singularity period and shaping the post-singularity culture.

While I am at a dire wealth disadvantage at the moment, all of us should
aspire to obtaining early access to AGI systems. For the sake of
discussion, let's sketch out what an early AGI system deliverable would
look like.

Consider:

https://www.se.com/ww/en/product/PFMICC075W1N12D/all-in-one-iso-container-75kw-12rack-inrow-dx-208v/?parent-subcategory-id=7570=62321-allinone-module=31146286253

Basically, you would load this onto a rail "well car", take it to an
integration facility, load it up with servers and accelerators and
stuff, and then hitch it up again and roll it out to wherever
electricity and real estate are cheap.

https://www.railpictures.net/viewphoto.php?id=637400

Side question: if you had one of those containers and an unlimited
budget other than the rack and power limitations of the module, what
equipment would you load into it?

Base CPU?
Network interconnect?
Storage solution?
GPUs?
TPUs?
Optical?
Quantum?   (link to actual product required..)


What ideas do you have for distributing AGI systems to the people? Do
you have ideas that will make the hardware requirements radically
cheaper than the headline-making machines of today? Eventually we would
want everyone to have a personal AGI. (Some evolutionary questions are
raised about the future of individuality...) But considering the layers
of granularity of our society, i.e., states, counties, towns,
neighborhoods, households, how would you offer AGI systems to people?

-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8b81e7588ed2cbad-M1a723eb985bbc0bf391d0be2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT surfs the web zlolz

2021-12-16 Thread Alan Grimes via AGI
immortal.discover...@gmail.com wrote:
> https://openai.com/blog/improving-factual-accuracy/
>
> I was awaiting this to happen yet. Here it begins.
>
> I can feel that 2029 date coming good now, like a yellow bright
> tasting cake.


https://www.youtube.com/watch?v=0pig3PbHyJY

-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T2451a036e13f64be-M173948398933a4b5ca44fd48
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] The UN weighs in on AI

2021-12-09 Thread Alan Grimes via AGI
https://en.unesco.org/artificial-intelligence

-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te369175b778aee68-M729aceba7ce3e57a402c2d6b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] drug name

2021-12-06 Thread Alan Grimes via AGI
stefan.reich.maker.of.eye via AGI wrote:
> It is all so freaking bizarre.
>
> However, on some level, I can't imagine actual deaths/massive injury
> not registering. It's just a question of how close it occurs to the
> person in question. Losing a person is information that cannot be ignored.

https://youtu.be/w4o4dG01pys

-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tfc4d42f7fb128a4f-M7ca9a9d29f6fb0c0d80b7f05
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Anyone else want to learn or teach each other GPT?

2021-11-28 Thread Alan Grimes via AGI
Generalized Parrot Trainer?

-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te7be912471271b1d-Mc3c5ef2ee51c8edd25871ad2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] drug name

2021-11-14 Thread Alan Grimes via AGI
I think the drug was actually "rendezavir"  << BAD   but "regeneron"
is pretty good.


-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tfc4d42f7fb128a4f-M6dbe9b16f07b815d314c725a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] another warning.

2021-11-14 Thread Alan Grimes via AGI
Just had a feeling that I need to warn you about something else.

If you are vaccinated then you have NOTHING TO WORRY ABOUT. Delete this
mail right now and enjoy what little is left of your life.

If you don't already have a deadly slow-acting poison in your system,
then you need to understand that most hospitals have been turned against
the human race too. =((( Your best bet is to stay at least 500 feet away
from anything resembling a hospital. If that is not an option, make
absolute damn sure they never inject you with a drug called
"Resveritrol" and DO NOT let them put you on a ventilator.

-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T324b080a251be9f5-Ma3b8d1eb2e72e2e78b74e9ce
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Hardware: latest from red company

2021-11-08 Thread Alan Grimes via AGI
https://www.amd.com/en/graphics/instinct-server-accelerators

Comment: Green company has been leaning heavily on their A100 product, I
wonder what their response will be.

-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tde7375714d719b06-M4f80ac904618901403558560
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Don't forget the galactic mantra

2021-10-01 Thread Alan Grimes via AGI
.

And for something that DOES NOT make you want to claw your ears out

https://www.youtube.com/watch?v=QKge6Ay9O4E

-- 

Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T415d2cc025210198-Mced38ff69f1bab9e7cffc4e6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Don't forget the galactic mantra

2021-10-01 Thread Alan Grimes via AGI
https://www.youtube.com/watch?v=WAhFBhCXbrg

-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T415d2cc025210198-M4376baadc33657385120cebb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Black Winter

2021-08-16 Thread Alan Grimes via AGI
I am on hiatus so therefore I am not actually posting to the list.

I really don't want to get emotionally or intellectually involved with
people, a great number of whom are going to be dead far too soon. =(

In the next 8-10 months there will be a great Parting of The Ways, where
the Quick will be separated from those who are ALREADY Dead.

My final warning to you is that the die has already been cast. If you
are one of the dead, there is no mortal man who can come to your aid. If
you are still among the living, each day is a new chance to be an idiot
or believe a new lie. Don't be stupid. Stay the fuck away from shit in
little glass vials!!!  Just party like it's 2019 and wait it out, that's
all you need to do. Do not believe the lies. Do not be pressured or
terrorized into throwing it all away!

The biggest problem will be holding on to your sanity. I want you to
open a new browser window, and leave it open for the next year. When
things get heavy, just bang your head and air-guitar/drum (as you
prefer) to this until you feel better.

https://www.youtube.com/watch?v=rY0WxgSXdEE

I'm sorry, peeps, I really am. I did what I could, I didn't wear the
mask, I didn't play along with the farce. I tried to warn you.

If the list still exists on the other side of this, I look forward to
getting back to AGI, I think the people alive then will be much more
focused on AGI where it has always been difficult to get people really
serious about making progress.

-- 
Beware of Zombies. =O 
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tace3f9aea35af378-M3d10204fa833f86af7edd1a6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Hardware

2021-07-03 Thread Alan Grimes via AGI
PAUSE HIATUS;

POST LINK:  https://lightmatter.co/

RESUME HIATUS;

-- 
Don't make this your epitaph: "I tuk da vakceen cuz I thout I wuz smert; I ded 
cuz I wuz dum." 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T2a90af180c5dbd58-Mbb6376f90734b64a5beda3d2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Response to Wheatley.

2021-06-28 Thread Alan Grimes via AGI
I feel I owe Wheatley a response, but I don't really have much to say. I
want AGI as much as anyone. I would love to be involved with an effort
to make it happen. There just isn't anything I can do on my own. I feel
that Wheatley is indulging in the delusion that he really is doing AGI
by working on text compression, because there isn't much of a barrier to
working on text compression: just a working C compiler and a collection
of texts to work on. So he works on that even though he has not
successfully argued that it is a viable proxy for AGI.

The Hour is very late at this point. =((( I am going to go on hiatus
from posting until at least late next year. I hope that you will still
be alive and in good health by then, willing to talk seriously about
doing AGI, and not satisfied with just making yourself feel like you
are doing great work, but actually wanting to get things done. If you
survive until next year: food is going to be scarce and difficult to
obtain in the 2024 timeframe. Be prepared.

-- 
Don't make this your epitaph: "I tuk da vakceen cuz I thout I wuz smert; I ded 
cuz I wuz dum." 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T04ba04a9735423b8-M9549dd48783d28aa2f3d5024
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] anyone need a 2700x?

2021-06-07 Thread Alan Grimes via AGI
I just retired the hardware in my gaming PC, I __THINK__ the motherboard
was flaking out and causing bluescreens and crashes.

The CPU was run under a crappy 240mm AIO cooler at 4.0 GHz. I think it
should still be good, hopefully. Rated for 3.0 GHz DDR4 RAM.

If anyone is still running anything older than a 2700x, the CPU is free
to members of this list for the asking, compatible with AM4 400 series
motherboards.

-- 
Don't make this your epitaph: "I tuk da vakceen cuz I thout I wuz smert; I ded 
cuz I wuz dum." 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc9a5862d8e6455ab-M58565fcc850d5eb965c16bb9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] equivelance and difference

2021-05-29 Thread Alan Grimes via AGI
immortal.discover...@gmail.com wrote:
> An AI trained on 10MB of text that finds patterns well versus a AI
> trained on 100TB of text that is dumb will have (at least in this
> example, probably exaggerating things just to make point clear) the
> same ratio of error in prediction, i.e. the smart AI will compress the
> 10MB to 2MB, the dumb AI will compress its 100TB to 20TB, it took
> 100TB to get a 20% ratio i.e. 20TB from 100TB. They are both as
> amazing, though in real life to skip making smart AI you'd need
> 10TB of text, not even trainable on in practical time.
=\

I see you have given up doing AGI and instead intend to be Mentifex's
successor. =\


-- 
The vaccine is a LIE. 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td818c1eb0b917f7d-Mdf4ef771dd08ea3c30c6c08c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] tonite's transhumanists party talk.

2021-05-23 Thread Alan Grimes via AGI
Hey, there's a project called "uplift" that has a proto-AGI system:

presentation: https://www.youtube.com/watch?v=6GxZQjNZF3c

Website: https://agilaboratory.com/


-- 
The vaccine is a LIE. 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T84e027537f526232-M1513feb2743b6920f6a0a1d1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: The conservative perspective.

2021-05-22 Thread Alan Grimes via AGI
immortal.discover...@gmail.com wrote:
> By programming just a dozen pattern finding mechanisms, you build a
> general purpose base that can build from there up on its own and make
> thousands of more precise rules.

Which is why you are insane!

You can't make predictions that are either useful or interesting that way,

AND,

we need the AGI to be capable of the full range of human activities,
including perception, experimentation, learning, behavior, and hundreds
of others. Prediction can handle a tiny corner of perception but can't
account for multiple modalities, binding, etc...

-- 
The vaccine is a LIE. 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T95f11a183fb9b6e1-M062d5a518d0fa8cf69896257
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: The conservative perspective.

2021-05-22 Thread Alan Grimes via AGI
immortal.discover...@gmail.com wrote:
> I am on this list, I am completely sane

Ha! You think text compression is a useful avenue for reaching AGI! =P

-- 
The vaccine is a LIE. 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T95f11a183fb9b6e1-Me24c40af297b1f5e5997229b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] The conservative perspective.

2021-05-22 Thread Alan Grimes via AGI
Hey, since nobody fits all three of the following categories:

A. Is functionally sane.
B. Is serious about AGI.
C. Is on this list.

Let's at least talk about something interesting; here's the conservative
reaction to AI:

https://www.youtube.com/watch?v=G-JKqmf3TyM

-- 
The vaccine is a LIE. 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T95f11a183fb9b6e1-M0402c3f934d0bdf3a6f70d81
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Minecraft.

2021-05-13 Thread Alan Grimes via AGI
Yeah, I have wanted an AGI hacking platform for many years. I'm glad
there is finally some interest.

Smellysoft used to have a system called Malmo that allowed you to
connect AI agents to Minecraft, but it was lagging the game version by
several editions. =\ I never got it to work, though.

I think I had some links in my other web browser, but the damn thing was
screwed by an update a few weeks ago and hasn't been fixed yet; using
two other obscure browsers rn that are barely holding together. =\

Anyway, talking about AGI development platforms such as game engines is
10,000,000 times more useful than text compression.

-- 
The vaccine is a LIE. 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta590546bdcf6654a-M34638d8ad79f6e58276f30ce
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] using NNs to solve PDEs

2021-04-20 Thread Alan Grimes via AGI
Looks kewl. I wouldn't tend to trust them very far, but it sounds like a
great way to obtain priors for more conventional methods to squash out
the last little bits of epsilon you don't want.

Bill Hibbard via AGI wrote:
> Interesting article:
> https://www.quantamagazine.org/new-neural-networks-solve-hardest-equations-faster-than-ever-20210419/
>
>
> Points to a couple arxiv papers:
> https://arxiv.org/abs/1910.03193
> https://arxiv.org/abs/2010.08895
>


-- 
The vaccine is a LIE. 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T2ce9391dcef2fcc9-M883678c1d8366eca6456c967
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Gazelle AGI Technical Pitch

2021-04-12 Thread Alan Grimes via AGI
immortal.discover...@gmail.com wrote:

[[###]]

I agree with all of Comrade Wheatley's comments here. Furthermore, I
feel the level of utter stupidity rises to the level of a capital offense.
Are we doing burning at the stake or tarring and feathering? In any
event, the Joker in the classic Batman TV series always had such
wonderful tortures...

CYC --> NO QUARTER!!! KILL THEM ALL!!!

https://www.youtube.com/watch?v=aARaYjgm_rA

-- 
The vaccine is a LIE. 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T2e6144b49eda1e2e-Mcdd8cf0dc6a0c103c77401b3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] got around to checking my RSS feeds, here's a hit:

2021-04-06 Thread Alan Grimes via AGI
https://wba-initiative.org/en/18335/

-- 
The vaccine is a LIE. 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T613967b62816f1af-Mf0cabe57237ffabd23ed4326
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] The most desperately needed AI breakthrough.

2021-03-26 Thread Alan Grimes via AGI
Random video in my feed that reminded me I need to post here:
https://www.youtube.com/watch?v=gvjCu7zszbQ

Come to think of it, I have a growing pile of unfinished drafts that I
had been composing for this list, all new material, but where I was
either unsure of my premise or didn't have enough material to present
the idea the way I like to, or just lazy... =\

[[ brief rundown of other posts in my drafts folder: P-zombies and
facing the problem of consciousness, an attempt to look at the AGI
problem from the perspective of pure computer engineering, and a
brainstorm about obtaining nano-scale computing elements using
crystals...]]

If this post ever sees the light of day, it will talk about the major
revolution desperately needed in DNNs right now and argue that point.

https://heartbeat.fritz.ai/deep-learning-has-a-size-problem-ea601304cd8

The excessive size and hardware requirements of contemporary AI is
basically driven by a number of factors, first there's the extreme
demand for AI and AGI technologies, so we have this one somewhat
good-ish learning algorithm that has become the proverbial hammer that
makes everything look like a nail. This has given rise to an industry
around reaching higher and higher performance through expanding the
scale of DNNs using ever more hardware.

One school of thought seems to be that "Well we are still only 1/100th
the scale of the actual biological brain, we just need to scale up".
No... Not at all. WROONG!!! =P

What is going on is we are trying to re-build your desktop computer with
a feed-forward network of NAND gates... try to imagine that. While it is
mildly breathtaking what has been accomplished with the current
paradigm, the approach has hit a brick wall.

WE NEED TO CLIMB THE CHOMSKY HIERARCHY, PEEPS!

MUST HAPPEN

So what will this look like? We need to solve the cortical column
problem. We need a trainable sub-net of some reasonable size and
complexity that can be tiled, and a meta system that can solve tasks
given to it by recruiting sub-sets of these columns and operating
ITERATIVELY using those columns until the problem is solved. We can
continue to use some of the existing learning algorithms but the
meta-system needs a learning solution too. I know a lot of you just post
reflexively, but actual human beings are supposed to stop and think
about things AND THEN post. That is what we need to do: instead of a
feed-forward network, we need a system that can ruminate properly.
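As an illustration only, the recruit-and-iterate control structure described above can be sketched in a few lines of Python. Everything here is hypothetical: `Column` stands in for a trainable sub-net (it's just a fixed random map, not a learner), and the meta-system is reduced to a greedy recruiter that applies one column per step until the state is close enough to the target. It sketches the control loop, not the learning algorithms.

```python
import math, random

random.seed(0)
DIM = 8

class Column:
    """Stand-in for a trainable cortical-column sub-net: a fixed random
    linear map followed by a squashing nonlinearity."""
    def __init__(self):
        self.w = [[random.gauss(0, 1 / math.sqrt(DIM)) for _ in range(DIM)]
                  for _ in range(DIM)]

    def apply(self, x):
        return [math.tanh(sum(wij * xj for wij, xj in zip(row, x)))
                for row in self.w]

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def solve_iteratively(columns, x, target, max_steps=50, tol=1e-2):
    """Toy meta-system: each step, recruit the column whose output moves
    the working state closest to the target; iterate until close enough
    or out of steps. Returns the final state and the step count."""
    state = x
    for step in range(max_steps):
        outs = [c.apply(state) for c in columns]
        errs = [dist(target, o) for o in outs]
        best = errs.index(min(errs))
        state = outs[best]
        if errs[best] < tol:
            break
    return state, step + 1
```

The point of the sketch is the shape of the computation: the same tiled sub-nets get reused across iterations, so depth comes from time rather than from stacking ever more layers.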

Such a system would be radically more efficient than existing systems
just as your computer which operates sequentially is radically better
than a feed-forward network of NAND gates.

AGI is still a few steps beyond that but the goal posts are in sight.
Ten hut! hut! HIKE!!

-- 
The vaccine is a LIE. 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T15fa07153b838749-M8409def430ea8c5bec44bb15
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] random disorganized thoughts about compression.

2021-03-06 Thread Alan Grimes via AGI
BLARGH!!

U know Lord of the Rings, Return of the King where the Eye of Sauron is
looking over the wastes of Mordor for Frodo? That's what the HOA is
doing to me over a bunch of outrageously petty subjective crap and is
now pulling horseshit like sending me threatening letters using the most
expensive services the US Postal Service offers and then billing me $50
for those services on top of all other fines and penalties. I'm in the
market for a lawyer. I'm really stressed out rn, so I'm going to take my
mind off of it by taking out my frustration on all the superficial
wheel-spinning I'm seeing about compression on the list these days.


Let's look at some pictures; my apologies if you can't properly perceive
them, as they are a test of common perceptual deficiencies.

https://www.aoa.org/healthy-eyes/eye-and-vision-conditions/color-vision-deficiency?sso=y

Ok... What is a perception anyway?

A perception of a scene, or the equivalent in any other modality, can be
expressed as follows

percept = SIGMA( F_1(), F_2(), F_3(), ..., F_n() )

Where F() is a function which yields a pattern which, when subtracted
from the signal, results in a simpler, less structured signal that does
not contain F() or any echo thereof, ie a compressed signal, or less
information-dense signal.


Take the figure in the second row, second column.

Here are some functions which compress that figure:

F() = "3"
F() = Circle
F() = dot
F() = orange
F() = green
[and quite a few others...]
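The subtract-the-pattern idea can be sketched as a toy greedy loop in the spirit of matching pursuit (an existing technique, named here as an analogy; the patterns and signal below are made-up vectors, not the figures from the link):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def decompose(signal, patterns, max_terms=10, tol=1e-6):
    """Greedily subtract the best-matching (unit-norm) pattern from the
    signal; each subtraction leaves a simpler residual, and the recovered
    (pattern index, coefficient) pairs play the role of the F()s."""
    residual = list(signal)
    terms = []
    for _ in range(max_terms):
        coeffs = [dot(residual, p) for p in patterns]
        k = max(range(len(patterns)), key=lambda i: abs(coeffs[i]))
        if abs(coeffs[k]) < tol:
            break  # nothing structured left in the residual
        terms.append((k, coeffs[k]))
        residual = [r - coeffs[k] * p for r, p in zip(residual, patterns[k])]
    return terms, residual
```

With an orthonormal pattern set, the loop peels off one structured component per pass and the residual shrinks toward unstructured leftovers, which is exactly the role the F()s play above.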

In computing terms, we are de-serializing the perception and turning it
into a structured hierarchy of concepts. The brain actually renders, in
real time, everything you see! The human eye is pretty crappy as a
camera in all respects, but the brain only uses it to sample the
environment and construct the high-resolution world model that we,
incorrectly, think is coming from our eyes! The eyes are only the
source of a feedback signal!

So we are >>> NOT <<< looking for a compression algorithm!!! We
are looking for a DECOMPRESSION algorithm, but one where we feed it
garbage and then use a feedback amplifier to error-correct the output
garbage to discover what the input compressed data should be! =P
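A minimal sketch of that decompression-with-feedback loop, under heavy assumptions: `G` below is an arbitrary toy renderer (not any real decompressor), and the "feedback amplifier" is reduced to a simple proportional correction of the code:

```python
def G(code):
    """The 'decompressor': renders a code into a signal.
    A toy elementwise map stands in for a real generative model."""
    return [2.0 * c + 1.0 for c in code]

def infer_code(signal, steps=200, lr=0.2):
    """Start from a garbage code, render it, compare the rendering to the
    observed signal, and feed the error back to correct the code."""
    code = [0.0] * len(signal)          # initial garbage guess
    for _ in range(steps):
        rendered = G(code)
        error = [s - r for s, r in zip(signal, rendered)]
        # feedback: nudge the code in the direction that shrinks the error
        code = [c + lr * e for c, e in zip(code, error)]
    return code
```

For this toy `G` the loop converges to the code whose rendering matches the observed signal, i.e. it recovers "what the input compressed data should be" purely from the output error.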

If you don't understand what I just said, you aren't trying nearly hard
enough, go away... If you can, however, then you know what you need to do.

BLARG

-- 
The vaccine is a LIE. 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5bda8b8be25887f8-Mca29526ae9318cea755647c6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] The AGI underground.

2021-03-02 Thread Alan Grimes via AGI
Greetings, Comrades.

Welcome to what appears to be post-democratic America/planet Earth.

What, you are not a comrade?

That means two things.
A. You are my friend.
B. You are a Kulak and will be rounded up and/or shot at some point.

I'm still very sad that Wheatley proved himself to be a very good
Comrade by voting for the Dear Leader.

Look, things are going to get super cereal (read "serious") in the coming
months. If Trump doesn't manage to fix this somehow before May, there
will be a civil war.

What seems to be happening, on the surface, appears to be a communist
takeover, which is sufficiently terrible all by itself. On the next
level down (far from the final), what seems to be happening is that
China wants the USA in the history books and is rushing a communist
takeover that will finish it off either directly or by provoking the
general population into attempting an armed revolution.

Whatever time we had under relative normalcy is rapidly running out.

Those of us (probably just about all of us) not already in a major
institutional AGI effort basically have no hope of ever getting hired
by one. While still theoretically possible, it is pretty safe to say
that it won't happen. You should be good and angry about that, but even
that is useless. Unfortunately, the resources that need to be mobilized
for a serious AGI effort are far beyond any individual's reach.

AGI is basically infeasible as a startup business, and the
opportunity to start any profitable independent business has pretty
much passed. (search: Marek Rosa )

Business conditions will continue to get worse until the political
situation is fixed. =\

As things start getting worse, the Elites will do anything they can
think of to remain in power.

These may include unsealing hidden/withheld/secret technologies to try to
appease the increasingly restless masses, or waging war on opposition
parties. The former will be so drastically nerfed that it will take
supreme-level hacking to make them actually useful in any meaningful
sense. In the latter case, the problem will be capturing, in a salvageable
state, the weapon system the enemy had deployed to kill you, and
successfully repurposing it for your own ends.

The Q calendar seems to point to this coming April as to when things
will get cleaned up. Any organization that claims to be patriotic would
never allow the people to continue to suffer so many months after
electing a genuine president. So by the end of April we will know
whether Q was successful or whether they were a fraud or had been
defeated by the Enemy.

Look, if you think I sound crazy, FINE, just keep your eyes peeled. Is
that too much to ask?

-- 
The vaccine is a LIE. 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tcb83b4b55a86815b-M5a706d5451ef8a490c3b7649
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] is anyone interested in explaining AGI?

2021-01-23 Thread Alan Grimes via AGI
Matt Mahoney wrote:
> What problem are you trying to solve with AGI or ASI?

All Problems.

> I can think of two. One is automating human labor to save $90 trillion
> per year. That was my focus. The second is to extend life by building
> robots that look and act like you.

That's the Terasem proposal.
It's bullshit.

But here's the website...
https://terasemmovementfoundation.com/


-- 
The vaccine is a LIE. 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T958bb5810b81761c-M9fb280c599e20fbdf41d284d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Comrade Biden.

2021-01-20 Thread Alan Grimes via AGI
Now that Comrade Biden has been installed as the puppet dictator of the
former United States, will someone who supported and/or voted for
Comrade Biden please answer the following two basic questions.

1. What tangible negative effect did you, personally, actually
experience as a result of Trump being president for the last four years?

2. What positive effect will policies enacted by Comrade Biden have on
your own personal life or situation?

-- 
The vaccine is a LIE. 
#EggCrisis 
The Great Reset
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T54fa29d45f9b7e1e-Mc509f03554f2750cbb2de170
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] How big is the ante?

2020-12-30 Thread Alan Grimes via AGI
While we are waiting to learn the fate of western civilization, which
seems like it will be decided on January 6...   I mean in the end, God
wins, but the open question is whether he'll be declaring this planet a
loss and starting somewhere else. It looks like that question will be
answered on the 6th. =\

Anyway, all the cofe (<< letter optimal spelling. =P)  table chit-chat
about AI is really starting to wear on me... Ok, the word "starting"
there was a total lie... Anyway...

Let's consider the corollary to Moore's law that attempts to approximate
the cost of a successful AGI project as a function of time.

COST =   K * e^(-t * S)

Where K and S are scaling constants...
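For concreteness, here is a toy evaluation of that curve with made-up constants: K is anchored to a hypothetical ~$10 trillion cost at the 1951 starting gun mentioned below, and S is chosen so the cost halves every decade. Nothing here is from the post except the formula itself.

```python
import math

K = 10e12                  # assumed cost in dollars at t = 0 (taken as 1951)
S = math.log(2) / 10.0     # assumed decay rate: cost halves every 10 years

def agi_cost(year):
    """Evaluate COST = K * e^(-t * S) for a given calendar year."""
    t = year - 1951
    return K * math.exp(-t * S)
```

Under these illustrative constants the curve drops from trillions at the start toward the "ridiculously inexpensive" end of the range as the decades pass; the real K and S are, of course, unknown.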

Let's say someone in the future sketched out the outline of an AGI on a
napkin, crumpled it up, and tossed it into a pocket of anti-time where
the thing ended up somewhere you could recover it.

Even though what comes through is just a sketch, it would save you,
potentially billions of dollars and decades of time by allowing you to
focus in on exactly what needs to be done. Even still, the machine needs
to be built, programmed, raised, edjoomakayted, deployed, etc... So the
cost would STILL be on the order of $100 million. Add back in the trial
and error, the silly human academic politics, and all the other
nonsense, and you are looking at a billion-dollar program.

Let's look at the silly extremes to understand the mechanisms behind this
proposed cost curve.

Ooga Booga the caveman would need to pay for a technological
civilization, science, math, neuroscience, a semiconductor foundry,
operating systems and compilers, data sets, algorithms... So several
trillions of dollars of value.

While I hope we don't have street bums in 2060, let's take a street bum
dumpster-diving for hardware, stealing electricity, downloading free
libraries, and then setting it all up in just such a way that hadn't been
done previously, and it miraculously works... So basically all the
capital investments of our civilization will have paid off to the point
where it becomes ridiculously inexpensive.

In terms of what can actually happen, we have the key date of 1951 when
Turing fired off the starting gun on the entire field, roughly marking
the point at which a practical technological roadmap became visible.
While you could find examples of computers going all the way back to
Babbage, Turing was the one who most clearly expressed the mathematical
foundations of computing and paved the way to the first truly general
computers.

Since then, computers have become vastly superhuman in just about any
specific capacity you could name. For some reason, there are still news
articles written about computers beating humans in some specific, well
defined domain. Some even call this progress. =| People with enough
experience have come to realize that this isn't even the correct problem.

What is needed is a qualitative leap. Consider the point in time where
video games went from painted still images to real-time 3D rendering.
The new machines had no capabilities that the old ones didn't, on a
theoretical level, but were now powerful enough that a qualitative leap
was possible. In this case, we need a qualitative leap in how software
comes to be, we need a kernel with which the computer can learn its
software, and not just the simple functions that neural networks are
starting to do. Shakespeare used the word "apprehension" to describe
this quality. I think this word means the ability to capture the
semantic structure, either physical or abstract, of a thing by creating
terms and concepts. Do that in the general case and you're done.


-- 
The vaccine is a LIE. 
#EggCrisis 
The Great Reset
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tabc940322ac5ad2c-M3a3a99d321aca63e8be99a2b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] FYI: ai.gov

2020-12-22 Thread Alan Grimes via AGI
There is an AI.gov site out there. I skimmed much of it and the sense I
get from it is that it was templated off of an earlier document written
in 1980 about the personal computer revolution.

They're trying to get ahead of what they see as an important trend but
the bottom line is they don't get it.

Furthermore, we don't have a firm grasp on how far the design space of
AGI is from the design space of systems that are called AI or deep
learning today. So crossing the "who's getting money" table with who's
actually close to AGI could be a useful barometer for how close we are
getting to the singularity.


-- 
The vaccine is a LIE. 
#EggCrisis 
The Great Reset
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3e69a5c2413c8560-Maf9a7c234a558e724423f334
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Preprint: "The Model-less Neuromimetic Chip and its Normalization of Neuroscience and Artificial Intelligence"

2020-12-21 Thread Alan Grimes via AGI
James Bowery wrote:
> It's even worse when you consider the fact that the empirical evidence
> for psi is overwhelming and there is a quasi-religious opposition to
> recognition of this fact in neuroscience let alone AGI theory.

Put the evidence on the table.

I can't find anything reliable on psi on-line, my personal experience
with the subject is basically null except for some pretty weak
precognition, far too weak to make claims.

Stick a meter on it or it doesn't exist!

-- 
The vaccine is a LIE. 
#EggCrisis 
The Great Reset
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf319c0e4c79c9397-M077059fadf2e3db4b97406db
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The real/artificial world, continuous or discrete?

2020-11-23 Thread Alan Grimes via AGI
Mohammadreza Alidoust wrote:
> Is the real world continuous or discrete in time and also in
> structure? How do you see the real world? How do we, humans, perceive it?
> And how these contexts extend to artificial worlds?

While at least some phenomena in the world have been proven to be
discrete at infinitesimal scales, at human scale, things are
essentially analog.

By infinitesimal I mean look up Avogadro's number and put it in the
denominator. =P

-- 
The vaccine is a LIE. 
#EggCrisis 
The Great Reset
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T181e21f4aba061f3-M2ae96deda187195b6002eeed
Delivery options: https://agi.topicbox.com/groups/agi/subscription

