Synergy or win-win between my work and the project, i.e. if the project
dovetails with what I am doing (or has a better approach). This would require
some overlap between the project's architecture and mine, as well as a
clear vision and explicit 'clues' about deliverables/modules
You may be assuming more flexibility in the securities and tax regulations
than actually exists now. They've tightened things up quite a bit over
the last ten years.
I don't think so. I'm pretty aware of the current conditions.
Equity and pseudo-equity (like incentive stock options -- ISOs)
I think he's just saying to:
-- make a pool of N shares allocated to technical founders. Call this the
Technical Founders Pool.
-- allocate M options on these shares to each technical founder, but with a
vesting condition stipulating that only N of the options will ever vest
(toy sketch below).
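If I'm reading that right, here's a toy sketch of the mechanism (Python,
made-up numbers; the pro-rata cap on vesting is my own assumption about how
the "only N ever vest" condition would be applied):

# Toy sketch of the Technical Founders Pool (hypothetical numbers).
# N shares back the pool; each founder holds M options, but vesting is
# capped so that no more than N options ever vest across all founders.

POOL_SHARES = 100_000                  # N: shares reserved for the pool
grants = {"alice": 60_000,             # M options granted per founder;
          "bob": 60_000,               # 150,000 options in total chase
          "carol": 30_000}             # only 100,000 underlying shares

total_granted = sum(grants.values())
scale = min(1.0, POOL_SHARES / total_granted)   # pro-rata cap (my assumption)

vested = {name: int(m * scale) for name, m in grants.items()}
assert sum(vested.values()) <= POOL_SHARES
print(vested)   # {'alice': 40000, 'bob': 40000, 'carol': 20000}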
One possible method of becoming an AGI tycoon might be to have the
main core of code as conventional open source under some suitable
licence, but then charge customers for the service of having that core
system customised to solve particular tasks. The licence might permit
use of the code for
Mark, have you looked at phantom stock plans? These offer some of the
same incentives as equity ownership without giving an actual equity
stake or options, allowing grantees the chance to benefit from
appreciation in the organization's value without the owners actually
relinquishing ownership.
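For what it's worth, a minimal sketch of how an appreciation-only phantom
stock payout works (Python, made-up numbers; real plans vary and this is
not legal or tax advice):

# Appreciation-only phantom stock: the grantee is paid cash equal to the
# growth in per-unit value since grant; no actual shares change hands.

def phantom_payout(units, value_at_grant, value_at_payout):
    """Cash owed to the grantee at a payout event."""
    appreciation = max(0.0, value_at_payout - value_at_grant)
    return units * appreciation

# Example: 10,000 units granted at $1.00/unit, paid out at $5.00/unit.
print(phantom_payout(10_000, 1.00, 5.00))   # 40000.0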
provided that I
thought they weren't just going to take my code and apply some licence
which meant I could no longer use it in the future.
I suspect that I wasn't clear about this . . . . You can always take what is
truly your code and do anything you want with it . . . . The problems
start
But how do you add more contributors without a lot of very contentious
work? Think of all the hassles that you've had with just the close-knit
Novamente folk (and I don't mean to disparage them or you at all) and then
increase it by some number (further complicated by distance, difference
Mark, have you looked at phantom stock plans?
Keith,
I have not, since I was unaware of them. Thank you very much for the
pointer. I will investigate. (Now this is why I spend so much time
on-line -- If only there were some almost-all-knowing being that could take
what you're trying to
On 04/06/07, Mark Waser [EMAIL PROTECTED] wrote:
One possible method of becoming an AGI tycoon might be to have the
main core of code as conventional open source under some suitable
licence, but then charge customers for the service of having that core
system customised to solve particular
but I'm not very convinced that the singularity *will* automatically happen.
(IMHO, the nature of intelligence implies it is not amenable to
simple linear scaling -- likely not even log-linear
I share that guess/semi-informed opinion; however, while that means that I
am less
One possible method of becoming an AGI tycoon might be to have the
main core of code as conventional open source under some suitable
licence, but then charge customers for the service of having that core
system customised to solve particular tasks.
Uh, I don't think you're getting this. Any
Mark Waser writes:
P.S. You missed the time when Eliezer said at Ben's
AGI conference that he would sneak out the door before
warning others that the room was on fire :-)
You people making public progress toward AGI are very brave indeed! I wonder
if a time will come when the
On Jun 4, 2007, at 4:35 AM, Mark Waser wrote:
These kinds of things are pretty strictly regulated now, and
waiting until the end to contract a stake to your contributors
would be a disaster for them in terms of both their return and their
tax liability,
If you're waiting until the end to
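To make the tax point concrete, a toy comparison (grossly simplified US
treatment, made-up numbers and assumed 2007-ish rates; not tax advice):

# Early grant: shares received while nearly worthless (basis ~ $0), so the
# gain is long-term capital gains at sale. Late grant: shares received at
# full value, taxed immediately as ordinary income -- possibly before the
# grantee has any cash to pay the bill.

SHARES = 10_000
SALE_PRICE = 10.00       # per-share value at exit (assumed)
LTCG_RATE = 0.15         # long-term capital gains rate (assumed)
ORDINARY_RATE = 0.35     # ordinary income rate (assumed)

early_after_tax = SHARES * SALE_PRICE * (1 - LTCG_RATE)
late_after_tax = SHARES * SALE_PRICE * (1 - ORDINARY_RATE)
print(early_after_tax, late_after_tax)   # 85000.0 65000.0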
The difference is significant: the real return between the best and worst
can easily be 2x.
Given that this is effectively a venture capital moon-shot as opposed to a
normal savings plan type investment, a variance of 2x is not as much as it
initially seems (and we would, of course, do
On Jun 4, 2007, at 8:07 AM, Mark Waser wrote:
(Depending on your specific type of interest in a company, an
argument can be made that warrants can be more valuable than
equity.)
Warrants have the same control problems as options do -- magnified
by the fact that they are transferable.
On 6/4/07, Bob Mottram [EMAIL PROTECTED] wrote:
[...] Judging by the volume of text generated so far
on this subject I expect that anyone joining this sort of venture will
waste a lot of their mental energy determining precisely who owns what
and arguing over the details of the mechanism for
On 04/06/07, Derek Zahn [EMAIL PROTECTED] wrote:
I wonder if a time will come when the personal security of AGI researchers or
conferences will be a real concern. Stopping AGI could be a high priority
for existential-risk wingnuts.
I think this is the view put forward by Hugo De Garis. I
But you haven't answered my question. How do you test if a machine is
conscious, and is therefore (1) dangerous, and (2) deserving of human rights?
Easily: once it acts autonomously, not on your directly given goals and
orders, and begins generating and acting on its own new goals.
Now, all we need to do is find 2 AGI designers who agree on something.
On 6/5/07, Bob Mottram [EMAIL PROTECTED] wrote:
I think this is the view put forward by Hugo De Garis. I used to
regard his views as little more than an amusing sci-fi plot, but more
recently I am slowly coming around to the view that there could emerge
a rift between those who want to build
One more bite:
Locus Solum: From the rules of logic to the logic of rules by
Jean-Yves Girard, 2000.
http://lambda-the-ultimate.org/node/1994
On 6/5/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
Speaking of logical approaches to AGI... :-)
http://www.thinkartlab.com/pkl/
Is there space within the charity world for another one related to
intelligence but with a different focus to SIAI?
Rather than specifically funding an AGI effort or creating one in
order to bring about a specific goal state of humanity in mind, it
would be dedicated to funding a search for the
The benefits of forgetfulness: smaller search spaces mean faster recall
http://arstechnica.com/news.ars/post/20070604-the-benefits-of-forgetfulness-smaller-search-spaces-mean-faster-recall.html
j
Using a non-existent AGI to rate contributions... is not a realistic idea.
Ok, I'll bite. Why not?
Mark Waser wrote:
P.S. You missed the time when Eliezer said at Ben's AGI conference
that he would sneak out the door before warning others that the room was
on fire :-)
This absolutely never happened. I absolutely do not say such things,
even as a joke, because I understand the
On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:
Using a non-existent AGI to rate contributions... is not a realistic
idea.
Ok, I'll bite. Why not?
It seems that you're just using the promise that there'll be a future AGI
(and so presumably credits can be assessed more objectively, which I
Hi Mark,
Your brain can be simulated on a large/fast enough von Neumann architecture.
From the behavioral perspective (which is good enough for AGI) -- yes,
but that's not the whole story when it comes to the human brain. In our
brains, information not only is and moves but also feels. From
my