Mark,

On 4/13/08, Mark Waser <[EMAIL PROTECTED]> wrote:
>
>  >> I then asked if anyone in the room had a 98.6F body temperature, and
> NO ONE DID.
>
> Try this in a room with "normal" people.
>

~3/4 of the general population reach ~98.6F at some point during the day. The
remaining ~1/4 of the population have a varying assortment of symptoms,
generally from the list of "hypothyroid symptoms," even though only about 1/4
of those people have any thyroid-related issue. Then look at the patients who
enter a typical doctor's practice: there it is about 50% each way. Then look
at the patients in a geriatric practice, where typically NONE of the people
reach 98.6F at any time during the day.

> You'll get almost the same answer.  98.6 is just the Fahrenheit value of a
> rounded Celsius value -- not an accurate gauge.
>

Wrong.  Healthy people quickly move between set points at ~97.4F, ~98.0F,
and 98.6F. However, since medical researchers aren't process control people,
they have missed the importance of this "little" detail.
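For anyone who wants to check the arithmetic behind the "rounded Celsius value" claim and the set points above, here is a minimal Python sketch (the function names are mine; the temperatures are just the ones quoted in this thread):

```python
def c_to_f(celsius):
    """Standard Celsius-to-Fahrenheit conversion."""
    return celsius * 9.0 / 5.0 + 32.0

def f_to_c(fahrenheit):
    """Standard Fahrenheit-to-Celsius conversion."""
    return (fahrenheit - 32.0) * 5.0 / 9.0

# The textbook 98.6F is exactly the conversion of a rounded 37C:
print(round(c_to_f(37.0), 2))   # 98.6

# 96.8F is exactly 36C, i.e. one whole Celsius degree below 37C:
print(round(f_to_c(96.8), 2))   # 36.0

# The claimed set points, expressed in Celsius:
print([round(f_to_c(f), 2) for f in (97.4, 98.0, 98.6)])  # [36.33, 36.67, 37.0]
```

Note that 1.8F per 1C is why a "two degrees Fahrenheit low" reading corresponds to only about 1.1C in Celsius terms.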

> My standard temperature is 96.8 -- almost two degrees low -- and this is
> perfectly NORMAL.
>

Thereby demonstrating the obsolescence of your medical information.

NOW I understand! Simply resetting someone from a 97-something temperature to
98.6F results in something like another ~20 IQ points. People usually report
that it feels like "waking up," perhaps for the first time in their entire
lives. I can hardly imagine the level of impairment that you must be working
through. NO WONDER that you didn't see the idiocy of making your snide
comments.

> Any good medical professional understands this.
>

Only if they have gray hair.

This all comes from an old American Thyroid Association study that was
published in JAMA to discredit "Wilson's Thyroid Syndrome" (now Wilson's
Temperature Syndrome, which has since been largely discredited for other
reasons) that my article references. There, many healthy people had their
temperatures taken at 8:00AM, and they found three groups:
1.  People who were ~97.4F
2.  People who were ~98.6F
3.  People who were somewhere in between.

However, if you take a healthy person and plot their temperature through the
day, you find that they sleep at 97.4F, and pop up to 98.6F sometime during
the first 3 hours after waking up. In short, the ATA study was ENTIRELY
consistent with my model and observations. However, inexplicably, the
authors concluded that people don't have any set temperature, without
providing any explanation as to how they reached that conclusion.

However, YOUR temperature is REALLY anomalous and WAY outside the range of
the ATA's study, and possibly consistent with serious hypothyroidism. Have
you had your TSH tested yet? If not, then fire your present incompetent
doctor and find a board-certified endocrinologist.


>  Don't criticize others for your assumptions of what they believe.
>

Why not, when I have read the articles, tested dozens of healthy (and many
more unhealthy) people myself, and seen, in light of the observable facts,
that some conventional medical dogma absolutely MUST be wrong?

Please, please get your temperature fixed before making any more snide
postings here. I find your snide comments to be painful, and I strongly
suspect that you too will see the errors of your ways and correct them when
you finally "wake up" as discussed above.

Steve Richfield
==================

>  ----- Original Message -----
> *From:* Steve Richfield <[EMAIL PROTECTED]>
> *To:* agi@v2.listbox.com
> *Sent:* Sunday, April 13, 2008 4:42 PM
> *Subject:* Re: [agi] Comments from a lurker...
>
>
> Mike,
>
> On 4/12/08, Mike Tintner <[EMAIL PROTECTED]> wrote:
> >
> >  Steve: If you've
> > got a messy real-world problem, you know little, if you have an
> > algorithm giving the solution, you know all.
> >
> > This is the bit where, like most, you skip over the nature of AGI -
> > messy real-world problems. What you're saying is: "hey if you've got a messy
> > problem, it's great, nay perfect if you have a neat solution." Contradiction
> > in terms and reality. If it's messy, there isn't a neat solution.
> >
>
> However, there are MANY interesting points in between these two extremes.
> Typically, given the best "experts" (quotes used to highlight the fact that
> claiming expertise in something that is poorly understood, as doctors
> routinely do, is a bit of an oxymoron) available, you can identify several
> cause-and-effect chain links that are contributing to your problem, even
> though there remains most of the problem that you still do NOT understand.
> If you can identify a cure to only a single link between the root cause and
> the self-sustaining loop at the end, and identify any way at all to
> temporarily interrupt (doctors call this a "treatment") any link in the
> self-sustaining loop at the end, you can permanently cure the difficult
> problem, even though most of it remains a complete mystery. That this simple
> fact has remained hidden has misled AI and AGI, and will continue to mislead
> it until everyone involved understands this.
>
> >
> > Take most cancers. If you have one, what do you do? Well, there are a
> > lot of people out there offering you a lot of v. conflicting treatments and
> > proposals, and there is no neat, definitive answer to your problem.
> >
>
> Only because various misdirected interests are misleading the process. To
> illustrate, about a year ago I delivered a presentation to a roomful of
> cancer survivors (and people who were trying to survive it). I explained the
> complex part that body temperature apparently played, and exactly why it was
> almost unknown for a cancer patient to have a "normal" 98.6F=37C body
> temperature. I then asked if anyone in the room had a 98.6F body
> temperature, and NO ONE DID. THERE is a pretty definitive answer, but
> getting it out to the "experts" is probably impossible because they have
> other dysfunctional models to use. I have an article about this if you would
> like it. There is a safe and simple one-day cure for erroneous body
> temperature, yet no cancer sufferer that I know of has ever done it!!!
>
> > That's the kind of problem a human general intelligence has to deal with,
> > and was designed to deal with.
> >
>
> Above is a simple case where even when presented with the answer, there is
> no way of propagating it to the rest of the human race. I have a friend who
> is the Director of Research for the Medical Center of a major University,
> whose own personal surgical experiences supported everything I said so he
> openly accepted it. I spent 4 hours discussing various approaches to getting
> this message out. His take: there was no path that he could identify to
> accomplish this. The detailed explanations of the paths that we considered
> would fill a small book. Places like Wikipedia have a filtering process that
> is guaranteed to block any such postings.
>
> In short, I wouldn't look at "human general intelligence" too closely, as
> except for some rare cases, it too is an oxymoron. It would be MUCH easier
> to build a really intelligent system than to build a "humanly intelligent"
> system.
>
> >  Not the neat ones.
> >
> > (And how do I communicate that to you - get you & other AGI-ers to focus
> > on that? Because what you'll do is say: "Oh sure it's messy, but there's
> > gotta be a neat solution." You won't be able to stay with the messiness.
> > It's too uncomfortable. My "communication problem" is in itself a messy one
> > - like most problems of communicating to other people, e.g. how do you sell
> > your AGI system or get funding?)
> >
>
> YES, there IS a topic of mutual interest. There used to be people called
> "venture capitalists", but people doing this function no longer exist. There
> are now people calling themselves "venture capitalists" whom people used to
> call "investment bankers". There are "angel investors" who do the initial
> seed investing, but who lack the resources to follow up with major
> investments once the seed investment has succeeded. In short, I have sort of
> given up on finding anyone who has the CAPACITY to invest in any sort of
> AI/AGI, as all investors have money raised on a prospectus which, upon
> careful reading, guarantees that they will NOT invest in AI/AGI. Some of the
> common exclusionary reasons include:
> 1.  Where are your paying customers?
> 2.  What prior University research is this built upon?
> 3.  Where is your intellectual property protection?
> 4.  Where am I going to find other investors with whom to share the risk?
>
> Steve Richfield
>
>  ------------------------------
>   *agi* | Archives <http://www.listbox.com/member/archive/303/=now>
> <http://www.listbox.com/member/archive/rss/303/> | 
> Modify<http://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com/>
>
