On Dec 19, 6:33 am, einseele <einse...@gmail.com> wrote:
> Hi
>
> Yes, the story is good, and it falls into the common situation where
> there is an important obstacle between the "I" part of any equation
> and the object.
>
> In other words, a classic subject-object relationship. In trying to
> grasp that object, we find the thing that stops us by definition: we
> ourselves.
>
> From a certain point of view, this "we ourselves" is language,
> understood as the instance that uncovers the world while blinding
> those who need it, as the only ridiculous tool available.
>
> In another post you mentioned how a line of code in the wrong
> position stopped your program, and how difficult it was to see a
> piece of your own text, as if you wanted, on purpose, not to see
> what you yourself knew was there.
>
> I'm a linguist, and I'm interested in two-dimensional objects: objects
> which by definition need two parameters, and two only.
>
> Time looks like one of those, if you consider that you always need
> two reference points.
> Any reference to time needs duration, which is made of two
> references. Regardless of physics, music, pseudoscience, religion,
> etc., and even regardless of whether it exists at all, time cannot be
> referred to by one axis or by many, but only by two.
>
> The same happens with temperature and other concepts, and, importantly
> to me, with text (in which I include any conventional list of elements
> that only need to be unique and to have a unique position in that
> list).
>
> I would appreciate your point of view as a software engineer: you
> deal with sequences, and when you write you need to follow the order
> demanded by the application you are trying to develop. So, as a
> writer, you are used to a two-dimensional space.
>
> Regards,
>
> Carlos
>
> On Dec 19, 2:25 am, LCC <claylon...@comcast.net> wrote:
> > No responses? How about a joke...


Hmm, I suspect that English is not your native tongue, so I will try
not to use any ambiguous words. Although a time DURATION requires two
points in time to exist, a time reference can be a single point, as in
"at that point in time, I ..." The statement would be the same
regardless of the time at which it was made, with no second reference
point necessary. As you say, time durations seem to require only two
reference points. However, at relativistic speeds, particles which
decay in low-speed environments have been VERIFIED to decay with new
half-lives dependent upon the relativistic speed of their travel. So
no, the idea that a time duration is defined by its two points of
reference is not sufficient: you must also know AT LEAST the magnitude
of the velocity of the object under examination with respect to the
observer.
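
For reference, the formula behind that verified effect (standard
special relativity, which the original exchange did not spell out) is
the time dilation factor:

    t_observed = t_rest / sqrt(1 - v^2/c^2)

where t_rest is the particle's half-life at rest, v is its speed
relative to the observer, and c is the speed of light. The observed
duration grows without bound as v approaches c, which is why the
duration cannot be specified from two reference points alone, without
at least the magnitude of v.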

In software there is enormous freedom to innovate. The simplest
operations which a machine can perform take the form of an operation
code (opcode), which is the "I" observing, plus a source for the
operation to use in altering the machine state, or alternatively a
destination, as in a clear-register operation. Some operations have
implicit sources and destinations buried within the opcode rather than
being given explicitly as opcode "arguments". In such cases the
implicit information takes the form of the machine referring to
itself, as in "my" arithmetic results flag register. I have not
examined a machine-specific instruction set in the past 20 years, but
in the early 90s a trend was appearing toward what was called at that
time RISC (Reduced Instruction Set Computers). In those architectures,
most of the "my" implicit opcode references were being replaced by
explicit opcode arguments, which could take the form of a constant, a
pointer to the information, a pointer plus offset, or a pointer plus
index and object size plus offset. More recently, machine-independent
languages have been developed under such names as "J" code, which
replace "my" with "some abstract processor's" generally present
functional element, getting away from machine-dependent opcodes
entirely.
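
To make those argument forms concrete, here is a minimal C sketch (an
illustration of my own, with made-up variable names, not any
particular machine's instruction set) showing the four explicit forms
as a compiler would see them:

    /* Sketch (mine, not from any real ISA): the four explicit
       argument forms named above. */
    #include <stdio.h>

    int main(void) {
        int table[8] = {10, 20, 30, 40, 50, 60, 70, 80};
        int *ptr = table;
        int index = 2;

        int a = 42;             /* constant (immediate value)       */
        int b = *ptr;           /* pointer to the information       */
        int c = *(ptr + 3);     /* pointer plus offset              */
        int d = ptr[index + 3]; /* pointer plus index, scaled by    */
                                /* object size, plus offset         */
        printf("%d %d %d %d\n", a, b, c, d);
        return 0;
    }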

In general you are correct that machine operations need to be
performed in a specific order to attain proper function. However, it
has always been recognized that unless a dependency or coupling
relationship exists between the output of one operation and the inputs
of another operation occurring later in the code stream, the later,
non-dependent operation can be shifted to an earlier point in the code
stream. An enormous amount of effort was put into algorithms to detect
dependencies and, wherever possible, to shift operations to the
earliest point in time at which they could be correctly executed,
particularly in the case of operations inside execution loops. I
stopped writing software in 1993, so I cannot report on the current
state of the art, particularly on that "object oriented" philosophy
which wants to enforce rigid controls over information AVAILABILITY
within functions. If it had not reared its ugly head, then there would
probably by now be global application-level optimizers which
pre-calculate, before a function call, as much as possible of the
information needed within the called function, so that stack-relative
operations can be reduced and replaced with clean, simple pointers to
memory.
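
To illustrate the kind of shift I mean, a small C sketch (a hand-made
example, not the output of any real optimizer): an operation with no
dependency on anything the loop produces is moved to the earliest
point at which it can be correctly executed.

    /* Before: 'scale * bias' is recomputed on every iteration,
       although it couples to nothing the loop body produces. */
    void before(float *out, const float *in, int n,
                float scale, float bias) {
        for (int i = 0; i < n; i++)
            out[i] = in[i] * (scale * bias);
    }

    /* After: the non-dependent operation is shifted earlier,
       out of the execution loop. */
    void after(float *out, const float *in, int n,
               float scale, float bias) {
        float k = scale * bias;     /* hoisted invariant */
        for (int i = 0; i < n; i++)
            out[i] = in[i] * k;
    }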

The advent of multi-tiered memory structures, such as cache on
processor chips, introduced a new wrinkle to the optimization game,
namely the desire to reuse the same memory locations as often as
possible within functional code blocks, and it relieved processor
users of the necessity of generating what is known as "expanded code".
Expanded code is an attempt to reduce execution time by replacing an
execution loop with the same statements repeated over and over,
varying only constants and pointers, while avoiding those
time-consuming offsets and indexes. Expanded code was in danger of
making recursion extinct, which was probably why so many people
working on applications that lacked time criticality despised it. More
recently, we now have multi-threaded architectures, which take
advantage of multiple processors to concurrently perform machine
operations which are known to have no coupling relationships.
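
A minimal C sketch of what I mean by expanded code (this is what is
nowadays usually called loop unrolling; the example is my own):

    /* Rolled form: loop control and index arithmetic on every pass. */
    void rolled(int *dst, const int *src) {
        for (int i = 0; i < 4; i++)
            dst[i] = src[i] + 1;
    }

    /* Expanded form: the same statement repeated, varying only the
       constant offsets; no loop branch, no index computation. */
    void expanded(int *dst, const int *src) {
        dst[0] = src[0] + 1;
        dst[1] = src[1] + 1;
        dst[2] = src[2] + 1;
        dst[3] = src[3] + 1;
    }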

In summary, I was not aware that any software engineer ever had the
luxury of dealing with just a two-dimensional space of neatly ordered
sequences of instructions. A particular processor on a particular
machine does not even have that luxury, due to the pipelining of
instructions. Not only is the current instruction being performed, but
the next instruction is already fetching its arguments so that it will
be immediately ready for execution when its turn comes. Further along
the pipeline, looking ahead to future instructions, you have
instructions being fetched from cache or, in the worst case, from
memory, requiring the cache controller to negotiate with itself over
what gets overwritten as least likely to be called upon soon, and with
the memory controllers over how big a chunk of memory the cache needs
delivered to it, and from what location. Pipeline control is a topic
with which I am somewhat familiar, having worked on the development of
instruction test sequences for the SJS version of the RS6000 chip set
in 1989-1990. That task required reading the standard cell code being
written to generate the chip architecture, locating vulnerabilities
and oversights in the design, and coming up with opcode sequences to
demonstrate the defects so that the chip designers could develop a
defect-free chip design.
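
To make the pipelining point concrete, here is a small C sketch (my
own illustration, assuming a pipelined machine that can overlap
operations with no coupling relationship): a single dependency chain
serializes the pipeline, while two independent accumulators let it
keep more than one operation in flight.

    /* Each addition in sum1 depends on the previous one, so the
       pipeline must wait for each result before starting the next. */
    double sum1(const double *a, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* The two accumulators in sum2 have no coupling relationship,
       so their additions can overlap in the pipeline. */
    double sum2(const double *a, int n) {
        double s0 = 0.0, s1 = 0.0;
        int i;
        for (i = 0; i + 1 < n; i += 2) {
            s0 += a[i];
            s1 += a[i + 1];
        }
        if (i < n) s0 += a[i];   /* leftover element when n is odd */
        return s0 + s1;
    }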


Lonnie Courtney Clay
