[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-17 Thread Paul Koning via cctalk



> On Aug 17, 2024, at 8:32 AM, Peter Corlett via cctalk  
> wrote:
> 
> ...
>> The problem is the native register width keeps changing with every CPU. C
>> was a quick and dirty language for the PDP-11, with 16-bit ints. They never
>> planned that UNIX or C or hardware would change like it did, so one gets a
>> patched version of C. That reminds me I use gets and have to get an older
>> version of C.
> 
> They'd have had to be fairly blinkered to not notice the S/360 series which
> had been around for years before the PDP-11 came out. It doesn't take a
> particularly large crystal ball to realise that computers got smaller and
> cheaper over time and features from larger machines such as wider registers
> would filter down into minicomputers and microcomputers.

Not to mention that K&R had experience with the PDP-7, which is an 18-bit 
word-oriented machine.  And a whole lot of other machines of that era had word 
lengths different from 16, and apart from the S/360 and the Nova most weren't 
powers of two.

> But C also seems to ignore a lot of the stuff we already knew in the 1960s
> about how to design languages to avoid programmers making various common
> mistakes, so those were quite large blinkers. They've never been taken off
> either: when Rob and Ken went to work for Google they came up with a "new"
> C-like language which makes many of the same mistakes, plus some new ones,
> and it is also more bloated and can't even be used to write bare-metal stuff
> which is one of the few things one might reasonably need C for in the first
> place.

C, especially its early incarnations, could be called a semi-assembly language. 
 For example, you can tell that struct declarations originally amounted simply 
to symbolic offsets (you could use a field name declared for struct a in 
operations on types of struct b).  And yes, ALGOL showed the way with a far 
cleaner design, and ALGOL extensions existed to do all sorts of hard work with 
it.  Consider the Burroughs 5500 series and their software, all written in 
ALGOL or slightly tweaked extensions of same, including the OS.
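
To illustrate the struct point, here is a minimal sketch (struct and field 
names invented) using modern offsetof; today's compilers enforce the types 
that early C barely checked, but the offsets are still there:

#include <stddef.h>
#include <stdio.h>

struct a { int x; int y; };        /* y is just "the offset of y within a" */
struct b { char tag; double d; };

int main(void)
{
    /* Early C treated "p->y" as little more than "p plus the offset
       declared for the name y", so a field name declared for struct a
       could be applied to a pointer to struct b. */
    printf("y is at offset %zu in struct a\n", offsetof(struct a, y));
    printf("d is at offset %zu in struct b\n", offsetof(struct b, d));
    return 0;
}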

> ...
> Complex subroutine nesting can be done just fine on a CPU "optimised for"
> running C. For example, you can synthesise an anonymous structure to hold
> pointers to or copies of the outer variables used in the inner function, and
> have the inner function take that as its first parameter. This is perfectly
> doable in C itself, but nobody would bother because it's a lot of
> error-prone boilerplate. But if the compiler does it automatically, it
> suddenly opens up a lot more design options which result in cleaner code.

Absolutely.  The first ALGOL compiler was written for the Electrologica X1, a 
machine with two accumulators plus one index register, a one address 
instruction set, and no stack or complex addressing modes.  It worked just 
fine, it simply meant that you had to do some things in software that other 
machines might implement in hardware (or, more likely, in microcode).  Or 
consider the CDC 6000 mainframes, RISC machines with no stack, no addressing 
modes, and not just an ALGOL 60 but even an ALGOL 68 compiler.

On the other hand there was the successor of the X1, the X8, which added a stack 
both for data and subroutine calling, as well as "display" addressing modes to 
deal directly with variable references in nested blocks up to 63 deep.  Yes, 
that makes the generated code from the ALGOL compiler shorter, but it doesn't 
necessarily make things any faster, and I don't know that such features were 
ever seen again.

paul



[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-17 Thread Peter Corlett via cctalk
On Fri, Aug 16, 2024 at 04:00:40PM -0600, ben via cctalk wrote:
> On 2024-08-16 8:56 a.m., Peter Corlett via cctalk wrote:
[...]
>> This makes them a perfect match for a brain-dead language. But what does
>> it even *mean* to "automaticaly promote smaller data types to larger
>> ones"? That's a rhetorical question, because your answer will probably
>> disagree with what the C standard actually says :)

> I have yet to read a standard; I can never find, or afford, the
> documentation.

Google "N3096.PDF". You're welcome.

[...]
> Now I need to get a cross assembler and c compiler for the 68K.

The GNU tools work fine for me for cross-compiling to bare-metal m68k. I use
it on Unix systems, but you can probably get it working on Windows if you
must. I even maintain a set of patches for GCC to do register parameters,
although unless you specifically need that functionality, upstream GCC is
just fine.

[...]
>> Now, what kind of badly-written code and/or braindead programming
>> language would go out of its way to be inefficient and use 32-bit
>> arithmetic instead of the native register width?

> The problem is the native register width keeps changing with every CPU. C
> was a quick and dirty language for the PDP-11, with 16-bit ints. They never
> planned that UNIX or C or hardware would change like it did, so one gets a
> patched version of C. That reminds me I use gets and have to get an older
> version of C.

They'd have had to be fairly blinkered to not notice the S/360 series which
had been around for years before the PDP-11 came out. It doesn't take a
particularly large crystal ball to realise that computers got smaller and
cheaper over time and features from larger machines such as wider registers
would filter down into minicomputers and microcomputers.

But C also seems to ignore a lot of the stuff we already knew in the 1960s
about how to design languages to avoid programmers making various common
mistakes, so those were quite large blinkers. They've never been taken off
either: when Rob and Ken went to work for Google they came up with a "new"
C-like language which makes many of the same mistakes, plus some new ones,
and it is also more bloated and can't even be used to write bare-metal stuff
which is one of the few things one might reasonably need C for in the first
place.

[...]
>> It's not just modern hardware which is a poor fit for C: classic hardware
>> is too. Because of a lot of architectural assumptions in the C model, it
>> is hard to generate efficient code for the 6502 or Z80, for example.
> or any PDP not 10 or 11.

> I heard that AT&T had a C CPU but it turned out to be a flop. C's main
> advantage was a stack for local variables and return addresses and none of
> the complex subroutine nesting of ALGOL or PASCAL.

That'd be the AT&T Hobbit, "optimized for running code compiled from the C
programming language". It's basically an early RISC design which spent too
much time in development and the released product was buggy, slow,
expensive, and had some unconventional design decisions which would have
scared off potential users. Had it come out earlier it may have had a
chance, but then ARM came along and that was that.

Complex subroutine nesting can be done just fine on a CPU "optimised for"
running C. For example, you can synthesise an anonymous structure to hold
pointers to or copies of the outer variables used in the inner function, and
have the inner function take that as its first parameter. This is perfectly
doable in C itself, but nobody would bother because it's a lot of
error-prone boilerplate. But if the compiler does it automatically, it
suddenly opens up a lot more design options which result in cleaner code.
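
For the curious, a minimal hand-written sketch of that transformation in C 
(all names invented): the structure captures the outer variable, and the 
"inner function" receives it as its first parameter.

#include <stdio.h>

struct env { int base; };                 /* captured outer variables */

static int add_base(struct env *e, int x) /* the "inner function" */
{
    return e->base + x;
}

int main(void)
{
    struct env e = { 100 };               /* capture at the call site */
    printf("%d\n", add_base(&e, 23));     /* prints 123 */
    return 0;
}

A compiler doing this automatically would generate the struct, the capture, 
and the extra parameter behind the scenes.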

>> But please, feel free to tell me how C is just fine and it's the CPUs
>> which are at fault, even those which are heavily-optimised to run typical
>> C code.

> A computer system, CPU, memory, I/O, video & mice, all have to share the
> same pie. If you want one thing to go faster, something else must go
> slower. C's model is random access main memory for simple variables and
> array data. A register was for a simple pointer or data. Caches may seem to
> speed things up, but they can't handle random data
> (REAL(I+3,J+3)+REAL(I-3,J-3)+REAL(I+3,J-3)+REAL(I-3,J+3)/4.0)+REAL(I,J)

A "computer system" today is a network of multiple CPUs running
autonomously, and are merely co-ordinated by the main CPU. Adding a faster
disk to my system does not per se make the main CPU slower, although of
course the improved performance means that the CPU load may go up purely
because it is not being held back on I/O and can achieve higher throughput.
Graphics cards are a very extreme form of moving things off the main CPU.
They are special-purpose parallel CPUs which can number-crunch certain
problems (such as making Tamriel beautiful) many orders of magnitude faster
than the main CPU.

And did I write "main CPU"? A typical PC today has four "main" CPUs on one
die. There's even a network between those

[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-16 Thread ben via cctalk

On 2024-08-16 8:56 a.m., Peter Corlett via cctalk wrote:

On Thu, Aug 15, 2024 at 01:41:20PM -0600, ben via cctalk wrote:
[...]

I don't know about the VAX, but my gripe is the x86 and the 68000 don't
automatically promote smaller data types to larger ones. What little
programming I have done was in C, and I never cared about that detail. Now I
can see why it is hard to generate good code in C when all the CPUs are
brain-dead in that aspect.


This makes them a perfect match for a brain-dead language. But what does it
even *mean* to "automatically promote smaller data types to larger ones"?
That's a rhetorical question, because your answer will probably disagree
with what the C standard actually says :)


I have yet to read a standard; I can never find, or afford, the 
documentation.
I used Microsoft C for DOS, and had that as the standard model, as well as 
the 8088 CPU. C for the most part was 16-bit code, with a long here and there.


I use Pelles C for Windows, version 8, since Windows dropped 32-bit 
programs.


As a hobby project, I am building a CPU of some size, 24 bits
or less. I tried an FPGA card for the last decade, but the internal routing 
kept screwing up. Now that we can get cheap PCBs from China, I have a
2901 bit-slice machine almost working. I can read/write from the front 
panel, but programs don't work. Software emulation in C under Windows
works only as prototype code. I picked up a cheap 68K board; since it has
no MMU and just static RAM, I can use that to emulate my hardware design.
Now I need to get a cross assembler and C compiler for the 68K.
When I get the C emulator code working, I can later write a faster 
version in assembler. When I started this project, any software I
could need would be written in the Small C subset of C, or by revising a 
16-bit C compiler's source code.




Now, what kind of badly-written code and/or braindead programming language
would go out of its way to be inefficient and use 32-bit arithmetic instead
of the native register width?


The problem is the native register width keeps changing with every CPU.
C was a quick and dirty language for the PDP-11, with 16-bit ints. They 
never planned that UNIX or C or hardware would change like it did, so one
gets a patched version of C. That reminds me I use gets and have to get
an older version of C.







I'm sure you can "C" where I'm going here. `int` is extremely special to it.
C really wants to do everything with 32-bit values. Smaller values are
widened, larger values are very grudgingly tolerated. C programmers
habitually use `int` as array indices rather than `size_t`, particularly in
`for` loops. Apparently everything is *still* a VAX. So on 64-bit platforms,
the index needs to be widened before adding to the pointer, and there's so
much terrible C code out there -- as if there is any other kind -- that the
CPUs need hardware mitigations to defend against it.


I am still using DOS C compilers, for Small C. Int just has one size: 16.
No longs, shorts or other stuff. DOSBOX-X is nice in that I can run DOS
programs or Windows command-line programs.





It's not just modern hardware which is a poor fit for C: classic hardware is
too. Because of a lot of architectural assumptions in the C model, it is
hard to generate efficient code for the 6502 or Z80, for example.

or any PDP not 10 or 11.

I heard that AT&T had a C CPU but it turned out to be a flop.
C's main advantage was a stack for local variables and return addresses
and none of the complex subroutine nesting of ALGOL or PASCAL.


But please, feel free to tell me how C is just fine and it's the CPUs which
are at fault, even those which are heavily-optimised to run typical C code.

A computer system, CPU, memory, I/O, video & mice, all have to share the 
same pie. If you want one thing to go faster, something else must go 
slower. C's model is random access main memory for simple variables
and array data. A register was for a simple pointer or data. Caches may
seem to speed things up, but they can't handle random data 
(REAL(I+3,J+3)+REAL(I-3,J-3)+REAL(I+3,J-3)+REAL(I-3,J+3)/4.0)+REAL(I,J)


I will stick to a REAL PDP-8. I know a TAD takes 1.5 us, not 1.7 us 70% 
of the time and 1.4 us the other 30%.

Real-time OSes and CPUs are out there; how else would my toaster know
when to burn my toast?

Only by knowing the overall structure of a program and hardware
can one optimize it.
Ben.









[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-16 Thread Peter Corlett via cctalk
On Thu, Aug 15, 2024 at 01:41:20PM -0600, ben via cctalk wrote:
[...]
> I don't know about the VAX, but my gripe is the x86 and the 68000 don't
> automatically promote smaller data types to larger ones. What little
> programming I have done was in C, and I never cared about that detail. Now I
> can see why it is hard to generate good code in C when all the CPUs are
> brain-dead in that aspect.

This makes them a perfect match for a brain-dead language. But what does it
even *mean* to "automatically promote smaller data types to larger ones"?
That's a rhetorical question, because your answer will probably disagree
with what the C standard actually says :)

Widening of integers is normally done through left-extending the sign bit.
For unsigned values, the sign bit is implicitly zero although we usually say
"sign extend" or "zero extend" to be clearer about whether we're dealing
with signed or unsigned values. C will typically do one or the other of
these, but not always the one you expected.
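
As a small C sketch of the two widenings (using the stdint.h types):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int8_t  s = -128;   /* bit pattern 0x80 */
    uint8_t u = 128;    /* same bit pattern 0x80 */

    int32_t ws = s;     /* sign-extended: 0xFFFFFF80, i.e. -128 */
    int32_t wu = u;     /* zero-extended: 0x00000080, i.e.  128 */

    printf("%d %d\n", ws, wu);   /* prints: -128 128 */
    return 0;
}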

For sign-extension, m68k has the EXT instruction, and x86 has CBW/CWDE. For
zero-extension, pre-clear the register before loading a smaller value into a
subregister. From the 386 onwards, there are MOVZX/MOVSX which do
load-and-extend in a single operation. If the result of a calculation is
then truncated when written back to memory, then the upper bits of the
register may have never had an effect on the result and did not need to be
set to a known value, so this palaver is quite unnecessary. The thing was
only extended in the first place because C's promotion rules required it to
be, and the compiler backend has had to prove otherwise to eliminate it
again.
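
A small C sketch of that dead widening (hypothetical function):

#include <stdint.h>

uint8_t add_bytes(uint8_t a, uint8_t b)
{
    /* C promotes a and b to int before the add, but the result is
       truncated straight back to 8 bits, so the extensions never
       affect the stored byte and a compiler may prove them away. */
    return (uint8_t)(a + b);
}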

As it happens, it's not unnecessary on modern out-of-order CPUs, so there's a
lot more use of MOVZX etc. in code compiled for x86-64. Loading into a
subregister without clearing the full register first introduces a false
dependency on the old value of the upper bits, resulting in a pipeline stall
and performance hit. However, this is "just" for performance rather than
correctness.

Said performance hit is likely the main reason why x86-64 automatically
zero-extends when loading a 32-bit value into a register, and so MOVZX is no
longer required for that operation. So in fact x86 *does* "automatically
promote smaller data types to larger ones". Not doing so would cause an
unacceptable performance hit when running 32-bit code (which was basically
all of it back in 2003 when the first Opteron was released) or 64-bit code
making heavy use of 32-bit data.

Now, what kind of badly-written code and/or braindead programming language
would go out of its way to be inefficient and use 32-bit arithmetic instead
of the native register width?

I'm sure you can "C" where I'm going here. `int` is extremely special to it.
C really wants to do everything with 32-bit values. Smaller values are
widened, larger values are very grudgingly tolerated. C programmers
habitually use `int` as array indices rather than `size_t`, particularly in
`for` loops. Apparently everything is *still* a VAX. So on 64-bit platforms,
the index needs to be widened before adding to the pointer, and there's so
much terrible C code out there -- as if there is any other kind -- that the
CPUs need hardware mitigations to defend against it.
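
A sketch of the habit in question (hypothetical functions; on an LP64
platform the int index must be widened before every pointer addition,
while the size_t index already has pointer width):

#include <stddef.h>

long sum_int(const long *a, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++)      /* 32-bit index, widened per use */
        s += a[i];
    return s;
}

long sum_size(const long *a, size_t n)
{
    long s = 0;
    for (size_t j = 0; j < n; j++)   /* index already pointer-width */
        s += a[j];
    return s;
}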

It's not just modern hardware which is a poor fit for C: classic hardware is
too. Because of a lot of architectural assumptions in the C model, it is
hard to generate efficient code for the 6502 or Z80, for example.

But please, feel free to tell me how C is just fine and it's the CPUs which
are at fault, even those which are heavily-optimised to run typical C code.



[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-16 Thread Mike Katz via cctalk

Ben,

The purpose of the stdint.h file is to allow the programmer to specify 
the size of the variables.


On some systems an int is 32 bits, on others 64 bits (or even 16 bits on 
older systems or 16-bit micros).  The size of an int is not specifically 
defined in the C standard.


Especially when doing embedded coding the size of a variable (or the 
size of the data pointed to by a pointer) is very important.


The stdint.h file is created by the authors of the compiler so that the 
programmer can specify the size of the variable that he wants. /int A/ 
may or may not be 32 bits but /int32_t A/ will always be 32 bits.


This is mostly not a problem on modern 32-bit microprocessors where an 
int is 32 bits and a short is 16 bits. However, on such a system is a 
long 32 bits or 64 bits?  By having the typedefs in the stdint.h file, 
the programmer can specify the exact size of the variable.
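
For example (a short sketch):

#include <stdint.h>

int32_t  counter;   /* exactly 32 bits, signed, on any conforming compiler */
uint16_t reg16;     /* exactly 16 bits, unsigned */
uint8_t  flags;     /* exactly 8 bits, unsigned */
int      n;         /* at least 16 bits; beyond that, the platform decides */

int main(void) { return 0; }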




On 8/16/2024 1:38 AM, ben wrote:

On 2024-08-15 7:46 p.m., Mike Katz wrote:
That is the reason for the stdint.h file, where you specify the width 
of the variable in bits.


Looks like a useless file to me.
I never liked any of the standards made to C after K&R. Seems more driven
by the latest crappy hardware Intel makes than a language designed by 
people who use the product.
C++ or Java never made sense because every class is too different 
from any other object. Don't say windows are a good example of objects;
they are foobar-ed from the start as they deal in pixels, rather than
a fractional screen display. Text windows worked under DOS, something easy
to program. I don't want to write a whole operating system to use modern
software like Windows.

Grumpy Ben, trying to find an embedded C compiler for the 68000.
PS: Perhaps if they had a good textbook for C and the different
standards I might view modern C with less distrust.





[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread ben via cctalk

On 2024-08-15 7:46 p.m., Mike Katz wrote:
That is the reason for the stdint.h file, where you specify the width 
of the variable in bits.


Looks like a useless file to me.
I never liked any of the standards made to C after K&R. Seems more driven
by the latest crappy hardware Intel makes than a language designed by 
people who use the product.
C++ or Java never made sense because every class is too different 
from any other object. Don't say windows are a good example of objects;
they are foobar-ed from the start as they deal in pixels, rather than
a fractional screen display. Text windows worked under DOS, something easy
to program. I don't want to write a whole operating system to use modern
software like Windows.

Grumpy Ben, trying to find an embedded C compiler for the 68000.
PS: Perhaps if they had a good textbook for C and the different
standards I might view modern C with less distrust.





[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Mike Katz via cctalk

Fred,

You are correct in all of your examples.  That is why many standards 
specify things like /multiple function calls should not be used in a 
single expression/.  The compiler will optimize out any unnecessary 
memory reads and writes, so rewriting:


X = foo() + bar();

as

X = foo();
X += bar();

will force the correct order of execution of the functions while not 
taking any more CPU cycles.


  Mike

On 8/15/2024 7:32 PM, Fred Cisin via cctalk wrote:

On Thu, 15 Aug 2024, Mike Katz wrote:
C has specific specifications for what is promoted when and how. They 
are not ambiguous, just not known by many.
I worked for a C compiler company so I may be a bit more familiar 
with the actual C specs and how the compiler works.
However, I totally agree with you.  I heavily typecast and 
parenthesize my code to avoid any possible ambiguity.  Sometimes for 
the compiler and sometimes for someone else reading my code.


I will readily concede that ANSI C has fewer problems with ambiguous 
code than the K&R C that I learned.


But, for example, in:
X = foo() + bar();

has it been defined which order the functions foo() and bar() are 
evaluated in?  Consider the possibility that either or both alter a 
variable that the other function also uses.
(Stupidly simple example: one function increments a variable, and the 
other one doubles it.)


As another example of code that I would avoid,
int x=1,y=1;
x = x++ + x++;
y = ++y + ++y;
Do these give 2, 3, 4, or 5?
It is heavily dependent on exactly when the increments get done.

But thorough, careful typecasting, use of intermediate variables, etc. 
can eliminate all such problems.

'course "optimizing compilers" can (but shouldn't) alter your code.

If you don't explicitly specify exactly what you want, "C gives you 
enough rope to shoot yourself in the foot" (as Holub titled one of his 
books)



But, I've always loved how easily C will get out of the way when you 
want to get closer to the hardware.


--
Grumpy Ol' Fred ci...@xenosoft.com



[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Mike Katz via cctalk
I have written several coding standards and comments are always primary 
in importance.


The MISRA C standard does a pretty good job of forcing the programmer to 
aim for something other than their foot with their rope 🙂


I am amazed at how many fresh outs I have met who really can't program 
their way out of a paper bag.


On 8/15/2024 7:46 PM, Fred Cisin via cctalk wrote:
When I was teaching C, it was sometimes quite difficult to help 
students who had firm assumptions about things that you can't assume.  
Such as the sequence of operations in the multiple iterations examples 
that we both used.  I tried desperately to get them to do extensive 
comments, and use typecasts even when they could have been left out.




[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Fred Cisin via cctalk

On Thu, 15 Aug 2024, Mike Katz wrote:
I am amazed at how many fresh outs I have met who really can't program their 
way out of a paper bag.


Advanced programming techniques don't help until they can actually 
successfully think about the problem.



I had a guy working for me VERY briefly, with a UC Berkeley degree, but he 
couldn't figure out how to do 3-up mailing labels on a daisy wheel 
printer!
(sequence not mattering because they were manually peeled off and used for a 
mailing.)
He couldn't figure out any way to do it other than needing a way to roll 
the paper back to get back to the top for the next column!  Not on THAT 
printer!
(simple way: read three records into memory, print them side by side, and 
then advance the paper)

He had a few other similar shortcomings.
I let him stay around until he peeled and stuck all of the labels, and to 
give him time to find another job.



I gave a final exam question on how to sort/sequence the records of a 
large file that was too big to fit into memory.
Several students who had gotten their start at the university insisted 
that the only way it could be done was to add more memory.
(simple way: read a memory-sized block from the file and sort it; do that 
again until you have a whole bunch of sorted shorter files, then do a merge 
sort of those)


Another: "A client has a large file that is in order.  But each 
day/week/month, additional records are appended to it.  What's the best 
sort algorithm to get the file back into order?"
(simple 1: put the new records into a separate file, sort that; then do a 
merge sort between that and the main file.
simple 2: (if it isn't too large to manage) a bubble sort, with each pass 
starting at the END of the file where the new records are, and working 
towards the beginning, or a "shaker sort" that alternates direction.  The 
maximum number of passes is the number of records that were out of order.
(a "shaker sort" is the best sort algorithm for taking advantage of any 
existing order, such as a few random records being in the wrong place)
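
A minimal C sketch of such a shaker sort (not from the original exam, and 
sorting ints rather than records for brevity):

#include <stddef.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

void shaker_sort(int *v, size_t n)
{
    size_t lo = 0, hi = n ? n - 1 : 0;
    int moved = 1;
    while (moved && lo < hi) {
        moved = 0;
        for (size_t i = lo; i < hi; i++)      /* forward pass */
            if (v[i] > v[i + 1]) { swap(&v[i], &v[i + 1]); moved = 1; }
        hi--;
        for (size_t i = hi; i > lo; i--)      /* backward pass */
            if (v[i - 1] > v[i]) { swap(&v[i - 1], &v[i]); moved = 1; }
        lo++;
    }
}

A small record appended at the end is carried all the way home in a single 
backward pass, so nearly-sorted input finishes in a few passes.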


--
Grumpy Ol' Fred ci...@xenosoft.com














[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Mike Katz via cctalk
That is the reason for the stdint.h file, where you specify the width 
of the variable in bits:


int8_t, int16_t, uint16_t, etc.

On 8/15/2024 8:39 PM, ben via cctalk wrote:

On 2024-08-15 6:46 p.m., Fred Cisin via cctalk wrote:
When I was teaching C, it was sometimes quite difficult to help 
students who had firm assumptions about things that you can't 
assume.  Such as the sequence of operations in the multiple 
iterations examples that we both used.  I tried desperately to get 
them to do extensive comments, and use typecasts even when they could 
have been left out.


I keep assuming C is still 16-bit K&R. Software tends to depend on the
fact that bytes are 8 bits, everything is 2, 4, or 8 bytes wide, and the 
newest hardware/software/underwear^H^H^H^H^H^H^H is the best.
PL/I tried to fit data types, to have variable width, which I think was 
a good idea. foobar Binary:unsigned:36 bits:mixed 
endian,volatile,module blah,dynamic array x y z. At least then you 
know what foobar does.

HUMBUG foo, not so clear.

Ben, still thinking 18 bit computers are just the right size for 
personal use, and some day I will have hardware.






[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Mike Katz via cctalk

Fred,

That is true. Order of evaluation is undefined between sequence points; 
that is why the following statement can produce different results on 
different compilers:


A = 1;
F = A++ * A++;

Without the use of parentheses there is no way for the user to know 
beforehand what the value of F will be.  The only guarantee is that all 
postfix operators will be evaluated prior to the start of the next C 
statement.


As a general rule, constant rvalue expressions are folded at compile 
time rather than computed at run time.  So the line:


ulDays  = ulSeconds / ( 60 * 60 * 24 );

would be converted by the compiler to:

ulDays = ulSeconds / 86400;

The calculation of ulSeconds / 86400 will be handled at run time.

However, if ulSeconds is defined as a const it is possible that a smart 
compiler will do the entire calculation and only the assignment will 
be done at runtime.


It is possible that the volatile keyword might cause the order of 
evaluation to be altered.


uint32_t * volatile ulpDMAAddress = (uint32_t *)0x4000;  // Hypothetical
    // register address. Note this is a volatile pointer and NOT a
    // pointer to volatile data.

uint32_t ulMyValue;

ulMyValue = *ulpDMAAddress++ + *ulpDMAAddress++;

My mind is getting numb just looking at that code.  Suffice it to say 
that using multiple prefix/postfix operations between two sequence 
points is heavily deprecated because the actual results are 
implementation defined and may even be different depending upon what 
other math surrounds it.


Another implementation-specific feature of C is the order of bits in bit 
fields.  They can be assigned from most significant to least significant 
or vice versa.  It is totally up to the compiler.
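
A small sketch of the hazard (the behavior differs by compiler, which is 
the point):

#include <stdio.h>

union {
    struct { unsigned low : 4, high : 4; } bits;
    unsigned char raw;
} u;

int main(void)
{
    u.bits.low  = 0xA;
    u.bits.high = 0x5;
    /* 0x5A if "low" was allocated in the least significant bits,
       0xA5 if it was allocated in the most significant bits. */
    printf("0x%02X\n", (unsigned)u.raw);
    return 0;
}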


As Allen Holub says, C and C++ give the programmer "Enough Rope To 
Shoot Yourself in the Foot" (in his book of the same name).





On 8/15/2024 6:45 PM, Fred Cisin via cctalk wrote:

It is not the hardware that is at fault.
If anybody else is to blame, it is the compiler.


On Thu, 15 Aug 2024, Paul Koning wrote:
More likely the language designers, assuming the compiler doesn't 
have a standards violation in its code.  In the case of C, the type 
promotion rules that were just explained are rather bizarre and 
surprising.  Other languages do it differently, with perhaps fewer 
surprises.  Some define it very carefully (ALGOL 68 comes to mind), 
some not so much.


C very explicitly leaves some things undefined, supposedly to work with 
more machines, and Kernighan & Ritchie say that it is the 
responsibility of the programmer to create unambiguous code.
For example, evaluation of expressions in the lvalue might be done 
before OR after evaluation of expressions in the rvalue.


Some other languages are much stricter on types, etc. and have fewer 
ambiguities.




[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread ben via cctalk

On 2024-08-15 6:46 p.m., Fred Cisin via cctalk wrote:
When I was teaching C, it was sometimes quite difficult to help students 
who had firm assumptions about things that you can't assume.  Such as 
the sequence of operations in the multiple iterations examples that we 
both used.  I tried desperately to get them to do extensive comments, 
and use typecasts even when they could have been left out.


I keep assuming C is still 16-bit K&R. Software tends to depend on the
fact that bytes are 8 bits, everything is 2, 4, or 8 bytes wide, and the 
newest hardware/software/underwear^H^H^H^H^H^H^H is the best.
PL/I tried to fit data types, to have variable width, which I think was a 
good idea. foobar Binary:unsigned:36 bits:mixed endian,volatile,module 
blah,dynamic array x y z. At least then you know what foobar does.

HUMBUG foo, not so clear.

Ben, still thinking 18 bit computers are just the right size for 
personal use, and some day I will have hardware.




[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Mike Katz via cctalk
Will, according to the ISO/IEC 9899:2018 C Standard section 6.3.1.8 you 
are incorrect.  Please see the emboldened line below.


6.3.1.8 Usual arithmetic conversions

1. Many operators that expect operands of arithmetic type cause 
conversions and yield result types in a similar way. The purpose is to 
determine a common real type for the operands and result. For the 
specified operands, each operand is converted, without change of type 
domain, to a type whose corresponding real type is the common real type. 
Unless explicitly stated otherwise, the common real type is also the 
corresponding real type of the result, whose type domain is the type 
domain of the operands if they are the same, and complex otherwise. This 
pattern is called the usual arithmetic conversions:

First, if the corresponding real type of either operand is long double, 
the other operand is converted, without change of type domain, to a type 
whose corresponding real type is long double.

Otherwise, if the corresponding real type of either operand is double, 
the other operand is converted, without change of type domain, to a type 
whose corresponding real type is double.

Otherwise, if the corresponding real type of either operand is float, 
the other operand is converted, without change of type domain, to a type 
whose corresponding real type is float.

Otherwise, the integer promotions are performed on both operands. Then 
the following rules are applied to the promoted operands:

*   If both operands have the same type, then no further conversion is 
    needed.*

    Otherwise, if both operands have signed integer types or both have 
    unsigned integer types, the operand with the type of lesser integer 
    conversion rank is converted to the type of the operand with greater 
    rank.

    Otherwise, if the operand that has unsigned integer type has rank 
    greater or equal to the rank of the type of the other operand, then 
    the operand with signed integer type is converted to the type of the 
    operand with unsigned integer type.

    Otherwise, if the type of the operand with signed integer type can 
    represent all of the values of the type of the operand with unsigned 
    integer type, then the operand with unsigned integer type is 
    converted to the type of the operand with signed integer type.

    Otherwise, both operands are converted to the unsigned integer type 
    corresponding to the type of the operand with signed integer type.

2. The values of floating operands and of the results of floating 
expressions may be represented in greater range and precision than that 
required by the type; the types are not changed thereby. The cast and 
assignment operators are still required to remove extra range and 
precision.  See 5.2.4.2.2 regarding evaluation formats.
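
One consequence of the signed/unsigned rule above, as a small C sketch:

#include <stdio.h>

int main(void)
{
    unsigned int u = 1;
    int s = -1;

    /* Same rank, one operand unsigned: the signed operand is converted
       to unsigned, so -1 becomes UINT_MAX and the comparison is false. */
    if (s < u)
        printf("-1 < 1u\n");
    else
        printf("-1 >= 1u after conversion to unsigned\n");
    return 0;
}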



On 8/15/2024 6:54 PM, Will Cooke via cctalk wrote:



On 08/15/2024 6:10 PM EDT Mike Katz via cctalk  wrote:


I'm pretty certain you are wrong about the byte case below.  The C standard 
says something about no math will be done smaller than a short.  I don't have 
it handy so can't quote exactly.
But what that means is before the two bytes are added, they are promoted to 
short / uint16_t and then added.



int foo( void )
{
uint32_t Long1 = 10;
uint32_t Long2 = 20;
uint16_t Short1 = 10;
uint16_t Short2 = 20;
uint8_t Byte1 = 10;
uint8_t Byte2 = 20;
//

...

// Everything to the right of the equals will not be promoted at
all, the math will be performed and the result will be promoted to a
uint16 when assigned.
//
Short1 = Byte1 + Byte2;


In this program segment:

uint8_t a = 255;
uint8_t b = 255;
uint16_t c = 0;
c = a + b;
printf("c: %d \n", c);

it will print 510 instead of the 254 that would result if it were added as 
bytes.

Will


Grownups never understand anything by themselves and it is tiresome for 
children to be always and forever explaining things to them,

Antoine de Saint-Exupery in The Little Prince


[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Fred Cisin via cctalk
When I was teaching C, it was sometimes quite difficult to help students 
who had firm assumptions about things that you can'r assume.  Such as the 
sequence of operations in the multiple iterations examples that we both 
used.  I tried desperately to get them to do extensive commnets, and use 
typecasts even when they could have been left out.


[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Fred Cisin via cctalk
I just sent a post that agrees so thoroughly with what you just wrote that 
we even both used the same reference to Holub!


--
Grumpy Ol' Fred ci...@xenosoft.com

On Thu, 15 Aug 2024, Mike Katz wrote:


Fred,

That is true. Order of evaluation is undefined between sequence points; that 
is why the following statement can produce different results on different 
compilers:


A = 1;
F = A++ * A++;

Without the use of parentheses there is no way for the user to know beforehand 
what the value of F will be.  The only guarantee is that all postfix 
operators will be evaluated prior to the start of the next C statement.


As a general rule, constant rvalue expressions are folded at compile time 
rather than computed at run time.  So the line:


ulDays  = ulSeconds / ( 60 * 60 * 24 );

would be converted by the compiler to:

ulDays = ulSeconds / 86400;

The calculation of ulSeconds / 86400 will be handled at run time.

However, if ulSeconds is defined as a const it is possible that a smart 
compiler will do the entire calculation and only the assignment will be 
done at runtime.


It is possible that the volatile keyword might cause the order of evaluation 
to be altered.


uint32_t * volatile ulpDMAAddress = (uint32_t *)0x4000;  // Hypothetical
    // register address. Note this is a volatile pointer and NOT a
    // pointer to volatile data.

uint32_t ulMyValue;

ulMyValue = *ulpDMAAddress++ + *ulpDMAAddress++;

My mind is getting numb just looking at that code.  Suffice it to say that 
using multiple prefix/postfix operations between two sequence points is 
heavily deprecated because the actual results are implementation defined and 
may even be different depending upon what other math surrounds it.


Another implementation-specific feature of C is the order of bits in bit 
fields.  They can be assigned from most significant to least significant or 
vice versa.  It is totally up to the compiler.


As Allen Holub says, C and C++ give the programmer "Enough Rope To Shoot 
Yourself in the Foot" (in his book of the same name).





On 8/15/2024 6:45 PM, Fred Cisin via cctalk wrote:

It is not the hardware that is at fault.
If anybody else is to blame, it is the compiler.


On Thu, 15 Aug 2024, Paul Koning wrote:
More likely the language designers, assuming the compiler doesn't have a 
standards violation in its code.  In the case of C, the type promotion 
rules that were just explained are rather bizarre and surprising.  Other 
languages do it differently, with perhaps fewer surprises.  Some define 
it very carefully (ALGOL 68 comes to mind), some not so much.


C very explicitly leaves some things undefined, supposedly to work with more 
machines, and Kernighan & Ritchie say that it is the responsibility of the 
programmer to create unambiguous code.
For example, evaluation of expressions in the lvalue might be done before 
OR after evaluation of expressions in the rvalue.


Some other languages are much stricter on types, etc. and have fewer 
ambiguities.


[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Fred Cisin via cctalk

On Thu, 15 Aug 2024, Mike Katz wrote:
C has specific specifications for what is promoted when and how. They are not 
ambiguous, just not known by many.
I worked for a C compiler company so I may be a bit more familiar with the 
actual C specs and how the compiler works.
However, I totally agree with you.  I heavily typecast and parenthesize my 
code to avoid any possible ambiguity.  Sometimes for the compiler and 
sometimes for someone else reading my code.


I will readily concede that ANSI C has fewer problems with ambiguous code 
than the K&R C that I learned.


But, for example, in:
X = foo() + bar();

has it been defined which order the functions foo() and bar() 
are evaluated in?  Consider the possibility that either or both alter 
a variable that the other function also uses.
(Stupidly simple example: one function increments a variable, and the other 
one doubles it.)


As another example of code that I would avoid,
int x=1,y=1;
x = x++ + x++;
y = ++y + ++y;
Do these give 2, 3, 4, or 5?
It is heavily dependent on exactly when the increments get done.

But thorough, careful typecasting, use of intermediate variables, etc. can 
eliminate all such problems.

'course "optimizing compilers" can (but shouldn't) alter your code.

If you don't explicitly specify exactly what you want, "C gives you enough 
rope to shoot yourself in the foot" (as Holub titled one of his books)



But, I've always loved how easily C will get out of the way when you want 
to get closer to the hardware.


--
Grumpy Ol' Fred ci...@xenosoft.com



[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Will Cooke via cctalk



> On 08/15/2024 6:10 PM EDT Mike Katz via cctalk  wrote:
>

I'm pretty certain you are wrong about the byte case below.  The C standard 
says something about no math will be done smaller than a short.  I don't have 
it handy so can't quote exactly.
But what that means is before the two bytes are added, they are promoted to 
short / uint16_t and then added.


> int foo( void )
> {
> uint32_t Long1 = 10;
> uint32_t Long2 = 20;
> uint16_t Short1 = 10;
> uint16_t Short2 = 20;
> uint8_t Byte1 = 10;
> uint8_t Byte2 = 20;
> //
...
> // Everything to the right of the equals will not be promoted at
> all, the math will be performed and the result will be promoted to a
> uint16 when assigned.
> //
> Short1 = Byte1 + Byte2;
>

In this program segment:

uint8_t a = 255;
uint8_t b = 255;
uint16_t c = 0;
c = a + b;
printf("c: %d \n", c);

it will print 510 instead of the 254 that would result if it were added as 
bytes.

Will


Grownups never understand anything by themselves and it is tiresome for 
children to be always and forever explaining things to them,

Antoine de Saint-Exupery in The Little Prince


[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Mike Katz via cctalk

Mr. Grumpy :)

C has specific specifications for what is promoted when and how. They 
are not ambiguous, just not known by many.


I worked for a C compiler company so I may be a bit more familiar with 
the actual C specs and how the compiler works.


However, I totally agree with you.  I heavily typecast and parenthesize 
my code to avoid any possible ambiguity.  Sometimes for the compiler and 
sometimes for someone else reading my code.


   Mike

On 8/15/2024 6:09 PM, Fred Cisin via cctalk wrote:

On Thu, 15 Aug 2024, Paul Koning via cctalk wrote:

I don't know about the VAX, but my gripe is the x86 and the 68000 don't
automatically promote smaller data types to larger ones. What little
programming I have done was in C, and I never cared about that detail.
Now I can see why it is hard to generate good code in C when all the
CPUs are brain-dead in that aspect.


It is not the hardware that is at fault.
If anybody else is to blame, it is the compiler.

int8 A = -1;
uint8 B = 255;
/* Those have the same bit pattern! */
int16 X;
int16 Y;
X = A;
Y = B;
will X and Y have bit patterns of 1111111111111111, or 
0000000011111111?


If you expect them to be "promoted", you are giving ambiguous 
instructions to the compiler.

The CPU isn't ever going to know.

THAT is why explicit typecasting is the way to go.

--
Grumpy Ol' Fred ci...@xenosoft.com




[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Fred Cisin via cctalk

It is not the hardware that is at fault.
If anybody else is to blame, it is the compiler.


On Thu, 15 Aug 2024, Paul Koning wrote:

More likely the language designers, assuming the compiler doesn't have a 
standards violation in its code.  In the case of C, the type promotion rules 
that were just explained are rather bizarre and surprising.  Other languages do 
it differently, with perhaps fewer surprises.  Some define it very carefully 
(ALGOL 68 comes to mind), some not so much.


C very explicitly leaves some thing undefined, supposedly to work with 
more machines, and Kernighan & Ritchie say that it is the responsibility 
of the programmer to create unambiguous code.
for example, evaluation of expressions in the lvalue might be done before 
OR after evaluation of expressions in th rvalue


Some other languages are much stricter on types, etc. and have fewer 
ambiguities.


[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Paul Koning via cctalk



> On Aug 15, 2024, at 7:09 PM, Fred Cisin via cctalk  
> wrote:
> 
> On Thu, 15 Aug 2024, Paul Koning via cctalk wrote:
 I don't know about the VAX, but my gripe is the x86 and the 68000 don't
 automatically promote smaller data types to larger ones. What little
 programming I have done was in C, and I never cared about that detail.
 Now I can see why it is hard to generate good code in C when all the
 CPUs are brain-dead in that aspect.
> 
> It is not the hardware that is at fault.
> If anybody else is to blame, it is the compiler.

More likely the language designers, assuming the compiler doesn't have a 
standards violation in its code.  In the case of C, the type promotion rules 
that were just explained are rather bizarre and surprising.  Other languages do 
it differently, with perhaps fewer surprises.  Some define it very carefully 
(ALGOL 68 comes to mind), some not so much.

paul




[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Mike Katz via cctalk




On 8/15/2024 3:41 PM, Sean Conner via cctalk wrote:

It was thus said that the Great ben via cctalk once stated:

I don't know about the VAX, but my gripe is the x86 and the 68000 don't
automatically promote smaller data types to larger ones. What little
programming I have done was in C, and I never cared about that detail.
Now I can see why it is hard to generate good code in C when all the
CPUs are brain-dead in that aspect.

char *foo, long bar;
... foobar = *foo + bar
  is r1 = foo
  r3 = * r1
  r2 = bar
  sex byte r3
  sex word r3
  r4 = r3 + r2
  foobar = r3
  what I want is
  bar = * foo + bar
nice easy coding.

   What CPUs did it correctly?  And how did they handle signed vs. unsigned
promotion?

unsigned char *ufoo;
unsigned long  ubar;

ufoobar = *ufoo + ubar;  //  *ufoo will be promoted to an unsigned 
long, added to ubar and the result stored in ufoobar without any promotion or 
demotion (assuming ufoobar is an unsigned long)

signed char *foo;
signed long  bar;

foobar = *foo + bar;  //  *foo will be promoted to a long, added to bar 
and the result stored in foobar without any promotion or demotion (assuming 
foobar is a long)

   -spc




[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Paul Koning via cctalk



> On Aug 15, 2024, at 1:54 PM, John via cctalk  wrote:
> 
> ...
> That said - and I have no idea whether this actually influenced
> anyone's decision for any system anywhere ever - one hard advantage of
> little-endian representation is that, if your CPU does arithmetic in
> serial fashion, you don't have to "walk backwards" to do it in the
> correct sequence.

It certainly did.  A storage startup I worked on had all its code targeted for 
a little endian machine, and when it came time to consider moving to other 
chips the availability of little endian mode was a major point.  We did briefly 
consider one big endian only chip, fortunately elected against it (PA Semi, 
which was acquired by Apple before they could ship their product).  So we 
stayed with MIPS, and I already mentioned some of the complications even with 
supposedly little endian capable devices.

paul



[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Fred Cisin via cctalk

On Thu, 15 Aug 2024, Paul Koning via cctalk wrote:

I don't know about the VAX,but my gripe is the x86 and the 68000 don't
automaticaly promote smaller data types to larger ones. What little
programming I have done was in C never cared about that detail.
Now I can see way it is hard to generate good code in C when all the
CPU's are brain dead in that aspect.


It is not the hardware that is at fault.
If anybody else is to blame, it is the compiler.

int8 A = -1;
uint8 B = 255;
/* Those have the same bit pattern! */
int16 X;
int16 Y;
X = A;
Y = B;
will X and Y have bit patterns of 1111111111111111, or 0000000011111111?

If you expect them to be "promoted", you are giving ambiguous instructions 
to the compiler.

The CPU isn't ever going to know.

THAT is why explicit typecasting is the way to go.

--
Grumpy Ol' Fred ci...@xenosoft.com


[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Mike Katz via cctalk
When signed and unsigned values (variables or literals) of the same size 
are combined, the compiler assumes that all of the values are signed.  
This can yield a problem if the unsigned integer is large enough that 
the most significant bit is set, because this bit indicates sign.


for example:

uint8_t bValue = 128;
int8_t  bValue1 = -128;

Both have the same value in memory (0x80).


On 8/15/2024 3:52 PM, Paul Koning via cctalk wrote:



On Aug 15, 2024, at 4:41 PM, Sean Conner via cctalk  
wrote:

It was thus said that the Great ben via cctalk once stated:

I don't know about the VAX, but my gripe is the x86 and the 68000 don't
automatically promote smaller data types to larger ones. What little
programming I have done was in C, and I never cared about that detail.
Now I can see why it is hard to generate good code in C when all the
CPUs are brain-dead in that aspect.

char *foo, long bar;
... foobar = *foo + bar
is r1 = foo
r3 = * r1
r2 = bar
sex byte r3
sex word r3
r4 = r3 + r2
foobar = r3
what I want is
bar = * foo + bar
nice easy coding.

  What CPUs did it correctly?  And how did they handle signed vs. unsigned
promotion?

unsigned char *ufoo;
unsigned long  ubar;

ufoobar = *ufoo + ubar;

signed char *foo;
signed long  bar;

foobar = *foo + bar;

  -spc

Obviously, "correctly" is in the eye of the beholder.  You can do size 
extension, signed or unsigned, on any computer.  How complicated it is depends on the 
machine.

For example, on VAX there are instructions for signed as well as unsigned 
promotion (CVTxy and MOVZxy respectively).  On PDP11, MOVB into a register does 
sign extension; unsigned promotion requires two instructions but that's no big 
deal.  And of course, promotion to bigger types requires multiple instructions 
either way since you're now dealing with multiple registers.

Unsigned promotion on a CDC 6600 is one instruction; signed requires three.

paul




[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Mike Katz via cctalk
I'm afraid you might not understand how promotion works in C. Promotion 
has nothing to do with the word size of the machine it's running on.


Within the expression, all intermediate values and literals are promoted 
to the smallest type that can contain the largest value/literal in the 
expression.  As a general rule numeric literals (actual numbers typed 
in) are the default integer size of the machine (32 bits on most modern 
processors and compilers).  This can be changed with pragmas or command 
line options on most C compilers.


This may or may not be promoted or demoted to store the result in its 
final destination.


For example:

#include "stdint.h"

int foo( void )
{
    uint32_t Long1 = 10;
    uint32_t Long2 = 20;
    uint16_t Short1 = 10;
    uint16_t Short2 = 20;
    uint8_t Byte1 = 10;
    uint8_t Byte2 = 20;
    //
    //  Everything to the right of the equals is promoted to a uint32, 
the math will be performed and then the result will be truncated to a 
uint8_t when assigned.

    //
    //  This may also generate a compiler warning due to not 
typecasting the result on the right side of the equals.

    //
    Byte1 = Short1 + Long1;
    //
    //  Everything to the right of the equals will be promoted to a 
uint16, the math will be performed and then the result will be promoted 
to a uint32 when assigned

    //
    Long1 = Short1 + Byte1;
    //
    //  Everything to the right of the equals will not be promoted at 
all, the math will be performed and the result will be promoted to a 
uint16 when assigned.

    //
    Short1 = Byte1 + Byte2;

Generally numeric literals (actual numbers typed in) are the default 
integer size of the machine (32 bits on most modern processors and 
compilers).  This can be changed with pragmas or command line options on 
most C compilers.


I hope this clears things up?




On 8/15/2024 2:41 PM, ben via cctalk wrote:

On 2024-08-15 11:00 a.m., Paul Koning via cctalk wrote:

The short answer is "it's historic and manufacturers have done it in 
different ways".


You might read the original paper on the topic, "On holy wars and a 
plea for peace" by Danny Cohen (IEN-137, 1 april 1980): 
https://www.rfc-editor.org/ien/ien137.txt
Not reading the paper, I would say it is more a case of having short 
data types (little) and FORTRAN packing 4 characters in a word (big).


I don't know about the VAX, but my gripe is the x86 and the 68000 don't 
automatically promote smaller data types to larger ones. What little 
programming I have done was in C, and I never cared about that detail.
Now I can see why it is hard to generate good code in C when all the 
CPUs are brain-dead in that aspect.


char *foo, long bar;
... foobar = *foo + bar
 is r1 = foo
 r3 = * r1
 r2 = bar
 sex byte r3
 sex word r3
 r4 = r3 + r2
 foobar = r3
 what I want is
 bar = * foo + bar
nice easy coding.


And yes, different computers have used different ordering, not just 
characters-in-word ordering but bit position numbering. For example, 
very confusingly there are computers where the conventional numbering 
has the lowest bit number (0 or 1) assigned to the most significant 
bit.  The more common numbering of 0 for the LSB gives the property 
that setting bit n in a word produces the value 2^n, which is more 
convenient than, say, 2^(59-n).


Real computers are 2^36 from the 50's.
Big iron is the 60's. :)



paul








[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Paul Koning via cctalk



> On Aug 15, 2024, at 4:41 PM, Sean Conner via cctalk  
> wrote:
> 
> It was thus said that the Great ben via cctalk once stated:
>> 
>> I don't know about the VAX, but my gripe is the x86 and the 68000 don't 
>> automatically promote smaller data types to larger ones. What little 
>> programming I have done was in C, and I never cared about that detail.
>> Now I can see why it is hard to generate good code in C when all the 
>> CPUs are brain-dead in that aspect.
>> 
>> char *foo, long bar;
>> ... foobar = *foo + bar
>> is r1 = foo
>> r3 = * r1
>> r2 = bar
>> sex byte r3
>> sex word r3
>> r4 = r3 + r2
>> foobar = r3
>> what I want is
>> bar = * foo + bar
>> nice easy coding.
> 
>  What CPUs did it correctly?  And how did they handle signed vs. unsigned
> promotion?  
> 
>   unsigned char *ufoo;
>   unsigned long  ubar;
> 
>   ufoobar = *ufoo + ubar;
> 
>   signed char *foo;
>   signed long  bar;
> 
>   foobar = *foo + bar;
> 
>  -spc

Obviously, "correctly" is in the eye of the beholder.  You can do size 
extension, signed or unsigned, on any computer.  How complicated it is depends 
on the machine.

For example, on VAX there are instructions for signed as well as unsigned 
promotion (CVTxy and MOVZxy respectively).  On PDP11, MOVB into a register does 
sign extension; unsigned promotion requires two instructions but that's no big 
deal.  And of course, promotion to bigger types requires multiple instructions 
either way since you're now dealing with multiple registers.

Unsigned promotion on a CDC 6600 is one instruction; signed requires three.

paul

[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Sean Conner via cctalk
It was thus said that the Great ben via cctalk once stated:
> 
> I don't know about the VAX, but my gripe is the x86 and the 68000 don't 
> automatically promote smaller data types to larger ones. What little 
> programming I have done was in C, and I never cared about that detail.
> Now I can see why it is hard to generate good code in C when all the 
> CPUs are brain-dead in that aspect.
> 
> char *foo, long bar;
> ... foobar = *foo + bar
>  is r1 = foo
>  r3 = * r1
>  r2 = bar
>  sex byte r3
>  sex word r3
>  r4 = r3 + r2
>  foobar = r3
>  what I want is
>  bar = * foo + bar
> nice easy coding.

  What CPUs did it correctly?  And how did they handle signed vs. unsigned
promotion?  

unsigned char *ufoo;
unsigned long  ubar;

ufoobar = *ufoo + ubar;

signed char *foo;
signed long  bar;

foobar = *foo + bar;

  -spc


[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread ben via cctalk

On 2024-08-15 11:00 a.m., Paul Koning via cctalk wrote:


The short answer is "it's historic and manufacturers have done it in different 
ways".

You might read the original paper on the topic, "On holy wars and a plea for 
peace" by Danny Cohen (IEN-137, 1 april 1980): 
https://www.rfc-editor.org/ien/ien137.txt
Not reading the paper, I would say it is more a case of having short data 
types (little) and FORTRAN packing 4 characters in a word (big).


I don't know about the VAX, but my gripe is the x86 and the 68000 don't 
automatically promote smaller data types to larger ones. What little 
programming I have done was in C, and I never cared about that detail.
Now I can see why it is hard to generate good code in C when all the 
CPUs are brain-dead in that aspect.


char *foo; long bar;
... foobar = *foo + bar
 is r1 = foo
 r3 = * r1
 r2 = bar
 sex byte r3
 sex word r3
 r4 = r3 + r2
 foobar = r4
 what I want is
 bar = * foo + bar
nice easy coding.


> And yes, different computers have used different ordering, not just 
> characters-in-word ordering but bit position numbering.  For example, very 
> confusingly there are computers where the conventional numbering has the 
> lowest bit number (0 or 1) assigned to the most significant bit.  The more 
> common numbering of 0 for the LSB gives the property that setting bit n in 
> a word produces the value 2^n, which is more convenient than, say, 2^(59-n).


Real computers are 2^36 from the 50's.
Big iron is the 60's. :)



> paul






[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Paul Koning via cctalk



> On Aug 15, 2024, at 1:27 PM, Michael Thompson  
> wrote:
> 
> Danny Cohen, author of "On holy wars and a plea for peace", on the left and 
> me in the white shirt, taken in 2003.
> 
> MIPS CPUs can be configured by the hardware to run in either big-endian or 
> little-endian mode.

Indeed, though depending on the vendor, support for one of the modes may be 
marginal.

I remember evaluating the Raza (now Broadcom) XLR processor when it first came 
out.  Was told it supported little endian, which we needed.  Tried to configure 
the eval unit in little endian mode -- dead as a doornail.

Asked the rep.  Answer: "well, the *hardware* is designed to support it, but 
the power on boot configuration code is big endian only".  Oh.  Ended up 
spending a month or two converting fun stuff like DDR timing tuning loops to 
little endian.  It did eventually work, but no thanks to the people selling the 
device...
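That sort of conversion mostly reduces to sprinkling byte-swap helpers through the code; a generic sketch of such a helper (not the actual XLR boot code, which I have no copy of):

#include <stdint.h>

/* Swap the four bytes of a 32-bit value, e.g. when register layouts
   are documented for the other endianness. */
static inline uint32_t bswap32(uint32_t x)
{
    return  (x >> 24)
         | ((x >>  8) & 0x0000FF00u)
         | ((x <<  8) & 0x00FF0000u)
         |  (x << 24);
}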

paul




[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread John via cctalk
> From: Peter Ekstrom 
> 
> I am tinkering with some C-code where I am working on something that
> can process some microcode. The microcode is from a DG MV/1
> machine and while working on it, I noticed it is in little-endian.
> That's simple enough to work around but that had me wondering, why do
> we have big and little endianness? What is the benefit of storing the
> low-order byte first? Or is that simply just an arbitrary decision
> made by some hardware manufacturers?

Mostly because hardware support for dividing a word into smaller chunks
(and addressing them individually) was something manufacturers added at
different times, on their own initiative, and there was no agreed-upon
way to do it. And since there are two obvious ways to turn a sequence
of X Y-bit chunks into a word of X * Y bits and neither one is exactly
"wrong," it ended up being a crapshoot as to whether manufacturers
would do it the one way, or the other.

(...or do something demented instead, like the PDP-11's "middle-endian"
approach to 32-bit values...)

And most of the debate probably came down to matters of taste; big-
endian is how we write things on paper, so it seems "natural" to most
people, while little-endian means that byte offset matches place value
(i.e. byte 0's value is multiplied by (256 ^ 0) = 1, byte 1's value by
(256 ^ 1) = 256, etc.,) so it seems "natural" to math types.

That said - and I have no idea whether this actually influenced
anyone's decision for any system anywhere ever - one hard advantage of
little-endian representation is that, if your CPU does arithmetic in
serial fashion, you don't have to "walk backwards" to do it in the
correct sequence.
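Both rules are easy to state in C; a small sketch of the two obvious ways to build a 32-bit word from four bytes (the byte values are made up for the example):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint8_t b[4] = {0x12, 0x34, 0x56, 0x78}; /* bytes in memory order */

    /* Big-endian: the first byte is the most significant. */
    uint32_t big = ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16)
                 | ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];

    /* Little-endian: byte n is multiplied by 256^n. */
    uint32_t little = (uint32_t)b[0]        | ((uint32_t)b[1] << 8)
                    | ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);

    printf("%08lx %08lx\n", (unsigned long)big, (unsigned long)little);
    /* prints: 12345678 78563412 */
    return 0;
}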


[cctalk] Re: A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Paul Koning via cctalk



> On Aug 15, 2024, at 12:46 PM, Peter Ekstrom via cctalk 
>  wrote:
> 
> Hi to the group,
> 
> I am tinkering with some C-code where I am working on something that can
> process some microcode. The microcode is from a DG MV/1 machine and
> while working on it, I noticed it is in little-endian. That's simple enough
> to work around but that had me wondering, why do we have big and little
> endianness? What is the benefit of storing the low-order byte first? Or is
> that simply just an arbitrary decision made by some hardware manufacturers?
> 
> I am mostly just curious.
> 
> Thanks,
> Peter / KG4OKG

The short answer is "it's historic and manufacturers have done it in different 
ways".

You might read the original paper on the topic, "On holy wars and a plea for 
peace" by Danny Cohen (IEN-137, 1 april 1980): 
https://www.rfc-editor.org/ien/ien137.txt

And yes, different computers have used different ordering, not just 
characters-in-word ordering but bit position numbering.  For example, very 
confusingly there are computers where the conventional numbering has the lowest 
bit number (0 or 1) assigned to the most significant bit.  The more common 
numbering of 0 for the LSB gives the property that setting bit n in a word 
produces the value 2^n, which is more convenient than, say, 2^(59-n).
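In C terms, the two numbering conventions come out as follows (a small sketch; the 60-bit width just echoes the CDC-style example above):

#include <stdint.h>

/* LSB-is-bit-0 convention: setting bit n yields the value 2^n. */
static uint64_t set_bit(unsigned n)       { return (uint64_t)1 << n; }

/* MSB-is-bit-0 convention on a 60-bit word: "bit n" is worth 2^(59-n). */
static uint64_t set_bit_msb0(unsigned n)  { return (uint64_t)1 << (59 - n); }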

paul



[cctalk] A little off-topic but at least somewhat related: endianness

2024-08-15 Thread Peter Ekstrom via cctalk
Hi to the group,

I am tinkering with some C-code where I am working on something that can
process some microcode. The microcode is from a DG MV/1 machine and
while working on it, I noticed it is in little-endian. That's simple enough
to work around but that had me wondering, why do we have big and little
endianness? What is the benefit of storing the low-order byte first? Or is
that simply just an arbitrary decision made by some hardware manufacturers?

I am mostly just curious.

Thanks,
Peter / KG4OKG


[cctalk] Re: OFF TOPIC: Doctor Who

2024-04-24 Thread Chuck Guzis via cctalk
On 4/24/24 15:32, ben via cctalk wrote:
> https://www.youtube.com/watch?v=iJeu3LCo-6A
> Dr who ads for prime.

I think old Dr. Who shows are also on Pluto TV.

--Chuck (not a fan)




[cctalk] Re: OFF TOPIC: Doctor Who

2024-04-24 Thread ben via cctalk

https://www.youtube.com/watch?v=iJeu3LCo-6A
Dr who ads for prime.



[cctalk] OFF TOPIC: Doctor Who (was: Z80 vs other microprocessors of the time.

2024-04-24 Thread Fred Cisin via cctalk

On Wed, 24 Apr 2024, ben via cctalk wrote:

> This would be great, but I live on the other side of the pond
> and BBC anything is hard to find, let alone Micro's.
> Where is my "Dr. Who".
> Ben.


I was able, quite easily, to order DVDs from Amazon.co.uk.
That got me "Shada" (Doctor Who written by Douglas Adams), and the 2023 
specials, LONG before they were released in USA.


--
Grumpy Ol' Fred ci...@xenosoft.com


[cctalk] Re: Slightly off topic --Places to go in Huntsville

2022-08-06 Thread Doc Shipley via cctalk

On 8/5/2022 4:39 PM, Will Cooke via cctalk wrote:

> Next week I will be in the Huntsville, Al, USA area for an entire day
> with no commitments. Does anyone have recommendations on how to spend my
> day? I have been to the space and rocket museum several times. Any
> computer museums or displays, especially of space-related equipment? Any
> good surplus stores? All suggestions welcome.


Not knowing where you're from, it may be too familiar to be fun, but 
Huntsville is in some of the most beautiful country in the US.  It's at 
the foot of the Appalachian/Smokey/Blue Ridge complex, and driving 
northeast takes you out of the urban area PDQ.  Pack a lunch, fill the 
tank, go exploring.



Doc


[cctalk] Slightly off topic --Places to go in Huntsville

2022-08-05 Thread Will Cooke via cctalk
Next week I will be in the Huntsville, Al, USA area for an entire day with no 
commitments. Does anyone have recommendations on how to spend my day? I have 
been to the space and rocket museum several times. Any computer museums or 
displays, especially of space-related equipment? Any good surplus stores? All 
suggestions welcome.

Thanks,
Will

You don't understand anything until you learn it more than one way.
Marvin Minsky


Re: On compiling. (Was a way off topic subject)

2021-06-25 Thread Chuck Guzis via cctalk
On 6/25/21 3:31 AM, Kelly Fergason via cctalk wrote:


>> On Jun 25, 2021, at 4:54 AM, Gordon Henderson via cctalk 
>>  wrote:
>>
>> http://www.6502.org/source/interpreters/sweet16.htm#When_is_an_RTS_really_a_JSR_
>>
>> I initially used this "trick" in my own little bytecode VM but it's somewhat 
>> slower than some other methods, but as usual the trade-off is code-size vs. 
>> speed...

This "trick" can be performed on nearly any microprocessor with a stack
that keeps return addresses on said stack--and permits a program to push
data onto the stack.   Certainly x80 and x86 CPUs, where it isn't that
uncommon.

Interesting status returns can be implemented by adjusting the return
address on the stack in sort of a "reverse" computed goto; e.g.

sub_entry:
  add [stack top], status*jump instruction size
  return

...calling code...
   call sub_entry
   jmp  status_0
   jmp  status_1
   jmp  status_2
..etc.

Which saves the caller from having to perform multiple compares (or a
computed GOTO) on the status return.

On lower PIC microcontrollers, there is no way for a program to access
code space (i.e. Harvard architecture).  Static lookup tables represent
a concept requiring some thought.   Low PIC code memory uses 13 bit
words, while data memory uses 8.   Fortunately, there is an opcode,
RETLW,  that is "return from subroutine with 8 bit value in the W
register".   So one codes a table of RETLW xx instructions and performs
an indexed call into it.
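The C-level source for such a table is just a const array; whether a given PIC compiler actually lowers it to a run of RETLW instructions is toolchain-dependent, so take that part as an assumption:

#include <stdint.h>

/* Classic example: 7-segment patterns for the digits 0-9.  On a
   baseline PIC, a const table like this is the sort of thing that
   ends up as RETLW instructions reached by an indexed call. */
static const uint8_t seven_seg[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F
};

uint8_t digit_to_segments(uint8_t d)
{
    return seven_seg[d % 10];   /* the indexed "call" into the table */
}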

--Chuck





Re: On compiling. (Was a way off topic subject)

2021-06-25 Thread Kelly Fergason via cctalk



> On Jun 25, 2021, at 4:54 AM, Gordon Henderson via cctalk 
>  wrote:
> 
> On Wed, 23 Jun 2021, Van Snyder via cctalk wrote:
> 
>>> On Wed, 2021-06-23 at 13:36 -0400, Paul Koning via cctalk wrote:
>>> Typical FORTH implementations are neat in that respect, since they
>>> use a threaded code encoding that allows for fast and efficient
>>> switching between threaded code (subroutine calls) and straight
>>> machine code.
>> 
>> I have a vague recollection of a story about a FORTH processor that put
>> the addresses of the functions to be executed on the return-address
>> stack (68000?) and then executed a RETURN instruction.
> 
> I saw this on the 6502 in Woz's Sweet-16 interpreter.
> 
> see e.g.
> 
> http://www.6502.org/source/interpreters/sweet16.htm#When_is_an_RTS_really_a_JSR_
> 
> I initially used this "trick" in my own little bytecode VM but it's somewhat 
> slower than some other methods, but as usual the trade-off is code-size vs. 
> speed...
> 
> Gordon

yeah standard 6502 trick to keep a jump table.  
kelly



Re: On compiling. (Was a way off topic subject)

2021-06-25 Thread Gordon Henderson via cctalk

On Wed, 23 Jun 2021, Van Snyder via cctalk wrote:


> On Wed, 2021-06-23 at 13:36 -0400, Paul Koning via cctalk wrote:
>
>> Typical FORTH implementations are neat in that respect, since they
>> use a threaded code encoding that allows for fast and efficient
>> switching between threaded code (subroutine calls) and straight
>> machine code.
>
> I have a vague recollection of a story about a FORTH processor that put
> the addresses of the functions to be executed on the return-address
> stack (68000?) and then executed a RETURN instruction.


I saw this on the 6502 in Woz's Sweet-16 interpreter.

see e.g.

http://www.6502.org/source/interpreters/sweet16.htm#When_is_an_RTS_really_a_JSR_

I initially used this "trick" in my own little bytecode VM but it's 
somewhat slower than some other methods, but as usual the trade-off is 
code-size vs. speed...


Gordon


Re: On compiling. (Was a way off topic subject)

2021-06-24 Thread Paul Koning via cctalk



> On Jun 24, 2021, at 1:02 AM, ben via cctalk  wrote:
> 
> On 2021-06-23 6:48 p.m., Paul Koning via cctalk wrote:
>> Somewhat related to the point of compiling and executing mixed together is a 
>> very strange hack I saw in the Electrologica assembler for the X8 (the 
>> company issue one, not one of the various ones built at various labs for 
>> that machine).  It is essentially a "load and go" assembler, so the code is 
>> dropped into memory as it is assembled, with a list of places to be fixed up 
>> rather than the more typical two pass approach.  You can use a variation of 
>> the usual "start address" directive to tell the assembler to start executing 
>> at that address right now.  In other words, you can assemble some code, 
>> execute it, then go back to assembling the rest of the source text.  Cute.  
>> Suppose you want to do something too hard for macros; just assemble its 
>> input data, followed by some code to convert that into the form you want, 
>> then go back to assembling more code.  And that can start by backing up the 
>> assembly output pointer ("dot") so the conversion code doesn't actually take 
>> up space in the finished program.
>> It sure makes cross-assemblers hard, because you have to include an EL-X8 
>> simulator in the assembler... :-)
>>  paul
> But at least it's not a 386. Did any other computers have built-in ROM or 
> protected core used as ROM for 'standard' routines like I/O or floating point?
> Ben.

Sure.  A few examples:

The Electrologica X1 (from 1958) has what one might call the first BIOS, in 
core ROM; the standard version (by E.W. Dijkstra, see his Ph.D. thesis) 
contains basic I/O services, a rudimentary assembler, and some operator 
interface mechanisms.  Customers could order additional ROM, and several did to 
add run time library routines for their compilers to the ROM, things like 
floating point operations since the hardware did only integer arithmetic.

The Electrologica X8 (1964) had an I/O coprocessor called CHARON which the main 
machine talks to via semaphores and queues; it does the detailed control of the 
various peripherals.  It either came with ROM or with read/write core loaded at 
the factory, I'm not sure.  It wasn't customer-programmable.

The IBM 360/44 (early 1970s) with the very obscure Emulator option implements 
an emulation of the string and decimal instructions of the 360 instruction set 
in an emulator that lives in a separate memory, not addressable from the normal 
execution environment.  It's read/write core memory, loaded (if it ever gets 
messed up, which I never saw happen) from a binary card deck using the 
"Emulator IPL" console button.

I assume the Apollo Guidance Computer (1968 or thereabouts) is an example since 
it has a substantial core ROM, and also I believe loadable programs, but I 
don't know the details.

There probably are quite a lot more but those are a few I know of.  

paul



Re: On compiling. (Was a way off topic subject)

2021-06-23 Thread ben via cctalk

On 2021-06-23 6:48 p.m., Paul Koning via cctalk wrote:

> Somewhat related to the point of compiling and executing mixed together
> is a very strange hack I saw in the Electrologica assembler for the X8
> (the company issue one, not one of the various ones built at various
> labs for that machine).  It is essentially a "load and go" assembler, so
> the code is dropped into memory as it is assembled, with a list of
> places to be fixed up rather than the more typical two pass approach.
> You can use a variation of the usual "start address" directive to tell
> the assembler to start executing at that address right now.  In other
> words, you can assemble some code, execute it, then go back to
> assembling the rest of the source text.  Cute.  Suppose you want to do
> something too hard for macros; just assemble its input data, followed by
> some code to convert that into the form you want, then go back to
> assembling more code.  And that can start by backing up the assembly
> output pointer ("dot") so the conversion code doesn't actually take up
> space in the finished program.
>
> It sure makes cross-assemblers hard, because you have to include an
> EL-X8 simulator in the assembler... :-)
>
> paul

But at least it's not a 386. Did any other computers have built-in ROM or 
protected core used as ROM for 'standard' routines like I/O or floating 
point?

Ben.



Re: On compiling. (Was a way off topic subject)

2021-06-23 Thread Van Snyder via cctalk
On Wed, 2021-06-23 at 20:48 -0400, Paul Koning via cctalk wrote:
> In other words, you can assemble some code, execute it, then go back
> to assembling the rest of the source text.  Cute.  Suppose you want
> to do something too hard for macros; just assemble its input data,
> followed by some code to convert that into the form you want, then go
> back to assembling more code.

I proposed this for Fortran about twenty years ago, and for what became
Ada when it was just DoD-1 requirements in about 1976.



Re: On compiling. (Was a way off topic subject)

2021-06-23 Thread Paul Koning via cctalk
Somewhat related to the point of compiling and executing mixed together is a 
very strange hack I saw in the Electrologica assembler for the X8 (the company 
issue one, not one of the various ones built at various labs for that machine). 
 It is essentially a "load and go" assembler, so the code is dropped into 
memory as it is assembled, with a list of places to be fixed up rather than the 
more typical two pass approach.  You can use a variation of the usual "start 
address" directive to tell the assembler to start executing at that address 
right now.  In other words, you can assemble some code, execute it, then go 
back to assembling the rest of the source text.  Cute.  Suppose you want to do 
something too hard for macros; just assemble its input data, followed by some 
code to convert that into the form you want, then go back to assembling more 
code.  And that can start by backing up the assembly output pointer ("dot") so 
the conversion code doesn't actually take up space in the finished program.

It sure makes cross-assemblers hard, because you have to include an EL-X8 
simulator in the assembler... :-)

paul



Re: On compiling. (Was a way off topic subject)

2021-06-23 Thread Chuck Guzis via cctalk
On 6/23/21 2:18 PM, Paul Koning wrote:

> 
> I meant "reduce to machine language" (give or take threaded code or library 
> function calls).  It really doesn't seem to be any particular problem.  
> There's nothing about compilers that prevents them from being invoked in the 
> middle of an application.  (Come to think of it, isn't that what a "just in 
> time compiler" means?)
>

Yeah, come to think of it the "JIT compiler" idea did occur to me as
being the case.






Re: On compiling. (Was a way off topic subject)

2021-06-23 Thread dwight via cctalk
How you'd do such in Forth depends on the threading method. You have indirect 
threaded, direct threaded and call threaded. As you move to the right, they are 
faster and easier to add optimization to, but harder to deal with for some of 
the higher-level operations like CREATE DOES> (older Forth would be 
<BUILDS DOES>).
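As a rough sketch of the general idea, here is a tiny inner interpreter in C where each compiled word is an array of function pointers, closest in spirit to the call-threaded end of that spectrum (the word names and the stack are invented for the example):

#include <stdio.h>

/* A tiny call-threaded inner interpreter: the "compiled" word is just
   an array of C function pointers executed in sequence. */
typedef void (*word_fn)(void);

static int stack[16], sp;           /* minimal data stack */

static void lit5(void) { stack[sp++] = 5; }
static void lit7(void) { stack[sp++] = 7; }
static void plus(void) { sp--; stack[sp-1] += stack[sp]; }
static void dot(void)  { printf("%d\n", stack[--sp]); }

int main(void)
{
    /* Equivalent of the colon definition  : demo 5 7 + . ;  */
    word_fn demo[] = { lit5, lit7, plus, dot, NULL };
    for (word_fn *ip = demo; *ip; ip++)
        (*ip)();                     /* NEXT: fetch and execute */
    return 0;                        /* prints: 12 */
}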
Dwight



From: cctalk  on behalf of Van Snyder via cctalk 

Sent: Wednesday, June 23, 2021 11:42 AM
To: cctalk@classiccmp.org 
Subject: Re: On compiling. (Was a way off topic subject)

On Wed, 2021-06-23 at 13:36 -0400, Paul Koning via cctalk wrote:
> Typical FORTH implementations are neat in that respect, since they
> use a threaded code encoding that allows for fast and efficient
> switching between threaded code (subroutine calls) and straight
> machine code.

I have a vague recollection of a story about a FORTH processor that put
the addresses of the functions to be executed on the return-address
stack (68000?) and then executed a RETURN instruction.



Re: On compiling. (Was a way off topic subject)

2021-06-23 Thread Paul Koning via cctalk



> On Jun 23, 2021, at 5:02 PM, Chuck Guzis  wrote:
> 
> On 6/23/21 1:14 PM, Paul Koning wrote:
> 
>> I don't remember the details at this point, but I assume the "execute TECO 
>> macro" operation in the Stevens PDP-10 TECO compiler is done in that way.  
>> And of course these could keep the compiled code around to reuse if the 
>> source string hasn't changed.  A modern example of this technique is the 
>> regex library in Python, which lets you compile a regex string into a 
>> compiled regex object for later use, or lets you perform operations using 
>> the regex string directly.  The latter form caches a string -> compiled 
>> regex cache so doing this in a loop still performs reasonably well.
> 
> Could be the case of "what does "compile" mean?"  If the meaning is
> "reduce to machine language" maybe not.   Otherwise, if the meaning is
> "interpret", then maybe so.
> 
> Consider this paragraph of the tutorial at
> http://www.snobol4.org/docs/burks/tutorial/ch7.htm
> 
> 7.7 RUN-TIME COMPILATION
> 
> The two functions described below are among the most esoteric features,
> not just of SNOBOL4, but of all programming languages in existence.
> While your program is executing, the entire SNOBOL4 compiler is just a
> function call away.
> ---
> 
> So maybe not rendering into machine code, but something else.
> 
> --Chuck

I meant "reduce to machine language" (give or take threaded code or library 
function calls).  It really doesn't seem to be any particular problem.  There's 
nothing about compilers that prevents them from being invoked in the middle of 
an application.  (Come to think of it, isn't that what a "just in time 
compiler" means?)

paul



Re: On compiling. (Was a way off topic subject)

2021-06-23 Thread Peter Corlett via cctalk
On Wed, Jun 23, 2021 at 11:42:22AM -0700, Van Snyder via cctalk wrote:
[...]
> I have a vague recollection of a story about a FORTH processor that put
> the addresses of the functions to be executed on the return-address stack
> (68000?) and then executed a RETURN instruction.

I was initially going to say that doesn't sound right because m68k's JMP
instruction supports all useful EA modes and a PEA/RTS combination takes two
extra bytes and is slower than a plain JMP. But pushing *many* return
addresses is more plausible because each function will then magically call
each other in turn. I'm still not entirely convinced it'd be enough of a win
(if any) over a conventional run of JSR instructions. Perhaps it actually
misused RTM, which I never quite understood because Motorola's documentation
on modules is rather opaque and it's only available on the 68020 onwards.

This wheeze works on x86 too--and of course most other CPUs--but it can make
mincemeat of performance on (some) modern CPUs because caches assume that
CALL and RET are paired.

ROP (https://en.wikipedia.org/wiki/Return-oriented_programming) is an
interesting application of this technique, usually for nefarious purposes.



Re: On compiling. (Was a way off topic subject)

2021-06-23 Thread Chuck Guzis via cctalk
On 6/23/21 1:14 PM, Paul Koning wrote:

> I don't remember the details at this point, but I assume the "execute TECO 
> macro" operation in the Stevens PDP-10 TECO compiler is done in that way.  
> And of course these could keep the compiled code around to reuse if the 
> source string hasn't changed.  A modern example of this technique is the 
> regex library in Python, which lets you compile a regex string into a 
> compiled regex object for later use, or lets you perform operations using the 
> regex string directly.  The latter form caches a string -> compiled regex 
> cache so doing this in a loop still performs reasonably well.

Could be the case of "what does "compile" mean?"  If the meaning is
"reduce to machine language" maybe not.   Otherwise, if the meaning is
"interpret", then maybe so.

Consider this paragraph of the tutorial at
http://www.snobol4.org/docs/burks/tutorial/ch7.htm

7.7 RUN-TIME COMPILATION

 The two functions described below are among the most esoteric features,
not just of SNOBOL4, but of all programming languages in existence.
While your program is executing, the entire SNOBOL4 compiler is just a
function call away.
---

So maybe not rendering into machine code, but something else.

--Chuck


Re: On compiling. (Was a way off topic subject)

2021-06-23 Thread Paul Koning via cctalk



> On Jun 23, 2021, at 2:44 PM, Chuck Guzis via cctalk  
> wrote:
> 
> There are the languages that are otherwise nearly impossible to compile.
> 
> Consider SNOBOL4 (although there is a compiled version called SPITBOL,
> but without several hard-to-implement features).  One can construct
> statements at run time and execute them. A bit unusual back then, but
> not so much today.

That just means compiling it at the time the constructed statement is submitted 
for execution, then executing the generated code.  No problem so long as the 
compiler is available at run time.  PDP-10 TECO did that (it has the same 
feature, executing a string buffer full of commands).  So does Python, and I 
suspect it solves it the same way.

paul



Re: On compiling. (Was a way off topic subject)

2021-06-23 Thread Chuck Guzis via cctalk
There are the languages that are otherwise nearly impossible to compile.

Consider SNOBOL4 (although there is a compiled version called SPITBOL,
but without several hard-to-implement features).  One can construct
statements at run time and execute them. A bit unusual back then, but
not so much today.

In a way, I'm a bit surprised that no version of BASIC (in my
experience) ever implemented this.

--Chuck


Re: On compiling. (Was a way off topic subject)

2021-06-23 Thread Van Snyder via cctalk
On Wed, 2021-06-23 at 13:36 -0400, Paul Koning via cctalk wrote:
> Typical FORTH implementations are neat in that respect, since they
> use a threaded code encoding that allows for fast and efficient
> switching between threaded code (subroutine calls) and straight
> machine code.

I have a vague recollection of a story about a FORTH processor that put
the addresses of the functions to be executed on the return-address
stack (68000?) and then executed a RETURN instruction.



Re: On compiling. (Was a way off topic subject)

2021-06-23 Thread Paul Koning via cctalk



> On Jun 23, 2021, at 1:22 PM, Stan Sieler via cctalk  
> wrote:
> 
> Paul K got it right:
> "Any language can be interpreted or compiled.  For some languages, like
> LISP and TECO, interpreting is a rather natural implementation technique,
> while for others (C, ALGOL) compilation is the obvious answer.  But either
> is possible."
> 
> A few quick notes...
> ...In some cases, we emulate a 16-bit wide CISC architecture (e.g., if you use
> the SPL construct "ASSEMBLE (...)", we compile it...into PA-RISC code
> emulating the old architecture).  It's still in use today, and can now emit
> either PA-RISC code or C source code (for a form of cross-compiling).

That's a bit like the machine code translation pioneered by DEC (MIPS to ALPHA) 
and subsequently used by Apple ("Rosetta") for PowerPC to Intel and now Intel 
to ARM (M1).  In a sense, those translators are compilers like all others, 
except for the rather unusual "source code" they accept as input.  Similar but 
somewhat different: the DEC Alpha assembler was a compiler front end, connected 
to the code generator shared with the (conventional) Alpha compilers.  It was 
rather amusing to say "macro/optimize" and see what the optimizer would do to 
your assembly code...

> What HP missed, and many people miss, is that any language can be
> compiled.  The main question one might ask is the degree of closeness to
> machine code that's emitted :)

I think it was Dijkstra who observed, in the context of the original EL-X1 
compiler, that you're generating code for a "machine" of your choosing.  When 
you see a runtime library call, it makes sense to think of that as a machine 
operation of a hypothetical machine different from the physical target 
hardware.  And function calls can be encoded as subroutine jump instructions, 
or in what is often more compact and nearly as fast, threaded code or P-code 
words.  Typical FORTH implementations are neat in that respect, since they use 
a threaded code encoding that allows for fast and efficient switching between 
threaded code (subroutine calls) and straight machine code.

On compiling. (Was a way off topic subject)

2021-06-23 Thread Stan Sieler via cctalk
Paul K got it right:
"Any language can be interpreted or compiled.  For some languages, like
LISP and TECO, interpreting is a rather natural implementation technique,
while for others (C, ALGOL) compilation is the obvious answer.  But either
is possible."

A few quick notes...

Back around 1973, I wrote a compiler for InterLISP on the Burroughs B6700,
with the target code being  a new P-code invented just for LISP (by, I
think, Bill Gord, based on Peter Deutsch and Ken Bowles P-code work).
Yeah, some parts of the P-code machine had to invoke the interpreter, but
that's philosophically no different than the next note...

Around 1977/1978,  Hewlett-Packard released the source code for their COBOL
compiler for the HP 3000.  My friend looked at the source and said: every
statement compiles into a bunch of subroutine calls!
So, technically it was a compiler.  But, essentially no machine code was
emitted :)

In 1984, HP announced their PA-RISC systems (HP 3000 and HP 9000), and that
their ALGOL-like language, SPL, used by them and customers on the HP 3000,
would not be ported to PA-RISC (because "it wasn't possible").
We looked at it and said: we can.
And, we did (without the "subroutine call" mechanism :)
In some cases, we emulate a 16-bit wide CISC architecture (e.g., if you use
the SPL construct "ASSEMBLE (...)", we compile it...into PA-RISC code
emulating the old architecture).  It's still in use today, and can now emit
either PA-RISC code or C source code (for a form of cross-compiling).

What HP missed, and many people miss, is that any language can be
compiled.  The main question one might ask is the degree of closeness to
machine code that's emitted :)

Stan


Re: Way off topic: posting to the list using default Samsung Android Mail Client

2020-11-11 Thread Liam Proven via cctalk
On Tue, 10 Nov 2020 at 19:27, Angel M Alganza via cctalk
 wrote:

> Most of them, yes.  Then there is K-9 mail for Android,
> which almost makes me not miss Mutt when using the phone.

Which is what I proposed in the first reply, complete with links.

-- 
Liam Proven – Profile: https://about.me/liamproven
Email: lpro...@cix.co.uk – gMail/gTalk/gHangouts: lpro...@gmail.com
Twitter/Facebook/LinkedIn/Flickr: lproven – Skype: liamproven
UK: +44 7939-087884 – ČR (+ WhatsApp/Telegram/Signal): +420 702 829 053


Re: Way off topic: posting to the list using default Samsung Android Mail Client

2020-11-11 Thread Ali via cctalk
On November 11, 2020 8:42:09 AM PST, Todd Goodman via cctalk 
 wrote:
>On 11/11/2020 11:23 AM, Ali via cctalk wrote:
>>> If you want to write/reply to old-style plain-text email from a
>>> fondleslab, then use K9Mail. It is the only mobile client I know of
>>> that can handle bottom-posting, trimming quotes etc.
>>
>> Well K9 is getting a number of recs here so I will take a second look
>> at it. I looked at it initially but then saw it hadn't been updated
>> since 2018. However, it looks like there is active work going on and a
>> new version is slated for release soon (BETAs are available for
>> evaluation).
>>
>> -Ali
>
>FWIW, I used to use K9 mail and liked it but it was crashing with a 
>large number of folders and emails in folders.
>
>I switched to Blue mail and it's worked well
>
>Todd

OK. So as far as top posting goes, it is a bit confusing. When you reply, and 
are composing your message, the original message is shown below the reply area. 
However, when the reply is sent, the original message is on top of the reply.

In any case this is not the behavior I wanted. I would like to see the message 
quoted on top and then be able to inline-edit the quoted text to reply to a 
specific portion of the email. Is there an option to set this? 

I will try Blue Mail next. Thanks.

-Ali
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


RE: Way off topic: posting to the list using default Samsung Android Mail Client

2020-11-11 Thread Ali via cctalk
> 
> FWIW, I used to use K9 mail and liked it but it was crashing with a
> large number of folders and emails in folders.
> 
> I switched to Blue mail and it's worked well


Funny you say this; I just finished setting up K9 for my CCtalk email as a test 
case. Your message was the first one to arrive; I hit reply-all and the program 
crashed on me. A second attempt allowed me to reply, but it quoted the original 
mail below my reply even though I have the option set for the quote to be on 
top.

So far I am not impressed, but I am using the latest BETA, so I am going to 
switch to the last stable release from 2018 and see if it works any better.

-Ali



Re: Way off topic: posting to the list using default Samsung Android Mail Client

2020-11-11 Thread Todd Goodman via cctalk

On 11/11/2020 11:23 AM, Ali via cctalk wrote:

>> If you want to write/reply to old-style plain-text email from a
>> fondleslab, then use K9Mail. It is the only mobile client I know of
>> that can handle bottom-posting, trimming quotes etc.
>
> Well K9 is getting a number of recs here so I will take a second look at
> it. I looked at it initially but then saw it hadn't been updated since
> 2018. However, it looks like there is active work going on and a new
> version is slated for release soon (BETAs are available for evaluation).
>
> -Ali


FWIW, I used to use K9 mail and liked it but it was crashing with a 
large number of folders and emails in folders.


I switched to Blue mail and it's worked well

Todd



RE: Way off topic: posting to the list using default Samsung Android Mail Client

2020-11-11 Thread Ali via cctalk


> If you want to write/reply to old-style plain-text email from a
> fondleslab, then use K9Mail. It is the only mobile client I know of
> that can handle bottom-posting, trimming quotes etc.


Well K9 is getting a number of recs here so I will take a second look at it. I 
looked at it initially but then saw it hadn't been updated since 2018. However, 
it looks like there is active work going on and a new version is slated for 
release soon (BETAs are available for evaluation).

-Ali



Re: Way off topic: posting to the list using default Samsung Android Mail Client

2020-11-10 Thread Angel M Alganza via cctalk
Hello,

On 11/10/20 3:45 AM, Liam Proven via cctalk wrote:

> Proper old-fashioned internet-standard email
> is totally unknown to the authors of modern email clients,
> such as for phones etc.

Most of them, yes.  Then there is K-9 mail for Android,
which almost makes me not miss Mutt when using the phone.

-- 
Ángel
O< http://www.asciiribbon.org/ campaign


Re: Way off topic: posting to the list using default Samsung Android Mail Client

2020-11-10 Thread Jason Howe via cctalk

On 11/10/20 3:45 AM, Liam Proven via cctalk wrote:



> Proper old-fashioned internet-standard email is totally unknown to the
> authors of modern email clients, such as for phones etc.


Hell, even Gmail borked the display of plain text emails a while back.

I started getting questions like, "What happened to the formatting of 
these auto-emails?"  Gmail.  Gmail happened.


--Jason


Re: Way off topic: posting to the list using default Samsung Android Mail Client

2020-11-10 Thread Liam Proven via cctalk
On Tue, 10 Nov 2020 at 11:31, Dave Wade G4UGM via cctalk
 wrote:
>
> That is like asking how do you fix Windows/10 MAIL app. It’s the default, it 
> sends and receives mail. If you want something that works better and gives 
> you control then you switch to a supported app.
> There is also Outlook and a GMAIL app for Samsung.

What Dave said.

Proper old-fashioned internet-standard email is totally unknown to the
authors of modern email clients, such as for phones etc.

No, you can't fix it.

If you want to write/reply to old-style plain-text email from a
fondleslab, then use K9Mail. It is the only mobile client I know of
that can handle bottom-posting, trimming quotes etc.

You can do it by hand with a lot of work in the Gmail client, but it
means manual selection and trimming etc. I have not found any way to
force plain text on mobile.

-- 
Liam Proven – Profile: https://about.me/liamproven
Email: lpro...@cix.co.uk – gMail/gTalk/gHangouts: lpro...@gmail.com
Twitter/Facebook/LinkedIn/Flickr: lproven – Skype: liamproven
UK: +44 7939-087884 – ČR (+ WhatsApp/Telegram/Signal): +420 702 829 053


RE: Way off topic: posting to the list using default Samsung Android Mail Client

2020-11-10 Thread Dave Wade G4UGM via cctalk
> -Original Message-
> From: cctalk  On Behalf Of Ali via cctalk
> Sent: 10 November 2020 00:28
> To: 'Liam Proven' ; 'General Discussion: On-Topic and
> Off-Topic Posts' 
> Subject: RE: Way off topic: posting to the list using default Samsung Android
> Mail Client
> 
> > > Any
> > > ideas/suggestions? TIA!
> >
> > https://k9mail.app/
> >
> > https://play.google.com/store/apps/details?id=com.fsck.k9&hl=en&gl=US
> >
> 
> I should have been more clear: any ideas on how I can fix the default email
> client (as it works very well for me aside from this one issue)? :D
> 

That is like asking how do you fix Windows/10 MAIL app. It’s the default, it 
sends and receives mail. If you want something that works better and gives you 
control then you switch to a supported app.
There is also Outlook and a GMAIL app for Samsung.


> Thanks.
> 
> -Ali


Dave



RE: Way off topic: posting to the list using default Samsung Android Mail Client

2020-11-09 Thread Ali via cctalk
Fred,

> A WILD guess as to PART of what's causing it, . . .
> It may be defaulting to HTML.
> Is there a setting for HTML/plain-text?  (if so, it might still not
> process plain-text properly; many "developers" consider it to be
> beneath
> them to include real plain-text support)

It does not.  I think it looks at the MIME type of the message to decide what
to do. It may have to do with when I try to top post. I.e., if I try to reply
under the quoted original message then this seems to happen. However, if I
type above the quoted message it does not happen.

> Is the mail client sending directly, or is it relaying it through
> something else?
> 

It is sending directly through my SMTP server.

-Ali



Re: Way off topic: posting to the list using default Samsung Android Mail Client

2020-11-09 Thread Fred Cisin via cctalk

On Mon, 9 Nov 2020, Ali via cctalk wrote:

> I am wondering if anyone else has tried using an Android Phone (a Note 10 in
> my case) with the default Samsung email client to post to this list?
> Whenever I post, even though the message is correctly formatted on my
> device, all the CR/LF are removed from my messages. See below for an
> example:


A WILD guess as to PART of what's causing it, . . . 
It may be defaulting to HTML.
Is there a setting for HTML/plain-text?  (if so, it might still not 
process plain-text properly; many "developers" consider it to be beneath 
them to include real plain-text support)


Many/most html processors ignore CRLF, considering that to be merely 
composing white-space, not part of the intended result.


When I was first trying to create raw HTML for my crude websites, I found 
that whatever whitespace I put into my raw HTML was ignored, and I had 
to force breaks, extra space, tabs, etc.


Try putting in <br> (less than, b r, greater than).
See whether it gives us that literally, or puts in a break.

Is the mail client sending directly, or is it relaying it through 
something else?





RE: Way off topic: posting to the list using default Samsung Android Mail Client

2020-11-09 Thread Ali via cctalk
> > Any
> > ideas/suggestions? TIA!
> 
> https://k9mail.app/
> 
> https://play.google.com/store/apps/details?id=com.fsck.k9&hl=en&gl=US
> 

I should have been more clear: any ideas on how I can fix the default email 
client (as it works very well for me aside from this one issue)? :D

Thanks.

-Ali



Re: Way off topic: posting to the list using default Samsung Android Mail Client

2020-11-09 Thread Liam Proven via cctalk
On Tue, 10 Nov 2020 at 00:44, Ali via cctalk  wrote:

> Any
> ideas/suggestions? TIA!

https://k9mail.app/

https://play.google.com/store/apps/details?id=com.fsck.k9&hl=en&gl=US

-- 
Liam Proven – Profile: https://about.me/liamproven
Email: lpro...@cix.co.uk – gMail/gTalk/gHangouts: lpro...@gmail.com
Twitter/Facebook/LinkedIn/Flickr: lproven – Skype: liamproven
UK: +44 7939-087884 – ČR (+ WhatsApp/Telegram/Signal): +420 702 829 053


Way off topic: posting to the list using default Samsung Android Mail Client

2020-11-09 Thread Ali via cctalk
I am wondering if anyone else has tried using an Android Phone (a Note 10 in
my case) with the default Samsung email client to post to this list?
Whenever I post, even though the message is correctly formatted on my
device, all the CR/LF are removed from my messages. See below for an
example:

---
> Can it still be registered?>Is the Author find able? Do he still have 8"
floppies? > *** Will it NOT be lost in the mail with COVID 19 ***A deep
googlefu might find the author... just saying...;)
---


However, when I sent it, it looked like this:


> Can it still be registered?
>Is the Author find able? Do he still have 8" floppies? 
> *** Will it NOT be lost in the mail with COVID 19 ***

A deep googlefu might find the author... just saying...;)


Sending from trusty old Outlook 2007 on Win 7 works fine with the list and
emailing people back directly from the phone seems to work fine as well. It
is only when I am replying to the list that the issue occurs. Any
ideas/suggestions? TIA!

-Ali






Amiga Roots, TRIPOS - Off Topic, was Re: Exploring early GUIs

2020-09-22 Thread null via cctalk
Forking this thread as we are now way off the original and very cogent topic, 
which I would like to see continued. (Very valid to ask about good emulations 
of early GUI systems like Apollo, LispMs, PERQ, Xerox D* etc)

Peter's mention of TRIPOS (which was used on a Sage IV for Amiga Lorraine 
bring-up) has me renewing my ask:

If anyone has media for TRIPOS for the Sage/Stride systems please reach out.

> On Sep 22, 2020, at 03:02, Peter Corlett via cctalk  
> wrote:
> 
> On Mon, Sep 21, 2020 at 11:29:14PM -0500, Richard Pope via cctalk wrote:
>> The Amiga 1000 with AmigaDos and Workbench was released in late 1985.
>> AmigaDos is based on Unix and Workbench is based on X-windows.
> 
> Er, no.
> 
> The Amiga's operating system is a pre-emptive multitasking microkernel which
> uses asynchronous message-passing between subsystems, which is not the Unix way
> of doing things at all. Unix provides libraries of synchronous procedure calls
> which block the caller until the job is done.
> 
> Although "AmigaDOS" appears prominently in the terminal as one boots 
> Workbench,
> that's only the filesystem and command-line shell. Due to time pressure, they
> bought in TRIPOS and filed off the serial number. TRIPOS is a fairly clunky
> thing written in BCPL that never sat well with the rest of the system, but it
> was quite probably the only DOS they could buy in which worked in a concurrent
> environment. TRIPOS is the reason why disks were slow on the Amiga.
> 
> The other bit that got reduced from a grander vision was the graphics, which
> became blocking libraries rather than device drivers. The window manager ran 
> as
> its own thread which gave the illusion of responsiveness.
> 
> The "X Window System" (not X-windows or other misnomers) is an ordinary[1] 
> Unix
> process which provides low-level primitives for a windowing system. 
> "Workbench"
> is just an ordinary AmigaDOS process which provides a file manager. You can
> even quit it to save memory, and the rest of the GUI still works. They are not
> the same thing or "based" on each other at all.
> 
> 
> [1] Well, some implementations are setuid root or have similar elevated
>privileges so they can have unfettered access to the bare metal and thus
>tantamount to being part of the kernel, but that's basically corner-cutting
>by a bunch of cowboys and it is possible to do this sort of thing properly
>without introducing a massive security hole.
> 


Re: Off topic ?

2020-08-26 Thread Paul Koning via cctalk



> On Aug 25, 2020, at 8:32 PM, Chris Elmquist  wrote:
> 
> On Tuesday (08/25/2020 at 04:36PM -0400), Paul Koning via cctalk wrote:
>> Not sure if this is off topic, but anyway..
>> 
>> There was also one with "tree" in its name, don't remember its full name and 
>> I think they shut down. 
> 
> Smalltree?  They are some former SGI guys here in MN,
> 
> https://small-tree.com/about-us/
> 
> -- 
> Chris Elmquist

Yes, that sounds right.  They don't seem to do iSCSI any longer. 

paul



Re: Off topic ?

2020-08-25 Thread Chris Elmquist via cctalk
On Tuesday (08/25/2020 at 04:36PM -0400), Paul Koning via cctalk wrote:
> Not sure if this is off topic, but anyway..
> 
> There was also one with "tree" in its name, don't remember its full name and 
> I think they shut down. 

Smalltree?  They are some former SGI guys here in MN,

https://small-tree.com/about-us/

-- 
Chris Elmquist


Re: Off topic ?

2020-08-25 Thread Paul Koning via cctalk
Not sure if this is off topic, but anyway..

I used Atto years ago, haven't in a long time.  Don't remember GlobalSAN.  
There was also one with "tree" in its name, don't remember its full name and I 
think they shut down. 

The odd thing is that Apple doesn't have one of its own.  Way back around 2005 
or so they were planning to; I may even have seen a beta of it but that may be 
a bad memory.  But nothing actually shipped.

Some searches turned up a few bits of info.  One is an open initiator on 
Github, and a note about it on a list that says turning it on requires 
disabling some security mechanisms because it's not signed.  Perhaps you have 
the ability to sign your own copy to avoid that.

Another is a product called KernSafe that says it has a free Mac OS iSCSI 
initiator (they also sell a target).  I know nothing about it, never tried it.

Finally, I found an announcement from GlobalSAN saying they now have Catalina 
support.  So perhaps "broken" has been fixed.

paul

> On Aug 25, 2020, at 4:15 PM, 821--- via cctalk  wrote:
> 
> I have a Mac mini os-x 10.15/16 11. 
> 
> I’m Really trying to find a working Iscsi Initiator 
> Software.  Yeah looked at atto 200 bucks
> GlobalSan broken. 
> 
> Who is using their Mac with an iScsi drive 
> Attached storage ? 
> 
> Help appreciated.  
> 
> K. 



Off topic ?

2020-08-25 Thread 821--- via cctalk
I have a Mac mini os-x 10.15/16 11. 

I’m Really trying to find a working Iscsi Initiator 
Software.  Yeah looked at atto 200 bucks
GlobalSan broken. 

Who is using their Mac with an iScsi drive 
Attached storage ? 

Help appreciated.  

K. 


Off topic- precision tooling

2019-10-04 Thread Paul Anderson via cctalk
I have a sizable quantity or tooling for sale or trade including :

circular blades, mostly Levin, 1 1/4 d, 1/4 arbor from .008 to 03 and
probably others.

drill bits- Levin. 13mm, .0028" etc. and 15 tubes, only some labeled; B & D,
Cleveland decimal sets, Precision Twist and other companies, sizes 60
through over 100 or so.

Morris taps and dies, 0-80 through -160, about 20 sizes.

Most are new, but a few might be used.

If you have any interest, contact me off list.

If there is enough interest I'll try to make a detailed list. They are a
pain for me to work with, but cheap to ship.

I also have larger size taps, dies, and bits up to 1 1/2 or so, I think a
#3 or #4 Morse taper.

Thanks, Paul


off topic- Hitachi V-1950F(R) and Nicolet 3091 storage scope available

2019-06-04 Thread Paul Anderson via cctalk
Both have manuals, and pics are available. Possibly a few Tek scopes also..

Please contact me off list with any questions or offers.

Thanks, Paul


Re: off topic - capatob - saratov2 computer Russsian pdp8? HELP

2019-01-08 Thread Pontus Pihlgren via cctalk
Hi again

Olafs also found this:
http://www.nedopc.org/forum/viewtopic.php?t=9778

Unless you know Russian, maybe you can use Google Translate.

Regards,
Pontus.

On Tue, Jan 08, 2019 at 11:06:12AM +0100, Pontus Pihlgren via cctalk 
wrote:
> Hi Iain
> 
> I asked a guy from Latvia that I know, Olafs. He recognized the 
> transistors as KT315 A and B. Collector is middle pin.
> 
> https://en.wikipedia.org/wiki/KT315
> 
> He might also be able to help with spare lights, contact me off-list. 
> Unfortunately he has no documentation.
> 
> /P
> 
> On Sat, Jan 05, 2019 at 06:36:56PM +, Dr Iain Maoileoin via cctalk 
> wrote:
> > Off topic, but looking for help and/or wisdom.
> > 
> > If you visit https://www.scotnet.co.uk/iain/saratov/ 
> > you will see some photos and wire-lists of work that I have started on the 
> > front panel of a Capatob 2.
> > 
> > I plan to get the switches and lights running on a blinkenbone board with a 
> > PDP8 emulation behind it.  (I already have an PDP11/70 front-panel running 
> > on the same infrastructure)
> > 
> > I have been struggling for over a year to get much info about this saratov 
> > computer (circuit diagrams etc).  So I have started the reverse engineering 
> > on the panel.
> > 
> > Does anybody know anything about this computer?  online or offline it would 
> > be much appreciated.
> > 
> > Iain


Re: off topic - capatob - saratov2 computer Russsian pdp8? HELP

2019-01-08 Thread Pontus Pihlgren via cctalk
Hi Iain

I asked a guy from Latvia that I know, Olafs. He recognized the 
transistors as KT315 A and B. Collector is middle pin.

https://en.wikipedia.org/wiki/KT315

He might also be able to help with spare lights, contact me off-list. 
Unfortunately he has no documentation.

/P

On Sat, Jan 05, 2019 at 06:36:56PM +, Dr Iain Maoileoin via cctalk 
wrote:
> Off topic, but looking for help and/or wisdom.
> 
> If you visit https://www.scotnet.co.uk/iain/saratov/ 
> you will see some photos and wire-lists of work that I have started on the 
> front panel of a Capatob 2.
> 
> I plan to get the switches and lights running on a blinkenbone board with a 
> PDP8 emulation behind it.  (I already have an PDP11/70 front-panel running on 
> the same infrastructure)
> 
> I have been struggling for over a year to get much info about this saratov 
> computer (circuit diagrams etc).  So I have started the reverse engineering 
> on the panel.
> 
> Does anybody know anything about this computer?  online or offline it would 
> be much appreciated.
> 
> Iain


Re: so far off topic - capatob - saratov2 computer Russsian pdp8?

2019-01-07 Thread Jon Elson via cctalk

On 01/07/2019 07:51 PM, allison via cctalk wrote:
> I still want to make a stretched 8, PDP8 ISA with 16 bits and faster.
> No good reason save for it would be fun.

Umm, I think that is called a Data General Nova!

Jon


Re: so far off topic - capatob - saratov2 computer Russsian pdp8?

2019-01-07 Thread allison via cctalk
On 01/07/2019 07:25 PM, ben via cctalk wrote:
> On 1/7/2019 8:20 AM, allison via cctalk wrote:
> snip...
>> made though more likely 74F, AS, or LS variant and of course CMOS 74ACT
>> (and cmos friends) as I just bought a bunch.  Dip is getting harder to
>> get but
>> the various SMT packages are easy.  Prices for 10 or more of a part are
>> cheap to cheaper from primary suppliers.  The second tier suppliers are
>> often several times that.
>
> I got ebay... The bottom of the heap.
>
>> I figure most of what I did back then is years before many here were
>> born.
>>
>> However I have enough NOS TTL 74LS, 74AS, 74F series to build several
>> machines.
>
> I have been playing around with an early 70's TTL computer design
> and 74LS181's are too slow by 30 ns. Using a BLACK BOX model for core
> memory, I can get a 1.2us memory cycle using a 4.912 MHz raw clock
> but I need a few 74Hxx's in there. Proms are 256x4 60 ns and 32x8 50 ns.
>
> Do you have your 74Hxx spares? Eastern Europe still  has a few on ebay
> with reasonable shipping for 100% American Russian parts.
>
No use for 74H parts though I have a bunch.

The 74LS are slow; you are paying for lower power with speed.  The 74181
and 74S181 were far faster.

PROMs are small and slow; the last time I used them was for the address
decode used on the Northstar* MDS-A controller.

I built the last big machine with RAM back in 1980, and it was in the
1 us instruction cycle time range for single-cycle instructions, without
pipelines.  Core was never considered.  The trick is to throw hardware at
it.  Adding adders to the address calculation rather than reusing the ALU
saves a lot of time and wires.  Not like it was for manufacture or
anything like that.  More of an exercise.

I still want to make a stretched 8, PDP8 ISA with 16 bits and faster.
No good reason save for it would be fun.
>> I'm still building, current project is a very compact Z80 CP/M system
>> using CF
>> for disk. Mine uses all Zilog CMOS for very low power.  Its a variant of
>> the
>> Grant Searle Z80 with memory management added to utilize all of the
>> 124k ram and eeprom.  If you want go look there.
>
> What do you use all that memory for?
>
For CP/M, the allocation block store for each drive and the deblocking
buffers for performance can be large, plus it's easy to hide part of the
BIOS in banked RAM.  Background processes are easier when you have lots
of RAM for that.  Most of the larger apps like C compilers and such run
better with more than 48K; 56K is easy, and 60K is doable with the right
memory map.

For EEPROM it's more than boot: the system is in EEPROM (about 8K), and
with 32K or more, things like romdisks and utilities are easily parked
there.

I've been building nonstandard CP/M systems since '79.  In all cases the
apps think it is standard CP/M, but the BIOS and such have been tuned,
even the CP/M BDOS itself.  Though I often use ZRdos or ZSdos as they are
very good.  Not much you can't do to it.

Allison
>>
>
> The Chinese elves have been busy, My 5V 15 amp $20 power supply arrived
> in the mail today. I have power to spare for my BUS and blinking lights.
>
So long as you load it at least 10% it will be good.

> Ben.
>
>



Re: so far off topic - capatob - saratov2 computer Russsian pdp8?

2019-01-07 Thread ben via cctalk

On 1/7/2019 8:20 AM, allison via cctalk wrote:
snip...

> made, though more likely the 74F, AS, or LS variants and of course CMOS
> 74ACT (and CMOS friends), as I just bought a bunch.  DIP is getting
> harder to get, but the various SMT packages are easy.  Prices for 10 or
> more of a part are cheap to cheaper from primary suppliers.  The
> second-tier suppliers are often several times that.


I got ebay... The bottom of the heap.


> I figure most of what I did back then was years before many here were
> born.
>
> However, I have enough NOS TTL 74LS, 74AS, 74F series to build several
> machines.


I have been playing around with an early 70's TTL computer design,
and 74LS181's are too slow by 30 ns. Using a BLACK BOX model for core
memory, I can get a 1.2 us memory cycle using a 4.912 MHz raw clock,
but I need a few 74Hxx's in there. PROMs are 256x4 60 ns and 32x8 50 ns.

Do you have your 74Hxx spares? Eastern Europe still  has a few on ebay
with reasonable shipping for 100% American Russian parts.


> I'm still building, current project is a very compact Z80 CP/M system
> using CF for disk. Mine uses all Zilog CMOS for very low power.  It's a
> variant of the Grant Searle Z80 with memory management added to utilize
> all of the 124k ram and eeprom.  If you want, go look there.


What do you use all that memory for?


> Allison



The Chinese elves have been busy; my 5V 15 amp $20 power supply arrived
in the mail today. I have power to spare for my BUS and blinking lights.

Ben.




Re: off topic - capatob - saratov2 computer Russsian pdp8

2019-01-07 Thread Jon Elson via cctalk

On 01/06/2019 11:24 PM, Dave Wade via cctalk wrote:


> I am also pretty sure that prior to S/360 the term "character" was
> generally used for non 8-bit character machines. I am not familiar with
> the IBM 70xx series machines.

The IBM 7070 (business machine) was a word-addressed machine, but all
decimal.  The IBM 709x series (scientific machine) was also word
addressed, but binary.

> I seem to recall that some IBM machines also had facilities to read all
> 9 bits from a 9-track tape as data, so 9-bit bytes, but I can't find
> references. I also feel the use of the term Octet was more marketing to
> distance one's machines from IBM.  Dave
The earlier machines were mostly using 7 track tape, not 9 
track. You did have your choice of even or odd parity.  I'm 
pretty sure that the 360 tape controls did not support any 
handling of the 9th track other than parity, and odd parity 
was the only option.


Jon


Re: so far off topic - capatob - saratov2 computer Russsian pdp8?

2019-01-07 Thread allison via cctalk
On 01/07/2019 09:51 AM, Peter Corlett via cctalk wrote:
> On Sun, Jan 06, 2019 at 02:54:08PM -0700, ben via cctalk wrote:
>> On 1/6/2019 12:24 PM, allison via cctalk wrote:
>>> The small beauty of being there...   FYI back then (1972) a 7400 was about
>>> 25 cents and 7483 adder was maybe $1.25.  Least that's what I paid.
>> Checks my favorite supplier.
>> $1.25 for 7400 and $4.00 for a 7483.
>> It has gone up in price.
> Thanks to inflation, $0.25 in 1972 is worth $1.51 now. Likewise, $1.25 has
> inflated to $7.54. So they're cheaper in real terms than they used to be.
>
> However, it's still not entirely comparable, as I suspect nobody's making
> 74-series chips any more so you're buying NOS. A modern equivalent would be a
> microcontroller, which starts at well under a dollar.
>
First, I wasn't guessing.  I was building and buying back then, so
that was what I actually paid in 1972; I've been at it since RTL hit
the streets.  The 74 series is still made, though more likely the 74F,
AS, or LS variants, and of course CMOS 74ACT (and CMOS friends), as I
just bought a bunch.  DIP is getting harder to get but the various SMT
packages are easy.  Prices for 10 or more of a part are cheap from
primary suppliers; the second-tier suppliers are often several times
that.

I figure most of what I did back then is years before many here were born.

However I have enough NOS TTL 74LS, 74AS, 74F series to build several
machines.

I'm still building; the current project is a very compact Z80 CP/M
system using CF for disk.  Mine uses all Zilog CMOS for very low
power.  It's a variant of the Grant Searle Z80, with memory management
added to utilize all of the 124K RAM and EEPROM.  If you want, go look
there.

Allison





Re: off topic - capatob - saratov2 computer Russsian pdp8? HELP

2019-01-07 Thread Kyle Owen via cctalk
On Mon, Jan 7, 2019 at 8:51 AM Peter Corlett via cctalk <
cctalk@classiccmp.org> wrote:

> Thanks to inflation, $0.25 in 1972 is worth $1.51 now. Likewise, $1.25 has
> inflated to $7.54. So they're cheaper in real terms than they used to be.
>
> However, it's still not entirely comparable, as I suspect nobody's making
> 74-series chips any more so you're buying NOS. A modern equivalent would
> be a
> microcontroller, which starts at well under a dollar.
>

Logic chips still have their uses, and are most certainly still being made.
You can still get 74LS parts, in a DIP package even:
https://www.digikey.com/product-detail/en/texas-instruments/SN74LS00N/296-1626-5-ND/277272

Note: it's an active production part, too.

Kyle


Re: off topic - capatob - saratov2 computer Russsian pdp8? HELP

2019-01-07 Thread Peter Corlett via cctalk
On Sun, Jan 06, 2019 at 02:54:08PM -0700, ben via cctalk wrote:
> On 1/6/2019 12:24 PM, allison via cctalk wrote:
>> The small beauty of being there...   FYI back then (1972) a 7400 was about
>> 25 cents and 7483 adder was maybe $1.25.  Least that's what I paid.
> Checks my favorite supplier.
> $1.25 for 7400 and $4.00 for a 7483.
> It has gone up in price.

Thanks to inflation, $0.25 in 1972 is worth $1.51 now. Likewise, $1.25 has
inflated to $7.54. So they're cheaper in real terms than they used to be.

However, it's still not entirely comparable, as I suspect nobody's making
74-series chips any more so you're buying NOS. A modern equivalent would be a
microcontroller, which starts at well under a dollar.
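The arithmetic, as a trivial C sketch -- note the multiplier is inferred
from the figures above ($0.25 -> $1.51), not taken from an official CPI
table, so treat it as an assumption:

#include <stdio.h>

int main(void)
{
    /* Implied 1972 -> 2019 multiplier, roughly 6.04; an assumption
     * back-derived from the quoted $0.25 -> $1.51 figure. */
    const double mult = 1.51 / 0.25;
    printf("7400: $0.25 in 1972 is about $%.2f now\n", 0.25 * mult);
    printf("7483: $1.25 in 1972 is about $%.2f now\n", 1.25 * mult);
    return 0;
}

(The second line prints $7.55; the $7.54 above presumably comes from a
slightly finer-grained CPI figure.)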



Re: off topic - capatob - saratov2 computer Russsian pdp8

2019-01-07 Thread Paul Koning via cctalk



> On Jan 7, 2019, at 12:24 AM, Dave Wade via cctalk  
> wrote:
> 
> ...
> I am also pretty sure that prior to S/360 the term "character" was generally 
> used for non 8-bit character machines. I am not familiar with the IBM 70xx 
> series machines but certainly on the 1401 and 1620 the term byte was never 
> used.

The 1620 is a decimal machine, with digit-addressed memory.  It has a number of 
instructions that operate on digit pairs, for I/O, so those pairs are called 
"characters".

> Also the Honeywell H3200 which was an IBM1401 "clone" (sort of). The only 
> machine I know where a "byte" is not eight bits is the Honeywell L6000 and 
> its siblings.  These machines had 36-bit words which were originally divided 
> into six 6-bit characters. 

Others have already pointed out there are plenty of other examples, with other 
definitions.  I mentioned the CDC 6000 series mainframes.

Just to make sure of my memory, I searched some documentation.  Here is a quote 
from the CDC Cyber 170 series Hardware Reference Manual (section "Input/output 
multiplexor - Model 176"):

"During communications between the PPUs and CM, the I/O MUX disassembles 60-bit 
transmissions from CM to 12-bit bytes."

But here's one I had not seen before: in the 7600 Preliminary System 
Description, the section that describes the PPU I/O machinery has the same sort 
of wording as above, but then on the next page the discussion of the drum 
memory says:

"A 16 bit cyclic parity byte is generated by the controller for the data field 
of each record written on the peripheral unit."

And the CDC 6000 series Sort-Merge utility has a "BYTESIZE" control card, which 
in PDP-10 fashion allows a "byte" to be any length up to 60 bits (the word 
size) -- the default is 6 bits, which is the character length for the basic 
character set, but other examples show 12- and 60-bit "bytes".  In the same 
way, a TUTOR language manual from 1978 describes bytes as being any size, in a 
description of the language feature for what C calls bit-field variables.  I 
didn't realize that term was used for that feature, though.
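For anyone who hasn't met them, here is a minimal example of what C
calls bit-field variables; the struct name, field names, and widths are
invented purely for illustration:

#include <stdio.h>

/* Named sub-word fields, declared with explicit bit widths. */
struct word_fields {
    unsigned op   : 6;   /* a 6-bit field, character-sized on a CDC 6000 */
    unsigned addr : 18;  /* an 18-bit field */
    unsigned tag  : 3;   /* a 3-bit field */
};

int main(void)
{
    struct word_fields w = { .op = 042, .addr = 0777, .tag = 5 };
    printf("op=%o addr=%o tag=%o (struct occupies %zu bytes)\n",
           (unsigned)w.op, (unsigned)w.addr, (unsigned)w.tag, sizeof w);
    return 0;
}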

paul



Re: off topic - capatob - saratov2 computer Russsian pdp8

2019-01-07 Thread Noel Chiappa via cctalk
> From: Dave Wade

> The only machine I know where a "byte" is not eight bits is the
> Honeywell L6000 and its siblings

I'm not sure why I bother to post to this list, since apparently people don't
bother to read my messages.

From the "PDP-10 Reference Handbook", 1970, section 2.3, "Byte Manipulation",
page 2-15:

"This set of five instructions allows the programmer to pack or unpack bytes
of any length from anywhere within a word. ... The byte manipulation
instructions have the standard memory reference format, but the effective
address E is used to retrieve a pointer, which is used in turn to locate
the byte ... The pointer has the format

     0     5 6    11 12 13 14    17 18          35
    +-------+-------+--+--+--------+--------------+
    |   P   |   S   |  |I |   X    |      Y       |
    +-------+-------+--+--+--------+--------------+

where S is the size of the byte as a number of bits, and P its position
as the number of bits remaining at the right of the byte in the word ... To
facilitate processing a series of bytes, several of the byte instructions
increment the pointer, ie modify it so that it points to the next byte
position in a set of memory locations. Bytes are processed from left to
right in a word, so incrementing merely replaces the current value of P
by P-S, unless there is insufficient space in the present location [i.e.
'word' - JNC] for another byte of the specified size (P-S < 0). In this
case Y is increased by one to point at the next consecutive location, and
P is set to 36 - S to point to the first byte at the left in the new
location."

Now imagine implementing all that in FLIP CHIPs which held transistors
(this is before ICs)!

Anyway, like I said, at least ITS (of the PDP-10 OS's) used this to store
ASCII in words which contain five 7-bit _bytes_. I don't know if TENEX did.
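For concreteness, here is a minimal C sketch of that increment-and-load
rule, holding a 36-bit word in the low bits of a uint64_t.  The fields
follow the handbook's P/S/Y names, the indirect and index fields (I, X)
are ignored, and the five-character demo is invented, so read it as an
illustration rather than an emulation:

#include <stdint.h>
#include <stdio.h>

struct byte_ptr {
    unsigned p;   /* bits remaining at the right of the byte */
    unsigned s;   /* byte size in bits */
    uint64_t y;   /* word address */
};

/* "Incrementing merely replaces the current value of P by P-S, unless
 * there is insufficient space in the present location for another byte
 * of the specified size (P-S < 0)." */
static void ibp(struct byte_ptr *bp)
{
    if (bp->p >= bp->s) {
        bp->p -= bp->s;         /* next byte in the same word */
    } else {
        bp->y += 1;             /* next consecutive location */
        bp->p = 36 - bp->s;     /* first byte at the left */
    }
}

/* Load the byte the pointer currently selects out of a word. */
static uint64_t ldb(const struct byte_ptr *bp, uint64_t word)
{
    return (word >> bp->p) & ((1ULL << bp->s) - 1);
}

int main(void)
{
    /* Five 7-bit ASCII bytes packed left to right in one 36-bit word,
     * one bit left over at the right -- the ITS convention. */
    uint64_t word = 0;
    const char *msg = "HELLO";
    for (int i = 0; i < 5; i++)
        word |= (uint64_t)(msg[i] & 0x7F) << (36 - 7 * (i + 1));

    /* P = 36 is the usual "before the first byte" starting pointer. */
    struct byte_ptr bp = { .p = 36, .s = 7, .y = 0 };
    for (int i = 0; i < 5; i++) {
        ibp(&bp);                       /* ILDB: increment, then load */
        putchar((int)ldb(&bp, word));
    }
    putchar('\n');
    return 0;
}

With S = 7 the successive P values are 29, 22, 15, 8, 1: five bytes with
one leftover bit, exactly the packing described above.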


> I also feel the use of the term Octet was more marketing to distance
> ones machines from IBM.

Huh? Which machine used the term 'octet'?

Like I said, we adapted and used the term 'octet' in TCP/IP documentation
(and that's definite - go check out historical documents, e.g. RFC-675 from
1974) because 'byte' was at the time ambiguous - the majority of machines on
the ARPANET at that point were PDP-10's (see above).

Interestingly, I see it's not defined in that document (or in the earlier
RFC-635), so it must have already been in use for an 8-bit quantity?

Doing a little research, there is a claim that Bob Bemer independently
invented the term in 1965/66. Perhaps someone subconsciously remembered his
proposal, and that's the ultimate source? The term is also long used in
chemistry and music, of course, so perhaps that's where it came from.

Noel


RE: off topic - capatob - saratov2 computer Russsian pdp8

2019-01-06 Thread Dave Wade via cctalk



> -Original Message-
> From: cctalk  On Behalf Of William Donzelli
> via cctalk
> Sent: 06 January 2019 23:21
> To: Bob Smith ; General Discussion: On-Topic and
> Off-Topic Posts 
> Subject: Re: off topic - capatob - saratov2 computer Russsian pdp8
> 
> > With the advent of widespread introduction of 16-bit machines the
> > definition of a byte as an 8 bit unit was accepted because ASCII
> > supported character sets for multiple languages, before the 8bit
> > standard there were 6-bit and 7-bit variations of the character sets.
> > Gee, what were teletypes, like the model 15, 19, 28, oh yeah 5 level
> > or 5 bit..with no parity.
> 
> Byte was more or less "set in stone" in the mid 1960s, with the success of the
> IBM System/360. During the internal war at IBM to determine whether the
> S/360 was going to be a 6 bit based machine or an 8 bit based machine, a
> study showed that a huge majority of the stored digital data in the world was
> better suited to 8 bits (mainly because of BCD in the financial industry). It 
> had
> nothing to do with terminal communications, as there just was not much of
> that back then.
> When the S/360 turned into the success it was, maybe 1966 or so, it turned
> into an eight bit byte world.
> 
> People on this list keep forgetting just how gigantic IBM was back then, and
> how much influence it had, good or bad.
> 
> --
> Will

I am also pretty sure that prior to S/360 the term "character" was generally 
used for non 8-bit character machines. I am not familiar with the IBM 70xx 
series machines, but certainly on the 1401 and 1620 the term byte was never 
used, nor on the Honeywell H3200, which was an IBM 1401 "clone" (sort of). The 
only machine I know where a "byte" is not eight bits is the Honeywell L6000 and 
its siblings. These machines had 36-bit words which were originally divided 
into six 6-bit characters. When it became clear that the world was moving to 
8-bit characters, they added new instructions that allowed a word to be treated 
as four 9-bit bytes.

I seem to recall that some IBM machines also had facilities to read all 9 bits 
from a 9-track tape as data, so 9-bit bytes, but I can't find references.

I also feel the use of the term "octet" was more marketing, to distance one's 
machines from IBM.

Dave



Re: off topic - capatob - saratov2 computer Russsian pdp8

2019-01-06 Thread Guy Sotomayor Jr via cctalk


> On Jan 6, 2019, at 6:10 PM, Jon Elson via cctalk  
> wrote:
> 
> On 01/06/2019 01:29 PM, Bob Smith via cctalk wrote:
>> Sorry, thanks for playing but
>> Actually half of a WORD is a BYTE, whatever the numerical length is.
>> Ready for this,half of a BYTE is a NIBBLE.
> Well, no.  On 32-bit machines such as IBM 360, VAX, etc. half a 32-bit word 
> is a halfword,
> the fullword is equal to FOUR bytes.  On a 360/65 and above, the memory word 
> was 64 bits, or a double-word, so half that was a fullword.  Just makes it 
> more confusing.

No it doesn’t.  The 360/65 was still a 32-bit processor (as defined by the 
ISA).  It makes no difference what the width to memory was.  Wider memory is 
only to improve the bandwidth to memory.  That’s like saying the current Intel 
ixxx CPUs (which are 64-bit ISA) are “confusing” because the width to memory is 
256-bits.

TTFN - Guy



Re: off topic - capatob - saratov2 computer Russsian pdp8

2019-01-06 Thread Jon Elson via cctalk

On 01/06/2019 01:29 PM, Bob Smith via cctalk wrote:

> Sorry, thanks for playing but
> Actually half of a WORD is a BYTE, whatever the numerical length is.
> Ready for this: half of a BYTE is a NIBBLE.

Well, no.  On 32-bit machines such as IBM 360, VAX, etc., half a
32-bit word is a halfword; the fullword is equal to FOUR bytes.  On a
360/65 and above, the memory word was 64 bits, or a double-word, so
half that was a fullword.  Just makes it more confusing.


Jon


Re: off topic - capatob - saratov2 computer Russsian pdp8

2019-01-06 Thread William Donzelli via cctalk
> With the advent of widespread introduction of 16-bit machines the
> definition of a byte as an 8 bit unit was accepted because ASCII
> supported character sets for multiple languages, before the 8bit
> standard there were 6-bit and 7-bit variations of the character sets.
> Gee, what were teletypes, like the model 15, 19, 28, oh yeah 5 level
> or 5 bit..with no parity.

Byte was more or less "set in stone" in the mid 1960s, with the
success of the IBM System/360. During the internal war at IBM to
determine whether the S/360 was going to be a 6 bit based machine or
an 8 bit based machine, a study showed that a huge majority of the
stored digital data in the world was better suited to 8 bits (mainly
because of BCD in the financial industry). It had nothing to do with
terminal communications, as there just was not much of that back then.
When the S/360 turned into the success it was, maybe 1966 or so, it
turned into an eight bit byte world.

People on this list keep forgetting just how gigantic IBM was back
then, and how much influence it had, good or bad.

--
Will


Re: off topic - capatob - saratov2 computer Russsian pdp8? HELP

2019-01-06 Thread William Donzelli via cctalk
> - some marketing person made it up

You believed them? Have your head examined.

> - they were only counting things that were general-purpose (i.e. came with
>   mass storage and compilers)

Conditions, conditions.

> - they didn't consider micros as "computers" (many were used in things like
>   printers, etc, and were not usable as general-purpose computers)

Well, that is DECish, ignoring the coming tsunami of micros. Wow, did
they pay the price...

--
Will


Re: off topic - capatob - saratov2 computer Russsian pdp8? HELP

2019-01-06 Thread Noel Chiappa via cctalk
> From: William Donzelli

>> in 1980, there were more PDP-11's, world-wide, than any other kind of
>> computer.

> I bet the guys at Zilog might have something to talk to you about.

I was quoting my memory of a DEC ad in the WSJ, which now that I go check,
says the -11 was "the best-selling computer in the world" (the ad was in
1980). There are a number of possible explanations as to why it makes this
claim:

- some marketing person made it up
- they were only counting things that were general-purpose (i.e. came with
  mass storage and compilers)
- they didn't consider micros as "computers" (many were used in things like
  printers, etc, and were not usable as general-purpose computers)

Etc, etc.

 Noel


Re: off topic - capatob - saratov2 computer Russsian pdp8? HELP

2019-01-06 Thread ben via cctalk

On 1/6/2019 12:24 PM, allison via cctalk wrote:


> The small beauty of being there...   FYI back then (1972) a 7400 was
> about 25 cents and a 7483 adder was maybe $1.25.  Least that's what I
> paid.
>
> Allison

Checks my favorite supplier.

$1.25 for a 7400 and $4.00 for a 7483.
It has gone up in price.

Ben.





Re: off topic - capatob - saratov2 computer Russsian pdp8

2019-01-06 Thread Paul Koning via cctalk



> On Jan 6, 2019, at 2:34 PM, Bob Smith via cctalk  
> wrote:
> 
> With the advent of widespread introduction of 16-bit machines the
> definition of a byte as an 8 bit unit was accepted because ASCII
> supported character sets for multiple languages, before the 8bit
> standard there were 6-bit and 7-bit variations of the character sets.
> Gee, what were teletypes, like the model 15, 19, 28, oh yeah 5 level
> or 5 bit..with no parity.

I think some of this discussion suffers from not going far enough back in 
history.

"Byte" was a term used a great deal in the IBM/360 series, where it meant 8 
bits.  Similarly "halfword" (16 bits).  But as was pointed out, mainframes in 
that era had lots of different word sizes: 27, 32, 36, 48, 60...  Some of them 
(perhaps not all) also used the term "byte" to mean something different.  In 
the PDP-10, it has a well defined meaning: any part of a word, as operated on 
by the "byte" instructions -- which the VAX called "bit field instructions".  6 
and 9 bit sizes were common for characters, and "byte" without further detail 
could have meant any of those.  In the CDC 6000 series, characters were 6 or 12 
bits, and either of those could be "byte".

"Nybble" is as far as I can tell a geek joke term, rather than a widely used 
standard term.  "Halfword" is 16 bits on IBM 360 and VAX, 18 on PDP-10, and 
unused on CDC 6000.  Then there are other subdivisions with uncommon terms, 
like "parcel" (15 bits, CDC 6000 series, the unit used by the instruction issue 
path).

ASCII was originally a 7 bit code.  There were other 7 bit codes at that time, 
like the many variations of Flexowriter codes; 6 bit codes (found in 
typesetting systems and related stuff such as news wire service data feeds), 
and 5 bit codes (Telex codes, again in many variations).

paul



Re: off topic - capatob - saratov2 computer Russsian pdp8? HELP

2019-01-06 Thread allison via cctalk
On 01/06/2019 01:54 PM, William Donzelli via cctalk wrote:
>> And then the PDP-11 put the nail in that coffin (and in 1980, there were more
>> PDP-11's, world-wide, than any other kind of computer).
> I bet the guys at Zilog might have something to talk to you about.
>
> --
> Will
And Intel!  The 8008 and 8080 were byte machines, as were the 8085,
Z80, 8088, 6800, 6502, and a long list to follow.

The PDP-11 was unusual in that it was 8/16-bit: memory (and by default
I/O) supported both byte and word reads and writes.  Instructions were
16-bit, but data could be byte or word.
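A small C sketch of that byte/word duality, modeling memory as a byte
array with explicitly little-endian word accessors (the PDP-11's byte
order); the addresses and values are made up:

#include <stdint.h>
#include <stdio.h>

/* PDP-11 style: the low-order byte sits at the lower (even) address. */
static uint16_t read_word(const uint8_t *mem, unsigned addr)
{
    return (uint16_t)(mem[addr] | (mem[addr + 1] << 8));
}

static void write_word(uint8_t *mem, unsigned addr, uint16_t v)
{
    mem[addr]     = (uint8_t)(v & 0xFF);
    mem[addr + 1] = (uint8_t)(v >> 8);
}

int main(void)
{
    uint8_t mem[4] = {0};

    write_word(mem, 0, 0x0102);     /* one word write...            */
    printf("bytes: %02x %02x\n",    /* ...lands as two bytes: 02 01 */
           mem[0], mem[1]);

    mem[0] = 0xFF;                  /* one byte write...                 */
    printf("word:  %04x\n",         /* ...shows up in the word as 01ff  */
           read_word(mem, 0));
    return 0;
}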

There were more Z80-based machines (the TRS-80 alone exceeded 250,000)
than PDP-11s.
History, guys; we are about history!

Allison




Re: off topic - capatob - saratov2 computer Russsian pdp8

2019-01-06 Thread Bob Smith via cctalk
With the advent of widespread introduction of 16-bit machines, the
definition of a byte as an 8-bit unit was accepted because ASCII
supported character sets for multiple languages; before the 8-bit
standard there were 6-bit and 7-bit variations of the character sets.
Gee, what were Teletypes like the model 15, 19, 28?  Oh yeah, 5-level
or 5-bit... with no parity.

On Sun, Jan 6, 2019 at 2:29 PM Bob Smith  wrote:
>
> Sorry, thanks for playing, but...
> Actually half of a WORD is a BYTE, whatever the numerical length is.
> Ready for this: half of a BYTE is a NIBBLE.  In fact, in common usage,
> word has become synonymous with 16 bits, much like byte has with 8
> bits.
> What's the difference between a word and byte? - Stack Overflow
> https://stackoverflow.com/questions/.../whats-the-difference-between-a-word-and-byte
>
> On Sun, Jan 6, 2019 at 1:48 PM Jeffrey S. Worley via cctalk
>  wrote:
> >
> > On Sun, 2019-01-06 at 12:00 -0600, cctalk-requ...@classiccmp.org wrote:
> > > Re: off topic - capatob - saratov2 computer Russsian pdp8
> >
> > Nothing has changed as regards the number of bits in a byte, a nybble
> > is 4 bits, 8 to the byte, and x to the word - this last varies widely
> > depending on architecture.
> >
> > Still, in spirit, on an octal processor a whole number is a six-bit
> > 'byte', so the term is appropriate, especially to avoid confusion with
> > the word size of two six-bit 'bytes'.
> >
> > Fun.
> >
> > Jeff
> >


Re: off topic - capatob - saratov2 computer Russsian pdp8? HELP

2019-01-06 Thread allison via cctalk
On 01/06/2019 02:08 PM, Grant Taylor via cctalk wrote:
> On 1/6/19 11:25 AM, Guy Sotomayor Jr via cctalk wrote:
>> I think it’s also telling that the IETF uses the term octet in all of
>> the specifications to refer to 8-bit sized data.  As “byte” (from
>> older machines) could be anything and is thus somewhat ambiguous.
>>
>> It *may* have been the IBM 360 that started the trend of Byte ==
>> 8-bits as the 360’s memory (in IBM’s terms) was byte addressable and
>> the instructions for accessing them were “byte” instructions (as
>> opposed to half-word and word instructions).
>
Yes, it was.

Machines around it and in that time frame (mainframes) had 12-, 18-,
36-, or 60-bit words.

The big break came in the mid-1970s with the first micros (8008, 8080,
6800) and bigger machines like the PDP-11 (which did byte and word
reads and writes) and the TI-990.

The emergence of the VAX and other 32-bit machines made the 8-bit byte
common, as terminal I/O was starting to standardize.

> Thank you for the clarification.
>
> My take away is that before some nebulous point in time (circa IBM's
> 360) a "byte" could be a number of different bits, depending on the
> computer being discussed.  Conversely, after said nebulous point in
> time a byte was standardized on 8-bits.
>
> Is that fair and accurate enough?  -  I'm wanting to validate the
> patch before I apply it to my mental model of things.  ;-)

There is no hard before and after, as systems like the DEC-10 and
others persisted for a while.  Also part of it was I/O codes: EBCDIC,
Flexowriter, ASR-33 (8-level vs. Baudot), and CRT terminals emerging
with mostly IBM or ANSI codes.

I am somewhat DEC and personal computer (pre IBM PC) centric on this,
as they were the machines I got to see and work with that were not in
rooms with glass and white-coated specialists.

Allison





Re: off topic - capatob - saratov2 computer Russsian pdp8

2019-01-06 Thread Bob Smith via cctalk
Sorry, thanks for playing, but...
Actually half of a WORD is a BYTE, whatever the numerical length is.
Ready for this: half of a BYTE is a NIBBLE.  In fact, in common usage,
word has become synonymous with 16 bits, much like byte has with 8
bits.
What's the difference between a word and byte? - Stack Overflow
https://stackoverflow.com/questions/.../whats-the-difference-between-a-word-and-byte

On Sun, Jan 6, 2019 at 1:48 PM Jeffrey S. Worley via cctalk
 wrote:
>
> On Sun, 2019-01-06 at 12:00 -0600, cctalk-requ...@classiccmp.org wrote:
> > Re: off topic - capatob - saratov2 computer Russsian pdp8
>
> Nothing has changed as regards the number of bits in a byte, a nybble
> is 4 bits, 8 to the byte, and x to the word - this last varies widely
> depending on architecture.
>
> Still, in spirit, on an octal processor a whole number is a six-bit
> 'byte', so the term is appropriate, especially to avoid confusion with
> the word size of two six-bit 'bytes'.
>
> Fun.
>
> Jeff
>


Re: off topic - capatob - saratov2 computer Russsian pdp8? HELP

2019-01-06 Thread allison via cctalk
On 01/06/2019 01:19 PM, Noel Chiappa via cctalk wrote:
> > From: Grant Taylor
>
> > Is "byte" the correct term for 6-bits?  I thought a "byte" had always 
> > been 8-bits.
>
> I don't claim wide familiarity with architectural jargon from the early days,
> but the PDP-10 at least (I don't know about other prominent 36-bit machines
> such as the IBM 7094/etc, and the GE 635/645) supported 'bytes' of any size,
> with 'byte pointers' used in a couple of instructions which could extract and
> deposit 'bytes' from a word; the pointers specified the starting bit, and the
> width of the 'byte'. These were used for both SIXBIT (an early character
> encoding), and ASCII (7-bit bytes, 5 per word, with one bit left over).
As far as what other systems supported, especially the 7094 and GE,
that is already out of context, as the focus was a Russian PDP-8 clone.
Any other machines are thread contamination or worse.

In the early days a byte was the smallest recognized group of bits for
that system, and in some cases it was 9 bits or 6 bits, as those were
evenly divisible segments of the machine word.  This feature was the
bane of programmers, as everyone had a different idea of what a byte
was, and it was poison to portability.

For the PDP-8 and friends it was 6 bits, basically a halfword, also
used as stated for the 6-bit subset of ASCII (uppercase, TTY codes).
Most of the 8 series had the bit-mapped instructions (DEC called them
microcoded) for doing BSW, byte swap, which swaps the lower half of the
AC with the upper half.  Very handy for doing character I/O.
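As a sketch, the effect of BSW on the 12-bit AC in C (the link bit and
the rest of the machine state are ignored; this is just the data
movement):

#include <stdio.h>

/* Exchange the two 6-bit halves of a 12-bit accumulator. */
static unsigned bsw(unsigned ac)
{
    ac &= 07777;                        /* keep 12 bits, in PDP-8 octal */
    return ((ac << 6) | (ac >> 6)) & 07777;
}

int main(void)
{
    unsigned ac = 01234;                   /* halves are 012 and 034 */
    printf("%04o -> %04o\n", ac, bsw(ac)); /* prints 1234 -> 3412    */
    return 0;
}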

> > I would have blindly substituted "word" in place of "byte" except for
> > the fact that you subsequently say "12-bit words". I don't know if
> > "words" is parallel on purpose, as in representing a quantity of two
> > 6-bit word.
>
> I think 'word' was usually used to describe the instruction size (although
> some machines also supported 'half-word' instructions), and also the
> machine's 'ordinary' length - e.g. for the accumulator(s), the quantum of
> data transfer to/from memory, etc. Not necessarily memory addresses, mind -
> on the PDP-10, those were 18 bits (i.e. half-word) - although the smallest
> thing _named_ by a memory addresses was usually a word.
>
>   Noel
For the PDP-8 and its 12-bit relations, the instruction word and basic
architecture were a 12-bit word.  There were no instructions that were
a half word in length or other fragmentations.  The machine was fairly
simple, and all the speculated concepts were well outside the design of
the PDP-5/8 family.  For all of those, instruction fetches and memory
reads and writes were always 12-bit words.  I'd expect a Russian PDP-8
clone to be the same.  After all, DEC widely gave out the data books
with nearly everything but schematics.  The value of copying is that
the software is also copied.  It happened here with the DCC-112, a
PDP-8e functional clone.

While it's possible to use half-word RAM with reconstruction, the
hardware cost is high (registers to store the pieces), and it would
take more to do that than to use whole 12-bit words.  Any time you look
at an old machine, especially pre-IC, registers were costly and only
used as necessity dictated, as a single-bit flip-flop was likely 4
transistors (plus diodes and other components) or more to implement,
never minding gating.

Minor history and thread-relative drift...
The only reason people didn't build their own PDP-8 in the early 70s
was CORE.  It was the one part of an early personal computer (meaning
personally owned, back then) that was difficult to duplicate and
expensive to buy outright.  Trying to make the "random" core planes
that were available work was very difficult due to lack of data,
critical timing, and the often minimal bench (and costly) test
equipment.  The minimum gear for seeing the timing was a Tek 516, and
that was $1169 (1969 dollars).  Semiconductor RAM was either a few bits
(4x4) or the 1101 (three-voltage 256x1) at about $8 in 1972 dollars.
That made the parts for a 256x12 memory (twelve 1101s, close to $100 in
chips alone) a week's pay at the time (pre-8008), and a 4Kx12 with
parts was nearly the price of a new truck ($2100)!  Compared to the
basic logic of the 8e (only three boards of SSI TTL), core/RAM was the
showstopper.  About seven years later (early 1979) an 8Kx8 S-100 RAM
was about $100; by 1980, 64Kx8 was $100.  Moore's law was being felt.

The small beauty of being there...   FYI back then (1972) a 7400 was
about 25 cents
and 7483 adder was maybe $1.25.  Least that's what I paid.

Allison



Re: off topic - capatob - saratov2 computer Russsian pdp8? HELP

2019-01-06 Thread Grant Taylor via cctalk

On 1/6/19 11:25 AM, Guy Sotomayor Jr via cctalk wrote:
> I think it’s also telling that the IETF uses the term octet in all of
> the specifications to refer to 8-bit sized data.  As “byte” (from
> older machines) could be anything and is thus somewhat ambiguous.
>
> It *may* have been the IBM 360 that started the trend of Byte ==
> 8-bits as the 360’s memory (in IBM’s terms) was byte addressable and
> the instructions for accessing them were “byte” instructions (as
> opposed to half-word and word instructions).


Thank you for the clarification.

My take away is that before some nebulous point in time (circa IBM's 
360) a "byte" could be a number of different bits, depending on the 
computer being discussed.  Conversely, after said nebulous point in time 
a byte was standardized on 8-bits.


Is that fair and accurate enough?  -  I'm wanting to validate the patch 
before I apply it to my mental model of things.  ;-)




--
Grant. . . .
unix || die

