Has IBM established a standard, in any of its high-level languages, for the representation of the various floating-point formats and precisions?
Formats:
  o Binary
  o Decimal
  o Hexadecimal

Precisions:
  o Single (4-byte)
  o Double (8-byte)
  o Quadruple (16-byte)

I am specifically looking at both fixed-point and scientific notation.

John P. Baker

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
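As an aside, for illustration only (this is a generic Python sketch, not an IBM language feature): the single and double binary precisions listed above correspond to 4- and 8-byte IEEE 754 encodings, which can be demonstrated with Python's struct module. Quadruple (16-byte, binary128) has no struct format code, and the decimal and hexadecimal formats are not covered here.

```python
import struct

# Pack a value into big-endian binary32 (single) and binary64 (double).
# The byte lengths match the 4-byte and 8-byte precisions in question.
single = struct.pack(">f", 3.14)  # IEEE 754 single precision
double = struct.pack(">d", 3.14)  # IEEE 754 double precision

print(len(single))  # 4
print(len(double))  # 8
```

The ">" prefix forces big-endian byte order, which matches the byte ordering used on IBM z/Architecture machines.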