On 27/04/2017 1:28 AM, David W Noon wrote:
>> I am currently using an interpretive loop to format the decimal data.
>> I am going to look into using __EDMK() instead. (And yes, the volume,
>> potentially millions of iterations per day, justifies the effort.)
>
> Do you know the size and scaling factor of the packed decimal values at
> compile time? If so, the __EDMK() call could be faster than printf(), as
> those format interpreters [printf(), sprintf(), etc.] often run like
> treacle in winter.

Indeed. And C++ iostreams are even worse! I once significantly improved the
performance of a C++ function by replacing ostringstream with a simple
routine that converted STCK units to std::string.

// Requires <string>, <cstring>, <cstdint> and the z/OS <builtins.h>.
inline std::string stck2str( uint64_t stck )
{
    std::string result;
    result.reserve( 16 );
    stck >>= 12;             // STCK bit 51 ticks once per microsecond
    char decwork[16];        // CVDG output: packed decimal work area
    __cvdg( stck, decwork ); // convert binary to packed decimal
    // ED mask: fill byte (EBCDIC blank), digit selectors (0x20) with a
    // significance starter (0x21), an EBCDIC period (0x4B) between the
    // seconds and the six microsecond digits
    static const char MASK[] =
        "\x40\x20\x20\x20\x20\x20\x20\x20\x21\x20\x4B\x20\x20\x20\x20\x20\x20";
    char time[sizeof MASK - 1];
    memcpy( time, MASK, sizeof time );
    // edit the low-order 8 bytes (15 digits + sign) of the packed value
    __ed( (unsigned char *)time, (unsigned char *)decwork + 8, sizeof time - 1 );
    size_t trimlen;          // skip the leading blanks ED produced
    for ( trimlen = 0; trimlen < sizeof time && time[trimlen] == ' '; trimlen++ )
        ;
    result.append( time + trimlen, sizeof time - trimlen );
    return result;
}
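For anyone reading along without a z/OS compiler, here is a portable sketch of the same idea: convert a microsecond count to a seconds.microseconds string by peeling digits by hand, with no format interpreter involved. The function name usec2str is my own invention, not part of the routine above, and it takes microseconds directly rather than raw STCK units.

```cpp
#include <cstdint>
#include <string>

// Portable stand-in (hypothetical name): format a microsecond count as
// "seconds.microseconds", the same text the ED mask above produces after
// trimming, without calling printf()/sprintf().
inline std::string usec2str( uint64_t usec )
{
    char buf[32];
    char *p = buf + sizeof buf;      // fill the buffer right to left
    uint64_t frac = usec % 1000000;  // fractional microseconds
    uint64_t secs = usec / 1000000;  // whole seconds
    for ( int i = 0; i < 6; i++ ) {  // always six fractional digits
        *--p = char('0' + frac % 10);
        frac /= 10;
    }
    *--p = '.';
    do {                             // at least one integer digit
        *--p = char('0' + secs % 10);
        secs /= 10;
    } while ( secs != 0 );
    return std::string( p, buf + sizeof buf - p );
}
```

It also makes a handy cross-check for the mainframe version: feed both the same microsecond value and the strings should match.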

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
