Re: [OT] Generator testing
Bruce accidentally sent this off-list, but said I could forward it here for benefit of all... Thanks again, Bruce, for the good info!

-- Forwarded message --
From: Bruce Dawson j...@codemeta.com
Date: Wed, Sep 9, 2009 at 10:14 PM
Subject: Re: [OT] Generator testing
To: Ben Scott dragonh...@gmail.com

Ben Scott wrote: I know many UPSes don't like the output from some generators, for example -- they stay on battery (which runs down and then drops the load).

My experience has been this is because the UPS doesn't like the phase output of the generator. This can usually be resolved by using a generator that has sine wave output. Another (probably easier to find) solution is to use a generator that has a 220 socket. Some of the better UPSs don't like the phases output by single-coil generators, and getting a generator with a 220 socket tends to produce a better waveform. Note that you don't have to use the 220 receptacle; just find a generator that has the capability.

Keep in mind that generators will tend to raise or drop voltage whenever the load changes; however, all of our UPSs (mostly APC) tend to cope with that. They'll beep until the generator is able to come back up to speed. If the generator doesn't come up to speed, then you're probably loading it beyond its capacity - or have a really cheap one that doesn't auto-throttle according to the load (or have a governor-controlled throttle).

--Bruce

___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
Re: [OT] Generator testing
Ben Scott wrote: Bruce accidentally sent this off-list, but said I could forward it here for benefit of all... Thanks again, Bruce, for the good info!

Thank you for the forward. snip

I have a Generac/Siemens 13 kVA generator wired through a transfer switch to my 150 A service. What Bruce states above is the case with this setup. The generator puts out slightly less than 110 volts at 58 Hz at the low point, and when put under heavier load comes closer to 60 Hz and 110 V. I can hear the RPMs of the generator fluctuating when this happens. Load increases, voltage drops slightly, the governor increases the throttle. This is normal and expected behavior from the generator.
When I had the system installed I was informed that I would have to select UPS units that can condition power within a window of voltage/power/frequency input. I am using an APC Smart-UPS on my rack that can deal with this quite well, with minimal switching from battery to outlet. The good news is that the generator comes up in 20 seconds, so the UPS capacity does not have to be hours but only minutes of run time for the load: enough to cover the 20 seconds of downtime before the transfer switch changes the power source from utility to generator, and the few seconds of fluctuation outside the UPS's input tolerance that happen during run time. I had a few cheapo UPS units that have zero input tolerance and can now be used as door stops. The ice storm last winter was a 5-day test.
Re: [OT] Generator testing
Sounds like a good explanation, but it doesn't fit with my experience. Several years ago I had a problem with my UPS refusing to let the generator take over. It was a high-quality (Honda) generator with more than enough capacity and both 120 and 240 output. I suspected that the problem was related to the waveform but was too lazy to troubleshoot it. I just took the UPS out and let the generator do its job. Bottom line is that you should actually test your UPS/generator combination together to make sure that they are compatible.

Mike Miller

On Thu, 2009-09-10 at 08:25 -0400, Ben Scott wrote: Bruce accidentally sent this off-list, but said I could forward it here for benefit of all... Thanks again, Bruce, for the good info! snip
Re: Packing/unpacking binary data in C - doubles, 64 bits
Bruce Labitt writes:

Kevin D. Clark wrote: 2: Typically, binary stuff is sent over the network in network byte order and network byte order is big-endian. This statement is not universally agreed to -- in fact I used to work at a shop where they'd never even considered this problem and it turned out that they were sending (most) stuff over the wire in little-endian format.

That only works if both ends are the same - definitely not portable. In my case, the client is little-endian and the server is big-endian.

No, that always works and it is definitely portable. Read what I said again: when you transmit binary integers onto the wire, make sure they exist in network byte order. May I politely suggest that you consult a decent computer networking book? Please take a look at the functions htonl() and ntohl().

Question: from your various postings on this list, I gather that you are using MPI. If this is true, why aren't you just using things like MPI_INT, MPI_DOUBLE, and possibly MPI_LONG_LONG? Why not let your MPI library take care of details like this for you? I guarantee you that any decent MPI implementation is going to be well-debugged and efficient. It should also take care of any endian issues that you might encounter.

Regards,

--kevin
GnuPG ID: B280F24E
alumni.unh.edu!kdc
http://kdc-blog.blogspot.com/
"God, I loved that Pontiac." -- Tom Waits
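For readers following along, the htonl()/ntohl() round trip Kevin mentions looks roughly like this in C. This is a minimal sketch; pack_u32/unpack_u32 are illustrative names, not standard functions:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* htonl(), ntohl() */

/* Serialize a 32-bit unsigned integer into a byte buffer in
 * network byte order (big-endian), and read it back out. */
static void pack_u32(unsigned char *buf, uint32_t value) {
    uint32_t net = htonl(value);      /* host -> network order */
    memcpy(buf, &net, sizeof net);    /* memcpy avoids alignment traps */
}

static uint32_t unpack_u32(const unsigned char *buf) {
    uint32_t net;
    memcpy(&net, buf, sizeof net);
    return ntohl(net);                /* network -> host order */
}
```

Using memcpy rather than pointer casts sidesteps strict-aliasing and unaligned-access problems on pickier architectures.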
Re: Packing/unpacking binary data in C - doubles, 64 bits
gnhlug-discuss-boun...@mail.gnhlug.org wrote on 09/10/2009 10:12:12 AM:

Bruce Labitt writes: Kevin D. Clark wrote: 2: Typically, binary stuff is sent over the network in network byte order and network byte order is big-endian. This statement is not universally agreed to -- in fact I used to work at a shop where they'd never even considered this problem and it turned out that they were sending (most) stuff over the wire in little-endian format. That only works if both ends are the same - definitely not portable. In my case, the client is little-endian and the server is big-endian. No, that always works and it is definitely portable. Read what I said again: when you transmit binary integers onto the wire, make sure they exist in network-byte-order.

Semantics... little-endian format != network byte order (== big-endian format). What I need is: little-endian host data -> pack/hton -> network byte order (big-endian) on the wire.

I'm trying to write a 'universal' pack and unpack for C that supports the following types: 8 bit, 16 bit, 32 bit, and 64 bit signed and unsigned ints, and float32 as well as float64.

May I politely suggest that you consult a decent computer networking book? Please take a look at the functions htonl() and ntohl().

That is where my problems started :) I started with something in a book. Unfortunately my htonl() and ntohl() are 32 bit at best. The documentation for C and 64 bits is sketchy. It has been a pain to say the least - everyone has a different implementation and it is hard to find definitive answers sometimes. I compute in a 64 bit world... with doubles, and complex doubles.

Question: from your various postings on this list, I gather that you are using MPI. If this is true, why aren't you just using things like MPI_INT, MPI_DOUBLE, and possibly MPI_LONG_LONG? Why not let your MPI library take care of details like this for you? I guarantee you that any decent MPI implementation is going to be well-debugged and efficient.
It should also take care of any endian issues that you might encounter.

Actually, I am using OpenMP on my Cell to use all 4 PPC threads, and FFTW to use all the SPUs for FFTs. Since I only have one blade, I see no value in MPI. If I had more than a single blade, MPI rapidly becomes more attractive.

Cheers, Bruce

Something clever should go here...

** Neither the footer nor anything else in this E-mail is intended to or constitutes an electronic signature and/or legally binding agreement in the absence of an express statement or Autoliv policy and/or procedure to the contrary. This E-mail and any attachments hereto are Autoliv property and may contain legally privileged, confidential and/or proprietary information. The recipient of this E-mail is prohibited from distributing, copying, forwarding or in any way disseminating any material contained within this E-mail without prior written permission from the author. If you receive this E-mail in error, please immediately notify the author and delete this E-mail. Autoliv disclaims all responsibility and liability for the consequences of any person who fails to abide by the terms herein. **
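Since htonl() tops out at 32 bits, one endian-agnostic way to get a 64-bit integer onto the wire is to emit the bytes explicitly with shifts; the code then never needs to know the host's byte order at all. A sketch (pack_u64/unpack_u64 are made-up names; assumes C99 stdint.h):

```c
#include <stdint.h>

/* Write a 64-bit unsigned integer as 8 bytes, most significant
 * byte first (network order), regardless of host endianness. */
static void pack_u64(unsigned char *buf, uint64_t v) {
    for (int i = 0; i < 8; i++)
        buf[i] = (unsigned char)(v >> (56 - 8 * i));
}

/* Rebuild the integer by shifting each byte back in. */
static uint64_t unpack_u64(const unsigned char *buf) {
    uint64_t v = 0;
    for (int i = 0; i < 8; i++)
        v = (v << 8) | buf[i];
    return v;
}
```

Because the shifts operate on values rather than memory layout, the same source works unchanged on the little-endian client and the big-endian Cell server.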
Re: great big gobs of RAM and piles of cores to boot
Just an FYI, I've experienced some unpleasant problems with local serial consoles breaking with Ubuntu Hardy under Xen 3.2 (xen-tools debootstrap). It may have a lot to do with the Debian Lenny host system I was using, though. If it's certified by Canonical and you can get support, you should be fine on the distro front. I think your most worrying concern should be the xen network bridges.

Eric

Alan Johnson wrote: On Tue, Sep 8, 2009 at 6:33 PM, Eric Stein t...@des.truct.org wrote: What is Ubuntu Hardy server going to be running as? Dom0 or DomU? Both? Both. Probably one or 2 Jaunty guests as well. If DomU, how are you planning to bootstrap? I forget the default in Ubuntu xen-tools, but debootstrap perhaps? I typically make some template images and copy them around, editing config, so I forget the bootstrap options after a bit. What CPUs are you racking? By racking you mean putting in the rack? If so, these are Opterons. 8000 series in the 685s and 2435s in the 465s, if I get any of those. -- Alan Johnson a...@datdec.com
Re: Packing/unpacking binary data in C - doubles, 64 bits
On 09/10/2009 10:12 AM, Kevin D. Clark wrote: snip

Since this is my field of expertise I might as well comment. First, the htonl(3) and ntohl(3) functions are designed for 32-bit integers only. They will not work for 64-bit longs (or long longs). I suggest you use the bswap_64 or bswap_32 macros. For data communication I strongly recommend not sending binary integral or floating point data. Floating point is also problematic. A C double (IEEE) is 1 sign bit, 11 exponent bits, and 52 mantissa bits. Certainly if you are using MPI, then the MPI-provided functions are useful. Also, I believe the bswap macros are defined in the latest C standard, not the 1989 one.
Also, most little-endian systems (x86, Digital Alpha, IA64) implement these macros to perform the proper byte swap into network byte order, and big-endian systems (such as Sun SPARC, HP PA-RISC and IA64 (HP-UX)) define the macros as no-ops. Many systems use a TLD (Type, Length, Data) notation to send data. This way all data transmitted is in an ASCII (or EBCDIC) external encoding, so the receiving and sending systems can be of different endianness, or even have different integer sizes, such as 36 bits. Remember that the C language does not specify an integer size, just a range. Today most 64-bit Unix/Linux systems use the LP64 model, but Windows uses LLP64 (I think, but I may be wrong). HP and IBM have some very good online white papers and porting guides dealing not only with endianness, but also with 32-bit to 64-bit issues. I wrote some of the HP stuff and most of it is still the text I wrote.

-- Jerry Feldman g...@blu.org
Boston Linux and Unix
PGP key id: 537C5846
PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB CA3B 4607 4319 537C 5846
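The bswap macros Jerry mentions come from <byteswap.h>, and the byte-order test from <endian.h>; both are glibc extensions rather than standard C, so treat this as a sketch under that assumption (htonll_sketch is an illustrative name, not a real library function):

```c
#include <stdint.h>
#include <byteswap.h>   /* bswap_64 -- glibc extension */
#include <endian.h>     /* __BYTE_ORDER -- glibc extension */

/* Convert a 64-bit value to big-endian (network) order.
 * On big-endian hosts this compiles to a no-op, mirroring how
 * htonl() behaves on SPARC or PA-RISC; on little-endian hosts
 * it swaps all eight bytes. */
static uint64_t htonll_sketch(uint64_t v) {
#if __BYTE_ORDER == __LITTLE_ENDIAN
    return bswap_64(v);
#else
    return v;
#endif
}
```

Applying the function twice returns the original value, so the same routine serves for both the hton and ntoh directions.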
Re: Packing/unpacking binary data in C - doubles, 64 bits
We keep seeing the recommendation to use highly-portable encodings when possible, e.g., ASCII, or some kind of self-descriptive encoding. Which I fully agree is a very good idea. But assume for the sake of discussion we want to keep overhead as low as possible for performance reasons, and "wait until computers get faster" isn't a practical solution. What techniques, best practices, de facto standards, popular libraries, etc., exist for this sort of thing?

Obviously, putting unsigned integers into network byte order for transmission is one such best practice. What about signed integers? Can one expect hton*() and ntoh*() to work for signed integers as well? IIRC, most machines store signed ints in two's-complement format, which I think would survive and work properly if swapped to compensate for an endianness change, but I'm not sure. What about floating point?

-- Ben
Re: Packing/unpacking binary data in C - doubles, 64 bits
Ben Scott writes: We keep seeing the recommendation to use highly-portable encodings when possible, e.g., ASCII, or some kind of self-descriptive encoding. Which I fully agree is a very good idea. But assume for the sake of discussion we want to keep overhead as low as possible for performance reasons, and "wait until computers get faster" isn't a practical solution. What techniques, best practices, de facto standards, popular libraries, etc., exist for this sort of thing?

If somebody were to disallow me from suggesting a solution to this problem which relied upon ASCII text, then my next proposed solution would be to use ASN.1 with either a DER or PER encoding. If you think that this is a better solution, then more power to you.

Obviously, putting unsigned integers into network byte order for transmission is one such best practice. What about signed integers? Can one expect hton*() and ntoh*() to work for signed integers as well? IIRC, most machines store signed ints in two's-complement format, which I think would survive and work properly if swapped to compensate for an endianness change, but I'm not sure.

Yes, htonl() et al. work just fine with two's-complement machines. IIRC, I haven't worked with a machine that didn't use two's complement as its internal representation for integers in over two decades, so I really am not enthusiastic about investigating this particular problem. Yeah, yeah, somebody might be tempted at this point to chirp up with a statement like "but I used to use the lovely Frobozz 9000 machine (which used one's complement) back in the 80's to perform ray-tracing and accounting functions!!!", to which I will pre-emptively respond "Yawn."

What about floating point?

ASN.1 and BER/PER encoding...

Regards,

--kevin
GnuPG ID: B280F24E
alumni.unh.edu!kdc
http://kdc-blog.blogspot.com/
"God, I loved that Pontiac." -- Tom Waits
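Kevin's point about signed values surviving the round trip on two's-complement hardware is easy to check: the byte swap only rearranges the bit pattern, and on two's-complement machines the bit pattern is the value. A sketch (roundtrip_i32 is an illustrative helper, not a library function):

```c
#include <stdint.h>
#include <arpa/inet.h>  /* htonl(), ntohl() */

/* Round-trip a signed 32-bit integer through network byte order.
 * The cast to uint32_t reinterprets the two's-complement bit
 * pattern; swapping and swapping back restores it exactly, so
 * negative values come through intact. */
static int32_t roundtrip_i32(int32_t v) {
    uint32_t net = htonl((uint32_t)v);  /* host -> network */
    return (int32_t)ntohl(net);         /* network -> host */
}
```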
Re: Packing/unpacking binary data in C - doubles, 64 bits
On 09/10/2009 02:54 PM, Ben Scott wrote: What techniques, best practices, de facto standards, popular libraries, etc., exist for this sort of thing?

MPI was already mentioned: http://www.open-mpi.org/faq/?category=supported-systems#heterogeneous-support those guys are performance freaks. Here's a new library for working with its data types: http://docs.google.com/gview?a=vq=cache:Di0F4vRzSb8J:www.mcs.anl.gov/uploads/cels/papers/P1635.pdf+mpi+1.3+data+typeshl=engl=us

-Bill

-- Bill McGonigle, Owner BFC Computing, LLC http://bfccomputing.com/ Telephone: +1.603.448.4440 Email, IM, VOIP: b...@bfccomputing.com VCard: http://bfccomputing.com/vcard/bill.vcf Social networks: bill_mcgonigle/bill.mcgonigle
Re: Packing/unpacking binary data in C - doubles, 64 bits
Bruce/Ben, I've some experience with binary-oriented endian issues on about 15 different platforms (Sun, SGI, Intel/AMD Windows PCs, Tandem/Compaq/HP NonStop 'mainframes', HP-UX workstations, Linux, DEC Unix, and several flavors of Unix that probably do not still exist). Basically INT signed or unsigned byte swaps (hton/ntoh for 32 bit and custom for 64 bit) have always worked as far back as I can remember (about 15 years). Floats and doubles are ALWAYS a pain, as different, non-endian-related format standards exist.

I assume, since this is up for debate, that this is a homegrown protocol and there is no defined and documented 'network standard' format. The simplest and most obvious answer is pick one (which you seem to be trying to do). Which one is a bit more complex, as this long-winded post will hopefully show. Typically integers use big-endian format as the network format, so go with that.

When it comes to floats/doubles, I'd suggest you simply pick a format as the 'standard' for this protocol and move on. I've seen people get hung up on this point, delaying schedules for weeks. If you are defining the protocol it is hard to be 'wrong' (but easy to be inconvenient :). A typical 'standard' floating point network format is a sign bit followed by N exponent bits followed by M 'mantissa' bits; see the IEEE 754-2008 interchange format for a more 'in depth' discussion.

If you do not mind a custom float encoding as part of the network format, several come to mind. A choice I've used in the past is to define the number of significant decimal digits in my 'network floating point' format and then use 32 or 64 bit ints, basically a predefined scaled-integer approach. For example, if I define my network floating point to have 4 digits to the right of the decimal as a format, I can multiply the float by 10,000, round, and then transmit as an integer (the receiver converts the int to host format then divides by 10,000 to get the float back).
This of course only works if your 'adjusted floats' will always fall into the usable int64 range and always have a reasonable 'fixed' number of decimal digits - not always possible, but REALLY convenient when it works (some signal processing apps used to use this). This method theoretically has the most significant-digit support, as all 64 bits generate significant digits (typically moot, as some get tossed due to the fixed scaling). Another technique is to store a signed 59-bit int (sign bit + 58 unsigned int bits) and a 5-bit value to define the number of decimal places, providing something like 15 significant digits split up any which way about the decimal point (I believe this is a variant of IEEE 754-2008). This is basically similar to a 'normal C float' but has a defined order independent of the CPU/OS and avoids mantissa conversion, which can get ugly. Depending on how much you can squeeze the size of the exponent down (5 bits? 4 bits?), this theoretically has the second-best support for large numbers of significant digits (in practice it may exceed the first example, depending on data ranges). I'm sure you can come up with several other custom techniques. The upside of custom encodings is they can be very space efficient. The downside is they may be very custom to the protocol. This is usually OK if the protocol is 'internal only' (or hidden behind an API) but can look somewhat hokey if the protocol is to be published (the fixed-value 'scaled int32/64' approach is not unheard of in some sensor manufacturer docs from years back, and the sign + 58-bit int + 5-bit base-10 exponent is similar to IEEE 754-2008). It appears you dislike suggestions of 'use ASCII' as they are space inefficient, but yet you seem to want a 'standard'.
Well, my experience tells me that the new 'standard' is ASCII/XML, and the old 'standard' is something along the lines of IEEE 754 (a defined sign+exponent+mantissa format; you could even use a Cell processor float as the 'network float' since it is 'your' standard being defined). The most space-efficient 'network format' is likely to be custom, based on knowledge of the specific application data you are processing (e.g., signal amplitude on a scale of -100.00 to +100.00 easily fits in an INT16; data values locked between -.99 and +.99 easily fit in a byte). So my recommendation is big-endian ints, and either IEEE 754-2008 interchange format floats, or custom floats if your app REALLY requires it. If you are ALWAYS going to use Cell processors, you could even define the Cell float format as the network format to avoid ntoh and hton costs on the Cell. Good luck. Ben Scott wrote: We keep seeing the recommendation to use highly-portable encodings when possible, e.g., ASCII, or some kind of self-descriptive encoding. Which I fully agree is a very good idea. But assume for the sake of discussion we want to keep overhead as low as possible for performance reasons, and wait until computers get faster isn't a practical solution.
Re: Packing/unpacking binary data in C - doubles, 64 bits
On Thu, Sep 10, 2009 at 5:06 PM, Dana Nowell dananow...@cornerstonesoftware.com wrote: Which one is a bit more complex as this long winded post will hopefully show.

Thanks for that response; it's very informative!

A typical 'standard' floating point network format is a sign bit followed by N exponent bits followed by M 'mantissa' bits, see IEEE 754-2008 interchange format for a more 'in depth' discussion.

What about byte order (endianness)? Does IEEE 754-2008 define that? When Bruce asked, I Googled but found inconsistent answers and gave up. I suspect it may be that the original IEEE 754 spec did not specify but later revisions did; is that the case? Are there standard or common C facilities to convert whatever one's machine might use for float to something sure to be IEEE 754-2008?

It appears you dislike suggestions of 'use ASCII' as they are space inefficient ...

His application is in the realm of an HPC application -- the lower end of HPC, but there. In the past he has mentioned 100+ megabytes of data. The CPU overhead of converting ASCII/binary and back again can become rather significant. So can the increased network bandwidth used by a less-compact ASCII representation. Just for the sake of example: Bruce said 160 MB of data. Let's assume it's all 4-byte integers. That's roughly 42 million integers. Calling sprintf() and sscanf() 42 million times is going to slow things down. Likewise, if we assume a newline-separated format and all significant digits used, an ASCII representation is going to use 11 bytes per integer, turning 160 MB into 440 MB.

-- Ben
Re: Packing/unpacking binary data in C - doubles, 64 bits
gnhlug-discuss-boun...@mail.gnhlug.org wrote on 09/10/2009 04:33:10 PM: Ben Scott writes: We keep seeing the recommendation to use highly-portable encodings when possible, e.g., ASCII, or some kind of self-descriptive encoding. Which I fully agree is a very good idea. But assume for the sake of discussion we want to keep overhead as low as possible for performance reasons, and wait until computers get faster isn't a practical solution. What techniques, best practices, de facto standards, popular libraries, etc., exist for this sort of thing?

FWIW, I have something working for doubles now. And 64 bit integers up to 2^53.

That sounds curiously like a floating point problem.

It packs and unpacks chars, 16 bit ints, 32 bit ints, 64 bit ints (with the limit noted), SP floats and DP floats. Any suggestions on where to look for a 2^53 type problem? If I was any good at C, this would be A LOT EASIER... :P

If somebody were to disallow me from suggesting a solution to this problem which relied upon ASCII text, then my next proposed solution would be to use ASN.1 with either a DER or PER encoding. If you think that this is a better solution, then more power to you.

I can't imagine ever using that. I spent 3 minutes drilling down on their site. From what I could glean, the ASN.1 standard effectively knows nothing about double or quad precision numbers, although it knows about arbitrary-length integers.

Obviously, putting unsigned integers into network byte order for transmission is one such best practice. What about signed integers? Can one expect hton*() and ntoh*() to work for signed integers as well? IIRC, most machines store signed ints in two's-complement format, which I think would survive and work properly if swapped to compensate for an endianness change, but I'm not sure.

Yes, htonl() et al. work just fine with two's-complement machines.

The code snippet with all the uint64_t's (htonll()) dies for N > 2^53. So it is not so straightforward...

What about floating point?
ASN.1 and BER/PER encoding...

Where is there a reference to sending DPFP in ASN.1 or BER/PER encoding? Maybe I missed it.

Regards, Bruce
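The 2^53 ceiling Bruce is hitting is the classic symptom of a 64-bit integer passing through a double somewhere in the pack/unpack path: an IEEE-754 double has only 53 bits of effective significand (52 stored), so integers above 2^53 can no longer all be represented exactly. A quick check to confirm the diagnosis (survives_double is an illustrative helper):

```c
#include <stdint.h>

/* Returns 1 if v survives a round trip through a double unchanged.
 * Every integer up to 2^53 does; beyond that, gaps appear because
 * a double's significand runs out of bits, so for example
 * 2^53 + 1 silently rounds to 2^53. */
static int survives_double(uint64_t v) {
    return (uint64_t)(double)v == v;
}
```

If values near your failure threshold fail this test, the fix is to keep 64-bit integers in integer variables end to end and never route them through a double or a floating-point pack routine.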
Re: Packing/unpacking binary data in C - doubles, 64 bits
Dana Nowell dananow...@cornerstonesoftware.com wrote on 09/10/2009 05:06:24 PM: snip

Thanks for your well-thought-out post. I think my standard is simple. Integers are converted to big-endian. Floats/doubles are converted to IEEE 754-2008 format and then converted to network format. I'm not trying to make a career of this, just something that works well enough so I can get my job done. A bonus is if it is a wee bit portable. The development tools on PCs are way better than on the Cell.

snip It appears you dislike suggestions of 'use ASCII' as they are space inefficient but yet you seem to want a 'standard'. Well my experience tells me that the new 'standard' is ASCII/XML, the old 'standard' is something along the lines of IEEE 754 snip

IEEE 754 is more than good enough. I do find it hard to believe, for engineering-heavy work, that the standard is ASCII...

Good luck.

I'll need it :) Thanks!
Bruce
Re: [OT] Generator testing
Ben Scott writes:

> But it's funny, lots of people have disaster stories... it seems
> everyone knows what *not* to do, or what can go wrong... but if you
> ask what you *should* do, people are less certain. :)

Umm... yeah. Speaking of what not to do: don't keep adding load to your building transformer without noticing it. I worked at a place where, after adding more and more servers and lab equipment every month, the transformer outside the building was way overloaded 24/7. I think it was already scheduled for upgrade, but that wasn't soon enough. It started leaking oil into the parking lot one day, where someone noticed it and called the power company. They got there really quickly and shut it off immediately, before it could fail spectacularly.

-- Dave
Re: Packing/unpacking binary data in C - doubles, 64 bits
> Any suggestions to look for a 2^53 type problem?

Well, just for fun, how about going back to basics - what does this little program generate on all the systems in question?

#include <stdio.h>

int main(void)
{
    printf( "sizeof(            double) %2zu\n", sizeof(double) );
    printf( "sizeof(             float) %2zu\n", sizeof(float) );
    printf( "sizeof(       long double) %2zu\n", sizeof(long double) );
    printf( "sizeof(  signed long long) %2zu\n", sizeof(signed long long) );
    printf( "sizeof(unsigned long long) %2zu\n", sizeof(unsigned long long) );
    printf( "sizeof(       signed char) %2zu\n", sizeof(signed char) );
    printf( "sizeof(       signed long) %2zu\n", sizeof(signed long) );
    printf( "sizeof(      signed short) %2zu\n", sizeof(signed short) );
    printf( "sizeof(     unsigned char) %2zu\n", sizeof(unsigned char) );
    printf( "sizeof(     unsigned long) %2zu\n", sizeof(unsigned long) );
    printf( "sizeof(    unsigned short) %2zu\n", sizeof(unsigned short) );
    return 0;
}
Re: Packing/unpacking binary data in C - doubles, 64 bits
Oops - I forgot about the void * ...

> > Any suggestions to look for a 2^53 type problem?
>
> Well, just for fun, how about going back to basics - what does this
> little program generate on all the systems in question?

#include <stdio.h>

int main(void)
{
    printf( "sizeof(            double) %2zu\n", sizeof(double) );
    printf( "sizeof(             float) %2zu\n", sizeof(float) );
    printf( "sizeof(       long double) %2zu\n", sizeof(long double) );
    printf( "sizeof(  signed long long) %2zu\n", sizeof(signed long long) );
    printf( "sizeof(unsigned long long) %2zu\n", sizeof(unsigned long long) );
    printf( "sizeof(       signed char) %2zu\n", sizeof(signed char) );
    printf( "sizeof(       signed long) %2zu\n", sizeof(signed long) );
    printf( "sizeof(      signed short) %2zu\n", sizeof(signed short) );
    printf( "sizeof(     unsigned char) %2zu\n", sizeof(unsigned char) );
    printf( "sizeof(     unsigned long) %2zu\n", sizeof(unsigned long) );
    printf( "sizeof(    unsigned short) %2zu\n", sizeof(unsigned short) );
    printf( "sizeof(            void *) %2zu\n", sizeof(void *) );  /* forgot this one */
    return 0;
}
Re: Packing/unpacking binary data in C - doubles, 64 bits
Michael ODonnell wrote:

> > Any suggestions to look for a 2^53 type problem?
>
> Well, just for fun, how about going back to basics - what does this
> little program generate on all the systems in question?
>
> [sizeof program snipped]

I'll try it tomorrow. I've done some of these, but not all in one fell swoop. I know that sizeof(double) is 8 bytes and sizeof(long long) is 8 bytes on my P4. I'm pretty sure it is the same on the Cell, but I'll know for sure tomorrow.

-Bruce