Hello Dmitry,

Thanks for your reply, but I think you haven't understood what I am trying to explain.
I have binary content that was copied from memory by an application written in Delphi. I am porting the application to Mac OS X 10.5, but I need to use the same structures as in Delphi and Kylix, where "NOT PACKED" records are aligned. The fact is that the FPC compiler on Intel-based Mac OS X builds applications whose records are packed to 4-byte boundaries by default. I need to get the same behavior on Intel-based Mac OS X as on the other platforms. What should I do to build an FPC compiler that behaves the same on all platforms?

TRoland;

<<< 27.4.2009 17:04 - dmitry boyarintsev "skalogryz.li...@gmail.com" >>>
db> use:
db>
db> TYPE TMyOne = packed record
db>   First: integer;
db>   Second: extended;
db> END;
db>
db> to be sure about the 20-byte boundary.
db>
db> thanks,
db> dmitry
db>
db> Hello All,
db>
db> I have found some new information about this problem:
db>
db> 1.
db>
db> TYPE TMyOne = record
db>   First: integer;
db>   Second: integer;
db> end;
db>
db> ... then the size of this object is 2*4 bytes.
db>
db> 2.
db>
db> TYPE TMyOne = record
db>   First: integer;
db>   Second: double;
db> END;
db>
db> ... where I would expect 4 + 8 bytes = 12 bytes,
db> ... but the size of this object is 16!
db>
db> 3.
db>
db> TYPE TMyOne = record
db>   First: integer;
db>   Second: extended;
db> END;
db>
db> ... where I would expect 4 + 16 bytes = 20 bytes,
db> ... but the size of this object is 32!
db>
db> So this shows me that the compiler aligns the data structure to a
db> multiple of the size of its largest element. I understand this
db> behavior, and I see it in these cases:
db>
db> 1. Delphi 7
db> 2. Kylix 3
db> 3. Lazarus on Ubuntu
db> 4. Lazarus on PowerPC Mac OS X 10.5
db>
db> but on an Intel-based Mac OS X 10.5 Mini with an Intel Core 2 Duo it
db> DOESN'T. Therefore I have problems parsing the binaries back into
db> memory.
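One way to make the on-disk layout identical on every platform is to take Dmitry's advice one step further: declare the records that are written to the binary as packed and add the padding fields by hand, so the layout no longer depends on any compiler's alignment rules. A minimal sketch (the record name and field sizes mirror the examples in the quoted message; the padding field is my own addition, so verify SizeOf on each target):

```pascal
program LayoutCheck;
{$MODE DELPHI}

type
  // packed + explicit padding: the same layout in Delphi, Kylix and
  // FPC, regardless of the platform's default alignment rules.
  TMyOne = packed record
    First: Integer;             // 4 bytes
    _Pad: array[0..3] of Byte;  // the 4 bytes an aligning compiler inserts implicitly
    Second: Double;             // 8 bytes
  end;

begin
  // Should print 16 everywhere, matching the aligned Delphi layout.
  WriteLn('SizeOf(TMyOne) = ', SizeOf(TMyOne));
end.
```

Alternatively, FPC's {$PACKRECORDS n} directive can request a specific maximum alignment, but explicit padding is the only layout that needs no per-compiler verification.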
db> I don't expect a CPU-specific problem, otherwise the existing
db> projects would not work, so I think it must be a compiler-specific
db> problem.
db>
db> Do you have any idea?
db>
db> Thanks in advance.
db>
db> Greetings, TRoland;
db>
db> <<< 25.4.2009 9:43 - Roland Turcan "k...@rotursoft.sk" >>>
db> RT>> Hello Diettrich,
db> RT>>
db> RT>> To tell the truth, I inherited this code and style from the previous
db> RT>> developer, and I really don't know why he decided to take the size of
db> RT>> the header from the binary instead of from its type. The fact is that
db> RT>> this "optimistic" style of coding appears in many places. :-|
db> RT>>
db> RT>> TRoland;
db> RT>>
db> RT>> <<< 24.4.2009 19:56 - Hans-Peter Diettrich "drdiettri...@aol.com" >>>
HPD>>> Roland Turcan schrieb:
>>>> BB> How is HeaderLen declared?
>>>>
>>>> Stream.Read (HeaderLen, SIZEOF (HeaderLen));
>>>>
>>>> where the header's length is stored in the binary.
HPD>>>
HPD>>> Then you should verify that HeaderLen <= SizeOf(FHeader) before
HPD>>> Stream.Read (FHeader, HeaderLen);
HPD>>> Otherwise this statement will overwrite the following FItem data, with
HPD>>> the fatal consequences you already experienced.
HPD>>>
HPD>>> When a header may ever change in size (or structure), it is wise to
HPD>>> store a version number in the data files. Then you can read the stored
HPD>>> header data into the exactly matching header type (record), and
HPD>>> convert that record into the current THeader definition, field by field.
HPD>>>
HPD>>> DoDi

-- 
Best regards, TRoland
http://www.rotursoft.sk
http://exekutor.rotursoft.sk

_______________________________________________
Lazarus mailing list
Lazarus@lazarus.freepascal.org
http://www.lazarus.freepascal.org/mailman/listinfo/lazarus
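Picking up DoDi's suggestion quoted above, a defensive version of the header read could look like this (a sketch only; THeader, its fields and the exception policy are my assumptions, not code from the original application):

```pascal
uses
  Classes, SysUtils;

type
  THeader = packed record   // hypothetical header layout
    Version: LongInt;
    DataSize: LongInt;
  end;

procedure ReadHeader(Stream: TStream; out FHeader: THeader);
var
  HeaderLen: LongInt;
begin
  Stream.ReadBuffer(HeaderLen, SizeOf(HeaderLen));
  // Refuse to read more bytes than the record can hold; otherwise the
  // data following FHeader (e.g. FItem) would be overwritten.
  if (HeaderLen < 0) or (HeaderLen > SizeOf(FHeader)) then
    raise Exception.CreateFmt('Invalid header length %d (max %d)',
      [HeaderLen, SizeOf(FHeader)]);
  FillChar(FHeader, SizeOf(FHeader), 0); // zero the fields a shorter, older header omits
  Stream.ReadBuffer(FHeader, HeaderLen);
end;
```

ReadBuffer (unlike Read) raises an exception on a short read, which is usually what you want when parsing a fixed binary format.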