I tried what Bill Campbell suggested: read the len first and then use that
to populate the structdef length field for the string:

    # note: renamed the header structdef from 'len' to 'hdr' so it no
    # longer shadows the builtin len()
    hdr = xstruct.structdef(xstruct.little_endian, [
        ('len', (xstruct.unsigned_long, 1)),
    ])
    h = hdr(buf[0:4])   # an unsigned long is 4 bytes, so slice [0:4], not [0:3]

    rec = xstruct.structdef(xstruct.little_endian, [
        ('len',  (xstruct.unsigned_long, 1)),
        ('id',   (xstruct.unsigned_long, 1)),
        ('name', (xstruct.string, h.len - 8)),
    ])
    n = rec(buf[0:h.len])
    print n.len, n.id, n.name

On Sat, Dec 20, 2008 at 11:54 AM, Ravi Kondamuru <ravikondam...@gmail.com> wrote:

> I am trying to use the xstruct module to unpack a variable-size record
> with the following structure:
>
>     struct nameid {
>         u32bits len;    /* total length */
>         u32bits id;
>         char name[];    /* name, variable length */
>     };
>
> As can be seen, the length of the name = len - (sizeof(len) + sizeof(id)).
>
> How do I use xstruct or struct to unpack such a structure?
>
>     n = xstruct.structdef(xstruct.little_endian, [
>         ('len',  (xstruct.unsigned_long, 1)),
>         ('id',   (xstruct.unsigned_long, 1)),
>         ('name', (xstruct.string, <?>)),
>     ])
>
> xstruct seems to expect the exact length to be specified in structdef.
> Is there a way to tell it to go to the end of the buffer passed?
>
> thanks,
> Ravi.
>
> On Fri, Dec 12, 2008 at 6:30 AM, bob gailer <bgai...@gmail.com> wrote:
>
>> Ravi Kondamuru wrote:
>>
>>> Denis, these are 32-bit and 64-bit counters (essentially numbers).
>>> Bob, there are well over 10K counters in the log file, updated every
>>> 5 secs. If counter1's graph is requested, the log has to be parsed
>>> once to get the data points. If a user then asks for counter2, it has
>>> to be retrieved from the log file, which essentially means going
>>> through the logfile again. This is because I want to avoid using a
>>> database to store values after parsing the file.
>>
>> Here is a little test program I wrote to check timing on a file of
>> *100,000* 64-bit counters.
>>
>>     import time, struct
>>     ctrs = 100000
>>
>>     s = time.time()
>>     # create a file of 800,000 bytes (100,000 64-bit numbers)
>>     f = open('ravi.dat', 'wb')
>>     for i in range(ctrs):
>>         # it does not matter what we write as long as it is 8 bytes
>>         f.write('abcdefgh')
>>     f.close()   # close so the data is flushed before reading it back
>>     print time.time() - s
>>
>>     s = time.time()
>>     l = []
>>     # read the file
>>     f = open('ravi.dat', 'rb').read()
>>
>>     # unpack each 8 bytes into an integer and collect in a list
>>     # (step over all ctrs*8 bytes, not just the first ctrs bytes)
>>     for b in range(0, ctrs * 8, 8):
>>         n = struct.unpack('q', f[b:b+8])[0]  # unpack returns a tuple
>>         l.append(n)
>>     print time.time() - s
>>
>> Writing the file took 0.14 seconds on my computer,
>> reading and unpacking 0.04 seconds.
>> I think you can set performance issues aside.
>>
>> There is a principle in programming that proposes:
>> 1 - get a program running. Preferably in Python, as development time
>>     is much faster.
>> 2 - check its performance.
>> 3 - refine as needed.
>>
>> --
>> Bob Gailer
>> Chapel Hill NC 919-636-4239
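[Editor's note] For readers without xstruct: the same variable-length nameid record can be unpacked with the standard library's struct module alone, by reading the fixed 8-byte header first and then slicing the name out by the computed length. This is a sketch, not xstruct's API; the layout (two little-endian u32s followed by `len - 8` name bytes) is taken from the struct definition in the question, and the pack helper and sample records are made up for illustration.

```python
import struct

def unpack_nameid(buf, offset=0):
    """Unpack one nameid record starting at offset.

    Layout (little-endian): u32 len (total record length, header
    included), u32 id, then (len - 8) bytes of name.
    Returns (rec_len, rec_id, name, next_offset).
    """
    rec_len, rec_id = struct.unpack_from('<LL', buf, offset)
    name = buf[offset + 8 : offset + rec_len]
    return rec_len, rec_id, name, offset + rec_len

def pack_nameid(rec_id, name):
    # helper for the demo: build one record with a correct len field
    return struct.pack('<LL', 8 + len(name), rec_id) + name

# walk a buffer holding two back-to-back records
buf = pack_nameid(1, b'alpha') + pack_nameid(2, b'beta')
records = []
off = 0
while off < len(buf):
    rec_len, rec_id, name, off = unpack_nameid(buf, off)
    records.append((rec_id, name))
# records is now [(1, b'alpha'), (2, b'beta')]
```

Because each record carries its own total length, the same loop also answers the "go till the end of the buffer" question: keep advancing `offset` by `len` until the buffer is exhausted.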
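[Editor's note] Bob's per-record loop can also be collapsed into a single struct call: a repeat count in the format string (e.g. '100000q') unpacks the whole buffer at once, which avoids 100,000 separate unpack() calls. A minimal sketch, using an in-memory buffer as a stand-in for the log file:

```python
import struct

ctrs = 100000
fmt = '<%dq' % ctrs                     # e.g. '<100000q': ctrs little-endian int64s

# stand-in for reading ravi.dat: pack ctrs known counter values
data = struct.pack(fmt, *range(ctrs))

# one call unpacks every counter into a tuple
values = struct.unpack(fmt, data)
# values[0] is 0, values[-1] is 99999, len(values) is 100000
```

The tuple can then be indexed directly for any requested counter, so the file only has to be parsed once per read, not once per counter.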
_______________________________________________
Tutor maillist - Tutor@python.org
http://mail.python.org/mailman/listinfo/tutor