> -----Original Message-----
> From: [email protected] [mailto:mspgcc-users-
> [email protected]] On Behalf Of Vitaly Luban
> Sent: Sunday, 6 April 2008 5:36 AM
> To: GCC for MSP430 - http://mspgcc.sf.net
> Subject: Re: [Mspgcc-users] Freecale (Re: msp430.dll)
>
> N. Coesel wrote:
> > At 20:29 04-04-08 -0700, Vitaly Luban wrote:
> >
> >> N. Coesel wrote:
> >>
> >>> If you write you code portable, you can use and CPU you want.
> >>>
> >> That is not true. First of all, certain "features" of the mspgcc make it
> >> difficult to use even highly portable code on MSP w/o additional
> >> sizable effort.
> >>
> >
> > In that case you are doing something very wrong.
>
> Really? Most of my apps one way or another are about wireless. Packed
> structs are a must. How mspgcc handles them comparing to other C compilers?
>
> > I'm using the same code
> > (even the standard library from MSPGCC) on several ARM devices, MSP430
> > and Renesas devices without any pain. Just hit compile and go (ofcourse
> > the low level hardware drivers and startup-code are different).
> >
>
> Drivers are about 90% of all my development as of now. The rest is
> stable for a long-long time.
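On the packed-structs question above: one way to sidestep the differences between mspgcc and other compilers is to keep packed structs off the wire entirely. Here is a minimal sketch of what I mean, assuming a made-up 5-byte radio header (the `pkt_hdr_t` name and layout are invented for the example). Explicit byte marshalling behaves identically under mspgcc, an ARM gcc, or a PC compiler, with no dependence on `__attribute__((packed))`, padding rules, or unaligned-access support.

```c
#include <stdint.h>

/* Hypothetical wireless header: 2-byte sync word, 1-byte length,
   2-byte CRC. Rather than overlaying a packed struct on the radio
   buffer, the fields are marshalled byte by byte (little-endian on
   the wire), which is portable to any CPU and any compiler. */
typedef struct {
    uint16_t sync;
    uint8_t  len;
    uint16_t crc;
} pkt_hdr_t;

/* Serialize the header into its 5-byte wire image. */
void pkt_hdr_pack(const pkt_hdr_t *h, uint8_t out[5])
{
    out[0] = (uint8_t)(h->sync & 0xFF);
    out[1] = (uint8_t)(h->sync >> 8);
    out[2] = h->len;
    out[3] = (uint8_t)(h->crc & 0xFF);
    out[4] = (uint8_t)(h->crc >> 8);
}

/* Parse a 5-byte wire image back into the struct. */
void pkt_hdr_unpack(const uint8_t in[5], pkt_hdr_t *h)
{
    h->sync = (uint16_t)(in[0] | ((uint16_t)in[1] << 8));
    h->len  = in[2];
    h->crc  = (uint16_t)(in[3] | ((uint16_t)in[4] << 8));
}
```

The pack/unpack pair costs a handful of instructions per field, and in exchange the struct itself can have whatever natural alignment the target prefers.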
My experience has almost always been the reverse: the time taken to write drivers is normally minor compared to the application code itself, and I reuse drivers as much as possible. If your application code is so intimately tied to your driver that you must change the driver code to obtain additional functionality at the application level, that would suggest you're using a bad programming practice.

> >>> Besides,
> >>> debugging on an embedded device is not the fastest way to debug.
> >>>
> >> This is also not exactly true. While it is possible to do the "proof of
> >> concept" this way, real embedded code on a production board will have
> >> totally different timing profile, so any possible timing bug /race will
> >> possible be unnoticed in a PC implementation, Then,
> >>
> >
> > That is an often heard excuse. But in practise very few processes are
> > real time. Race conditions can be eliminated by carefull design.
> >
>
> Few? Well, "real" realtime, when someone die or something explode if
> controller does not react in time, maybe not. But performance critical -
> definitely. My approach beats yours hands down, earlier this year I had
> a pleasure to see jaws dropped at some very large japanese company when
> my design had shown double performance against "theoretical limit"
> predicted by "experts" from one large american company. So much for your
> "carefull design" allegedly eliminating races...

I'm not sure of the merit of your argument here. You say that your approach beats N. Coesel's, then go on to describe a situation where your code clearly lacked sufficient pre-commissioning testing, endangered the machinery it controlled, and produced what sounds like a very hazardous situation for everyone around that machinery. I would say from this example that your code lacked any "careful design", had an insufficient level of stability simulation, and seriously lacked structured testing.
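To make the driver/application split concrete, here is a minimal sketch of the decoupling I mean, using a hypothetical UART interface (all names are invented for the example; this is not any particular project's API). The application layer talks only to a small table of function pointers, so the same application source links unchanged against an MSP430 driver, an ARM driver, or a PC fake used for unit testing.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical UART driver interface. Each target (MSP430, ARM, or a
   PC simulation) supplies its own function pointer and context; the
   application code never changes per target. */
typedef struct {
    int  (*putc)(void *ctx, uint8_t c);  /* 0 on success, -1 on error */
    void *ctx;
} uart_drv_t;

/* Application-level helper written purely against the interface.
   Returns the number of bytes sent, or -1 on driver error. */
int uart_puts(const uart_drv_t *drv, const char *s)
{
    int n = 0;
    while (*s) {
        if (drv->putc(drv->ctx, (uint8_t)*s++) != 0)
            return -1;
        n++;
    }
    return n;
}

/* A PC-side "driver" that records bytes into a buffer, so the
   application layer can be unit tested off-target. */
typedef struct { uint8_t buf[64]; size_t len; } fake_uart_t;

static int fake_putc(void *ctx, uint8_t c)
{
    fake_uart_t *f = (fake_uart_t *)ctx;
    if (f->len >= sizeof f->buf) return -1;
    f->buf[f->len++] = c;
    return 0;
}
```

The point of the design is that adding application functionality never requires touching driver code; you only touch a driver when the hardware underneath it changes.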
> >> peripherals, like a/d and d/a converters etc. one can use on a PC
> >> will have different charaqcteristics like linearity, error magins,
> >> noise... That makes it impossible to do real world debugging using
> >> simulation.
> >>
> >
> > I'd like to turn that around. On a PC you can simulate the extends of
> > noise, non linearities, etc, etc to proof software works even if the
> > analog parts are on the edge of their specs.
>
> BS. While possible in theory, the amount of effort needed to create a
> precise enough simulation makes it nonfeasible in reality.

Does your argument extend to testing? Since it's impossible to test every possible situation, do you consider testing non-feasible in reality too, and hence do none? I don't believe it takes much effort to create a basic simulation of something. With Windows Forms (or the GTK toolkit under Linux, etc.) setting up a very basic simulation harness is generally quite easy. You should of course be writing code as modularly as possible, so testing each code module independently should really be quite simple.

> > And ofcourse it takes some experience
> > to know what kind of constructs work or won't work on an embedded
> > device.
> >
>
> All constructs work equally everywhere. Well, as long, as compiler is
> not buggy. :)
> It's cost/performance difference that matters. And it's the cost that
> makes some of them nonfeasible.
>
> > Being able to maintain software is the key word here. Rewriting or
> > maintaining different versions for different platforms is a real pain
> > in the ass (and can kill a company because they got trapped by a
> > solution). Just a few weeks ago I wrote a set of routines that will
> > work on microcontrollers (like ARM, MSP430), Windows and Linux. This
> > means a bug can be fixed or functionality added for all platforms in
> > one go.
> > Because the routines where designed with the microcontroller
> > environment in mind they have a very low memory footprint if they
> > have to. Using GCC as a compiler is a key factor here.
> >
>
> Been there, heard that. Top level control app - maybe. Device and
> application performance critical code, that's ~90% of any controller
> application - never.
>
> >>> Just write
> >>> for Windows/Linux first and then port your code to the embedded
> >>> device.
> >>>
>
> And you'll end up with something that hardly fits into flash and subpar
> on performance at best.

As long as memory constraints are considered when writing the application code, and the driver model isn't overly heavy when implemented on the embedded controller, I fail to see how you would end up with something that "hardly fits into flash and subpar on performance at best". If the code you write for the PC environment is well considered, it should be identical (excluding driver and startup code) to what runs on the embedded controller. If the two differ significantly, that is an issue with your original PC code and is in no way indicative of a problem with the test model itself; it is simply an indication that your 'programming genius' is not as unquestionable as you originally considered it.

> >> Have you done any real project so far? Real device that's produced
> >> in enough quantity to force you to shave fractions of cents from the
> >> design as every extra 0.1 cent in the BOM means some $100,000.00
> >> loss for you? Try it and you'll see for yourself that you're wrong
> >> here.
> >>
> >
> > If you have to shave off such small amounts, then the exchange rate
> > of de dollar, oil prices, salary increases, etc have more influence
> > on your product price than adding or removing a resistor from your
> > design. And yes I have been doing real projects for the past 15
> > years.
> >
>
> 15 years?
> Hard to believe looking how you pile up factors that are completely
> irrelevant to one another into one heap... I wonder if you use the same
> approach in your designs... :)

Given the example you presented earlier, I would be much happier accepting life-critical code from N. Coesel than from you. I myself have never had to design something in which every cent counted, but I have had some experience with companies where a single cent made a difference of $100k in profit on a particular product. When you're dealing with tight financial constraints, the importance of device independence is even greater: you simply don't commit to the microcontroller until the last moment, because the microcontroller may well change next week (to save 10c in production).
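Coming back to the simulation point earlier in the thread, injecting noise into a PC build really is cheap. Here is a sketch, with an invented `adc_read_sim` function and a deliberately crude noise model; on target, the equivalent call would be backed by the real ADC peripheral, and the filtering code above it would not know the difference.

```c
#include <stdint.h>

/* PC-side stand-in for an embedded ADC read. A tiny deterministic
   PRNG (xorshift32) adds bounded noise to an "ideal" 12-bit reading,
   so the filtering/averaging code can be exercised against worst-case
   analog behaviour long before hardware exists. The function name and
   noise model are invented for illustration. */

static uint32_t prng_state = 0x12345678u;

static uint32_t prng_next(void)
{
    uint32_t x = prng_state;
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    return prng_state = x;
}

/* Simulated 12-bit ADC: 'ideal' reading plus noise in [-noise, +noise]
   counts, clamped to the converter's 0..4095 range. */
uint16_t adc_read_sim(uint16_t ideal, uint16_t noise)
{
    int32_t v = (int32_t)ideal
              + (int32_t)(prng_next() % (2u * noise + 1u))
              - (int32_t)noise;
    if (v < 0)    v = 0;
    if (v > 4095) v = 4095;
    return (uint16_t)v;
}
```

Widen the `noise` argument to the converter's worst-case datasheet figure and you are testing the software against analog parts "on the edge of their specs", exactly as N. Coesel described, for a few dozen lines of effort.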
