On Mon, 16 Jan 2012 12:03:52 +1000, Da Rock wrote:
> On 01/14/12 22:06, Polytropon wrote:
> > On Sat, 14 Jan 2012 20:37:14 +1000, Da Rock wrote:
> >> On 01/14/12 19:54, Robert Bonomi wrote:
> >>>>    From owner-freebsd-questi...@freebsd.org  Sat Jan 14 02:32:15 2012
> >>>> Date: Sat, 14 Jan 2012 09:28:21 +0100
> >>>> From: Polytropon<free...@edvax.de>
> >>>> To: Robert Bonomi<bon...@mail.r-bonomi.com>
> >>>> Cc: freebsd-questions@freebsd.org
> >>>> Subject: Re: access(FULLPATH, xxx);
> >>>>
> >>>> On Sat, 14 Jan 2012 02:00:12 -0600 (CST), Robert Bonomi wrote:
> >>>>> To repeat some advice from one of my Computer Science professors, many 
> >>>>> years
> >>>>> ago, whenever I asked 'how does it work' questions: "Try it and find 
> >>>>> out."
> >>>> I bet my professor can beat up your professor. :-)
> >>>>
> >>>> Mine used to say several times: "Trial and error is NOT
> >>>> a programming concept!"
> >>> As far as writing applications goes, that is _somewhat_ correct.
> >>>
> >>> However, 'trial and error' is _not_ the same thing as 'try it and find 
> >>> out'.
> >>> See the entire subject area of 'benchmarking'.
> >>>
> >>> And,  the only way to definitively establish if an alternate approach is
> >>> 'better' -- i.e. 'faster', or 'smaller', or 'more efficient', etc. -- *IS*
> >>> to run a trial.
> >>>
> >>> Your professor undoubtedly would not have approved when I wrote bubble-sort
> >>> code that _out-performed_ any other sorting technique -- up to the limits
> >>> of memory.  Or when I re-wrote an application that used binary searches
> >>> of records, with a new version that used a brute-force linear search.  I
> >>> thought I could 'do it better/faster' than the existing code, but the only
> >>> way to "definitively" find out was to 'try it'.  And the 'trial' proved
> >>> out -- the replacement code was 'merely' somewhat over 100 times faster.
> >>> *grin*
> >> Ha! Love it... :D
> > Me too - except that I didn't want to show that
> > "typical attitude". In fact, I tried to make a
> > (kind of humorous) statement about a habit that
> > I could observe in many students when I was at
> > university.
> >
> > Background:
> >
> > When you write source code, you can make errors.
> > The compiler shows errors. Some students started
> > with "trial & error" just to silence the compiler.
> > One form was that all functional parts of the
> > program were enclosed in /* and */ (it was a
> > C class) - no errors, but no action. A different
> > approach was to arbitrarily (!) change the source
> > code, something like this:
> >
> >     void *foo(int blah, void *meow())(int ouch);
> >
> > Hmmm... gives me segfaults. Maybe something's
> > wrong with the pointers?
> >
> >     void *foo(int blah, void **meow())(int ouch);
> >
> > Not much better, segfaults too. How about that?
> >
> >     void *foo(int blah, void meow())(int *ouch);
> >
> > Well... also not better. I've heard about parentheses,
> > maybe those can help?
> >
> >     void *foo(int blah), void *meow)(int ouch);
> >
> > Shit, doesn't even compile anymore! Uhm... _what_ did
> > I change? Oh wait, I know:
> >
> >     void *foo(int blah, (void *)meow())(int ouch);
> >
> > Just produces garbage, then segfaults... what could I
> > change next?
> >
> > I think you get the idea.
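
For the record - I can only guess what the poor student was
actually trying to declare, but something along the lines of
"a function taking an int and a callback, returning another
function pointer" can be written down so that it compiles.
With a typedef it even stays readable (the name "callback"
is made up, of course; foo, blah and meow are from the
example above):

    /* a pointer to a function taking an int and returning a void pointer */
    typedef void *(*callback)(int);

    /* foo takes an int and such a callback, and returns another one */
    callback foo(int blah, callback meow);

    /* the same declaration without the typedef - probably the
       monster the students were actually fighting with: */
    void *(*foo(int blah, void *(*meow)(int)))(int ouch);
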
> >
> > Other students could not understand that even if a
> > program compiles without any errors, it may still
> > not do what they intended it to do. They seemed to
> > believe in some kind of magical "semantic compiler":
> >
> >     int x, y, sum;
> >     x = 100;
> >     y = 250;
> >     sum = x - y;
> >
> > They expected the compiler to notice what's wrong here
> > if you consider the _meaning_ of the identifiers. It's
> > not that obvious if you use x, y, and z. :-)
> >
> >
> >
> >>> As far as 'doing it once' for the purpose of answering a 'how does it 
> >>> work'
> >>> question -- where one has _not_ read the documentation, *OR* the existing
> >>> documentation is _not clear_, then simple experimentation -- to get *the*
> >>> authoritative answer -- is entirely justified.
> >>>
> >>> When I got the 'try it and find out' advice, I was asking questions about
> >>> situations where the language _specification_ was unclear -- there were
> >>> two 'reasonable interpretations' of what the language in the specification
> >>> said, and I just wanted to know which one was the proper interpretation.
> >>>
> >>> Now, given that the language in the specification _was_ ambiguous and both
> >>> interpretations were reasonable, different compiler builders could have
> >>> implemented it differently, and 'try it and find out' was _necessary_ to
> >>> establish what that particular implementation did. <grin>
> >> There appear to be two schools of thought on this subject: a classic case
> >> of the "old" vs the "new", in this case "punchcards/slow compilers" vs
> >> "gcc/all-in-one compile, link and go" of today's tech. I saw a similar
> >> conversation about 5 years ago on the Linux lists... :)
> > I didn't want to complain about using a test case
> > with well-determined variables (relative path vs. absolute
> > path) to see if the interpretation of "man 2 access"
> > matched the actual inner workings of the function
> > in use. In fact, I would even call this the _preferred_
> > method to be sure.
> >
> >
> >
> >> In the light of this conversation and given today's tech I'd say give it
> >> a shot unless you think something could break (as in fatal to service
> >> quality in production/hardware).
> > Fully agree. Know your variables and construct a
> > test within a fixed environment. The result will
> > be a valid basis for conclusions.
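
Such a test case can be really small. Just a sketch, of
course - the path and the access mode are placeholders:

    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* substitute the FULLPATH and mode in question */
        const char *path = "/etc/rc.conf";

        if (access(path, R_OK) == 0)
            printf("%s: readable\n", path);
        else
            perror(path);

        return 0;
    }

Run it once with an absolute path and once with a relative
one, and compare the results with what "man 2 access" made
you expect.
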
> >
> > Now back to "trial & error": what if I use
> > brackets instead?
> >
> >     void *foo(int blah, void *meow[])(int ouch);
> >
> > Hmmm... :-)
> I think the problem these days is a combination of many things.
> 
> Firstly, in the old days (I sound like grandpa... :/ ) punch cards were 
> hard to do, time consuming, and machine time was very expensive. So 
> programmers had to get it right the first time (or close to it), and 
> documentation was paramount.

Old man want history? Read this! :-)
http://www.columbia.edu/cu/computinghistory/fisk.pdf

In ye olden tymes, you could measure IT efficiency (even
though the term was probably quite different) in megawatts
per square foot, or even $ per square foot. This way of
measuring "expensive machinery" (in terms of operating it)
has become relevant again in our modern times. And
documentation... well, that depends. However, learning
resources for efficiently _using_ what's available have
become much easier to obtain today, primarily because of
the WWW. As failing to program properly does not turn into
accumulating costs ("charged per CPU time") right away, you
luckily don't have to pay that much attention when you
perform "learning by doing", which in my opinion is the
_only_ way to learn "IT stuff" that works.



> Secondly, in the early years the internet wasn't exactly up and running 
> (as such), and so global programming teams weren't a problem with 
> language differences (and people were taught far better English and 
> speling- whoops spelling :) none of this and other shortenings; 
> ambiguity kept to a minimum).

The ability to use the English language was necessary
in the earlier days, especially when 8-bit microcomputers
became available nearly everywhere. Internationalization
and localization weren't done. CP/M messages and BASIC
keywords were all English. Whole programs such as WordStar
were used in their original (English) language by people
speaking a different language (e.g. German), who were still
able to produce excellent work. Looking back at those times,
I think the language barrier is much more present in
today's society than it was in the past. But maybe
that's just my individual observation here in Germany. :-)



> Thirdly, when things did become easier (gcc era?) the documentation 
> slipped, and programmers started getting more sloppy, as the mistakes 
> were easily fixed.

Compile modern software with -Wall and see the results
of "more sloppy". :-)



> The docs became more ambiguous, and language did 
> start slipping (globally- not just in computing).

In the past, those who provided software typically
also provided documentation. And those who provided
hardware also did. Today, documentation is typically
left to others, to the users, the communities, and
it is scattered across the web - wikis, web forums,
individual pages. The ability to use a search engine
has become mandatory. Software engineering strategies
that emphasize the _fast_ production of software seem
to judge documentation as optional, consuming resources
that could be spent better - and why not? When the
documentation is complete, the product it belongs to
has already been obsoleted and withdrawn.



> Fourthly, globalisation occurred, the internet was up and running on a global 
> scale, international teams were working on programs, and people were 
> attempting to translate Japanese manuals into English (if you catch my 
> drift... :) I used to be a Xeroid and this was a standing joke). So not 
> all docs were as clear anymore.

It's worth noting that the _means_ of documentation
production have vastly improved (authoring systems,
text processors, use of graphics and so on), while
the quality of documentation produced that way has
not always kept up.

Insert headphone into headphone car jack, mapped
the pipe long like the shape B. :-)



> Lastly, we have the travesty of a lack of discipline in skills. Near 
> enough's good enough, and so on. No one is taking the time anymore to 
> become "skilled" - they want it now or never. Take a 6 week course and 
> become an expert. The masters and gurus are becoming few and far between 
> now (although there appears to be a nice concentration here- that's why I 
> stick around. Linux lists seem to have the cranky ones :) ). And so we 
> have the case as you have outlined, Poly. That said, the docs are getting 
> to be of not much help either, unless you're partly clairvoyant, in 
> more cases than there should be.

A big step toward becoming a "skilled master" isn't
just bare knowledge, it's experience. And this requires
time. Nobody is willing to spend time in order to gain
experience. Knowledge... well, you can easily obtain
knowledge today by "only" knowing how to properly search
for (and _find_) it. And for sure, you need to know how
to interpret the knowledge you find. But without experience,
what is knowledge worth? Knowledge without application
is ballast. On the other hand, knowledge is needed in
order to understand what's going on - especially in cases
where you're _supposed_ to know it. And by _using_ that
knowledge, you gain experience. In my opinion there is
no other way to gain it.

People make mistakes. And that's no problem, as you can
learn from mistakes. Of course, you cannot make _all_ the
possible mistakes yourself, so when you can, learn from
others' mistakes. But for a real learning experience,
always make your own. No one is born a master.



> Myself, I believe that one needs to read the docs thoroughly and then, if 
> it is ambiguous, run a test case; if all else fails: ask.

Exactly my suggestion.



> But one 
> needs to be as exact as possible when doing anything.

That's what you learn in science theory 101: Determine
your variables as strictly as possible. Change _one_ thing
at a time, so you can draw conclusions by observing your
results (which have changed, _if_ they have changed).
Formulate your algorithm to "answer a question" as precisely
as possible; therefore: know your question.



> "Any job worth 
> doing is worth doing properly", and "god/devil is in the details" - I 
> say "God _and_ the devil are in the details": if you don't pay attention 
> to the details the devil _will_ make sure it bites you in the ass!

Details always matter. On a small scale, when you write a
C program and miss a *, the whole program can do something
totally different, or may not even compile anymore. On a
large scale, if you deal, for example, with database
requests, be sure to do them _properly_ to get the results
you want. Only the correct results are the results you're
interested in - otherwise you could just as well query
/dev/random, without the need for a database. :-)
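
To stay with the small scale - a made-up example of how much
a single * matters:

    #include <stdio.h>

    static void increment(int *value)
    {
        *value += 1;   /* with the *: the caller's variable changes     */
        /* value += 1;    without the *: only the local pointer moves,
                          the caller's variable silently stays the same */
    }

    int
    main(void)
    {
        int counter = 0;

        increment(&counter);
        printf("%d\n", counter);   /* prints 1 - with the * missing: 0 */
        return 0;
    }
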

This little thing hasn't changed in the more than 50 years
that computers have been around. Many things have changed -
but details _still_ matter. Die, history, die!!! :-)



> Its a crazy world, though, isn't it? :)

It may belong to Arthur Brown. :-)




-- 
Polytropon
Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...