On 01/16/12 16:19, Polytropon wrote:
On Mon, 16 Jan 2012 12:03:52 +1000, Da Rock wrote:
On 01/14/12 22:06, Polytropon wrote:
On Sat, 14 Jan 2012 20:37:14 +1000, Da Rock wrote:
On 01/14/12 19:54, Robert Bonomi wrote:
From owner-freebsd-questi...@freebsd.org Sat Jan 14 02:32:15 2012
Date: Sat, 14 Jan 2012 09:28:21 +0100
From: Polytropon<free...@edvax.de>
To: Robert Bonomi<bon...@mail.r-bonomi.com>
Cc: freebsd-questions@freebsd.org
Subject: Re: access(FULLPATH, xxx);
On Sat, 14 Jan 2012 02:00:12 -0600 (CST), Robert Bonomi wrote:
To repeat some advice from one of my Computer Science professors, many years
ago, whenever I asked 'how does it work' questions: "Try it and find out."
I bet my professor can beat up your professor. :-)
Mine used to say several times: "Trial and error is NOT
a programming concept!"
As far as writing applications goes, that is _somewhat_ correct.
However, 'trial and error' is _not_ the same thing as 'try it and find out'.
See the entire subject area of 'benchmarking'.
And, the only way to definitively establish if an alternate approach is
'better' -- i.e. 'faster', or 'smaller', or 'more efficient', etc. -- *IS*
to run a trial.
Your professor undoubtedly would not have approved when I wrote bubble-sort
code that _out-performed_ any other sorting technique -- up to the limits
of memory. Or when I re-wrote an application that used binary searches
of records, with a new version that used a brute-force linear search. I
thought I could 'do it better/faster' than the existing code, but the only
way to "definitively" find out was to 'try it'. And the 'trial' proved
out -- the replacement code was 'merely' somewhat over 100 times faster.
*grin*
Ha! Love it... :D
Me too - except that I didn't want to show that
"typical attitude". In fact, I tried to make a
(kind of humorous) statement about a habit that
I observed in many students when I was at
university.
Background:
When you write source code, you can make errors.
The compiler shows errors. Some students started
with "trial & error" just to silence the compiler.
One form was that all functional parts of the
program were enclosed in /* and */ (it was a
C class) - no errors, but no action. A different
approach was to arbitrarily (!) change the source
code, something like this:
void *foo(int blah, void *meow())(int ouch);
Hmmm... gives me segfaults. Maybe something's
wrong with the pointers?
void *foo(int blah, void **meow())(int ouch);
Not much better, segfaults too. How about that?
void *foo(int blah, void meow())(int *ouch);
Well... also no better. I've heard about parentheses,
maybe those can help?
void *foo(int blah), void *meow)(int ouch);
Shit, doesn't even compile anymore! Uhm... _what_ did
I change? Oh wait, I know:
void *foo(int blah, (void *)meow())(int ouch);
Just produces garbage, then segfaults... what could I
change next?
I think you get the idea.
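(For what it's worth, a declaration in that spirit can be written down
without guessing. A minimal sketch, assuming the goal was a function that
takes an int and a callback and returns a pointer to a function taking an
int and returning void * - the names from above are reused purely for
illustration:

/* handler: pointer to a function taking an int and returning void * */
typedef void *(*handler)(int ouch);

/* foo takes an int and a callback, and returns a handler */
handler foo(int blah, void *(*meow)(void));

A typedef for the function pointer type spares you the nested-parentheses
guesswork entirely.)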
Other students could not understand that even if a
program compiles without any errors, there _may_ be
the possibility that it doesn't do what they intended
it to do. They seemed to believe in some kind of
magical "semantic compiler":
int x, y, sum;
x = 100;
y = 250;
sum = x - y;
They expected the compiler to notice what's wrong here
if you consider the _meaning_ of the identifiers. It's
not that obvious if you use x, y, and z. :-)
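(The only thing that can know the _intended_ meaning is a check written by
the programmer. A minimal sketch, assuming the intent really was addition:

#include <assert.h>

int main(void)
{
    int x = 100;
    int y = 250;
    int sum = x - y;        /* the wrong operator from above */

    assert(sum == x + y);   /* compiles fine either way, but aborts
                               at run time and exposes the mistake */
    return 0;
}

No "semantic compiler" required - just a statement of what the result is
supposed to be.)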
As for 'doing it once' for the purpose of answering a 'how does it work'
question -- where one has _not_ read the documentation, *OR* the existing
documentation is _not_ clear -- simple experimentation, to get *the*
authoritative answer, is entirely justified.
When I got the 'try it and find out' advice, I was asking questions about
situations where the language _specification_ was unclear -- there were
two 'reasonable interpretations' of what the language in the specification
said, and I just wanted to know which one was the proper interpretation.
Now, given that the language in the specification _was_ ambiguous and both
interpretations were reasonable, different compiler builders could have
implemented it differently, and 'try it and find out' was _necessary_ to
establish what that particular implementation did. <grin>
There appear to be two schools of thought on this subject: a classic case
of the "old" vs the "new", in this case "punchcards/slow compilers" vs
the "gcc/all-in-one compile, link and go" of today's tech. I saw a similar
conversation about 5 years ago on the Linux lists... :)
I didn't want to complain about using a test case,
with well-defined variables (relative path vs. absolute
path) to see if the interpretation of "man 2 access"
was matching the actual inner workings of the function
in use. In fact, I would even judge this the _preferred_
method to be sure.
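A minimal sketch of such a test case (the file names are purely
illustrative; the only variable that changes between the two calls is the
form of the path):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* access(2) returns 0 if the file is accessible, -1 otherwise */
    printf("relative: %d\n", access("testfile", R_OK));
    printf("absolute: %d\n", access("/tmp/testfile", R_OK));
    return 0;
}

Run it from /tmp so both paths name the same file, and the comparison shows
how the path argument is actually interpreted.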
In the light of this conversation and given today's tech, I'd say give it
a shot unless you think something could break (as in fatal to service
quality in production/hardware).
Fully agree. Know your variables and construct a
test within a fixed environment. The result will
be a valid basis for conclusions.
Now back to "trial& error": what if I use
brackets instead?
void *foo(int blah, void *meow[])(int ouch);
Hmmm... :-)
I think the problem these days is a combination of many things.
Firstly, in the old days (I sound like grandpa... :/ ) punch cards were
hard to do, time-consuming, and machine time was very expensive. So
programmers had to get it right the first time (or close to it), and
documentation was paramount.
Old man want history? Read this! :-)
http://www.columbia.edu/cu/computinghistory/fisk.pdf
That was a good read - funny too. It's amazing to think how far it has all
come in such a short time... All that happened before I was even a
thought, and when my mum was a little one, yet it has always fascinated
me how things were done then.
If I had the space and money I'd love to get hold of some of that old stuff
to play with or just display. I picked up an old Amiga, a Tandy, an
Apple II, and a Commodore 64 for that purpose. I lost them through some
disasters, but I did like that kind of "antiquing".
In ye olden tymes, you could measure IT efficiency (even
though the term was probably quite different) in megawatts
per square foot, or even $ per square foot. This kind of
measuring "expensive machinery" (in terms of operating
them) has become present in our modern times again. And
documentation... well, that depends.
Amazing how history keeps repeating... :)
Also amazing how the old suddenly comes to the rescue again. Case in
point: NASA didn't take very good care of data collected during its
missions. For example, the Australians came to the rescue with footage of the
moon landings - NASA had recorded over it. That'd be like recording over your
wedding video, right? There mustn't be a woman in NASA's admin then... :)
But more importantly, what happened when they decided to go to the moon
again and they hunted around for necessary data to help them prepare?
They found that the data on, say, dust levels was kept on old tapes that
hadn't been transferred to a newer medium, and so was still kept by the
scientist in charge of it in his garage - with no means to retrieve the
data recorded. Now they're spending millions to restore a 40-50 year old
tape drive!
However, learning
resources for efficiently _using_ what's available
have become much easier to access today, primarily
because of the WWW. As failing to program properly does
not turn into accumulating costs ("charged per CPU time")
right away, you luckily don't have to pay that much
attention when you perform "learning by doing", which
in my opinion is the _only_ way to learn "IT stuff"
that works.
That was something that was mentioned in that article you posted. Things
really picked up speed when the cards were replaced by the new, more
"direct" methods. Now its takes only minutes. You really do wonder how
they managed to do it... They really deserve a lot of respect for
getting UNIX going and keeping it going.
Secondly, in the early years the internet wasn't exactly up and running
(as such), and so there were no global programming teams with
language-difference problems (and people were taught far better English and
speling - whoops, spelling :) - none of these and other shortenings;
ambiguity was kept to a minimum).
The ability to use the English language was necessary
in the earlier days, especially when 8-bit microcomputers
became available nearly everywhere. Internationalization
and localization weren't done. CP/M messages and BASIC
keywords were all English. Whole programs such as WordStar
were used in their original (English) language by people
speaking a different language (e.g. German), still being
able to produce excellent work. Looking back at those times,
I think the language barrier is much more strongly present in
today's society than it was in the past. But maybe
that's just my individual observation here in Germany. :-)
I suppose that's true, but to continue in that fashion would mean the
death of variety in languages and new dialects. It would also alienate
others. But, worst of all, to destroy the world with such a coarse,
illogical, bastardised language as English would be a complete travesty!
I'm just in the process of working out exactly how to teach reading,
writing, spelling and grammar to our kids (1-5), and we're just finding
out exactly how bad it really is. We knew, but we were horrified by the
extent of it.
Thirdly, when things did become easier (gcc era?) the documentation
slipped, and programmers started getting more sloppy, as the mistakes
were easily fixed.
Compile modern software with -Wall and see the results
of "more sloppy". :-)
Tell me about it! I see it when compiling ports, and I don't mind so much
if it's a difference of architectures or maybe a deprecation or two, but most
are just sloppy casts and the like. Surely that _has_ to affect the
running of the program, creating unexpected behavior? Again, that article
was funny - the error messages came back and he wondered why it
didn't fix itself; unfortunately these days it seems it does, and
everyone relies on the spell-checker... ergo we have documents _AND_
programs going out with speling erors ;)
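To pick one small, purely illustrative example of the kind of thing -Wall
flags (a format mismatch rather than a cast, but the same species of
sloppiness):

#include <stdio.h>

int main(void)
{
    long n = 1234567890L;
    printf("%d\n", n);   /* -Wall (via -Wformat) warns: %d expects an
                            int, but the argument is a long */
    return 0;
}

It builds, it usually even "works" on the developer's machine, and the
warning scrolls past unread.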
The docs became more ambiguous, and language did
start slipping (globally - not just in computing).
In the past, those who provided software typically
also provided documentation. And those who provided
hardware also did. Today, documentation is typically
left to others, to the users, the communities, and
it is scattered across the web, into Wikis, web forums,
individual pages. The ability to use a search engine
has become mandatory. Software engineering strategies
that emphasize the _fast_ production of software seem
to judge documentation as optional, consuming resources
that could be spent better - and why not? When the
documentation is complete, the product it belongs to
has already been obsoleted and withdrawn.
True again, but shouldn't the "core" principles of the program be
intact? Proper use of MVC should ensure that, so the user docs
should be fine, and as for the code itself, shouldn't the engineers be
better able to articulate what is going on? I looked at Java and its
documentation and it really was a clever design: surely other projects
should be able to do the same?
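As a purely illustrative sketch of the same idea in plain C - a structured
comment that a tool such as Doxygen can turn into browsable documentation,
much like Javadoc does for Java:

/**
 * Compute the sum of two integers.
 *
 * @param x  first operand
 * @param y  second operand
 * @return   x + y
 */
int add(int x, int y)
{
    return x + y;
}

The documentation lives next to the code, so it at least has a chance of
being kept current.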
Fourthly, globalisation occurred, the internet was up and running on a global
scale, international teams were working on programs, and people were
attempting to translate Japanese manuals into English (if you catch my
drift... :) I used to be a Xeroid and this was a standing joke). So not
all docs were as clear anymore.
It's worth noting that the _means_ of documentation
production have vastly improved (authoring systems,
text processors, use of graphics and so on), while
the quality of documentation produced that way has
not always kept pace.
Setzen Kopfphon in Kopfphon Wagenwinde ein, gemappt
die Pfeife lange wie die Form B. :-)
I'll have to get you to translate that for me - I have an idea, but I'd like
to know for sure. It's kind of like only getting half the joke atm :)
Lastly, we have the travesty of a lack of discipline in skills. Near
enough's good enough, and so on. No one is taking the time anymore to
become "skilled" - they want it now or never. Take a 6 week course and
become an expert. The masters and gurus are becoming few and far between
now (although there appears to be a nice concentration here- thats why I
stick around. Linux lists seem to have the cranky ones :) ). And so we
have the case as you have outlined Poly. That said the docs are getting
to be of not much help either unless you're partly clairvoyant too in
more cases than should be.
A big step toward becoming a "skilled master" isn't
just bare knowledge, it's experience. And this requires
time. Nobody is willing to spend time in order to get
experience. Knowledge... well, you can easily obtain
knowledge today by "only" knowing how to properly search
for (and _find_) it. And for sure, you need to know how
to interpret the knowledge you find. But without experience,
what is knowledge worth? Knowledge without application
is ballast. On the other hand, knowledge is needed in
order to understand what's going on - especially in cases
where you're _supposed_ to know it. And by _using_ that
knowledge, you gain experience. In my opinion there is
no other way to gain it.
I can speak from experience with exactly that. Hands-on is the only
way... those who can't do, teach, and those who can't teach become
university professors (at least here anyway) :D
People make mistakes. And that's no problem as you can
learn from mistakes. Of course, you cannot make _all_ the
mistakes possible, so when you can, learn from others'
mistakes. But for a learning experience, always make
your own mistakes. No one is born a master.
Myself, I believe that one needs to read the docs thoroughly, and then, if
they are ambiguous, run a test case; if all else fails: ask.
Exactly my suggestion.
But one
needs to be as exact as possible when doing anything.
That's what you learn in science theory 101: Determine
your variables as strictly as possible. Change _one_ thing
at a time, so you can draw conclusions by observing your results
(that have changed, _if_ they have changed). Formulate
your algorithm to "answer a question" as precisely as
possible; therefore: know your question.
I see many programs out there that _don't_ do exactly that. They don't
get the parameters right and/or couldn't be bothered to implement them
correctly. The ones where the developers do "know" work far better. See
BIND and DHCP.
"Any job worth
doing is worth doing properly", and "god/devil is in the details" - Is
say "God _and_ the devil is in the details": if you don't pay attention
to the details the devil _will_ make sure it bites you in the ass!
Details always matter. On a small scale, when you write a
C program and miss a *, the whole program can do something
totally different, or doesn't even compile anymore. On a
large scale, if you deal, for example, with database
requests, be sure to do them _properly_ to get the results
you want. Only the correct results are the results you're
interested in - or you could be querying /dev/random instead,
without needing a database. :-)
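A tiny sketch of the "missing *" case, purely for illustration:

#include <stdio.h>

int main(void)
{
    int x = 10;
    int *p = &x;

    *p += 1;    /* increments x through the pointer */
    /* p += 1;     without the *, this would still compile, but it
                   would move the pointer instead of changing x */

    printf("%d\n", x);  /* prints 11 */
    return 0;
}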
This little thing hasn't changed in the 50+ years that
computers have been around. Many things have changed - but
details _still_ matter. Die, history, die!!! :-)
It's a crazy world, though, isn't it? :)
It may belong to Arthur Brown. :-)
Yeah, it just might be...
_______________________________________________
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"