On 10/18/2013 12:11 PM, Miles Fidelman wrote:
Jerry Stuckle wrote:
On 10/17/2013 3:57 PM, Miles Fidelman wrote:
berenger.mo...@neutralite.org wrote:
<snip>

Do you know how the SQL database you're using works?

Sure do.  Don't you?


I know how the interface works.  Actually, I do know quite a bit about
the internals of how it works.  But do you know how it parses the SQL
statements?  Do you know how it searches indexes - or even decides
which (if any) indexes to use?  Do you know how it accesses data on the
disk?

Kinda have to, to install and configure it; you choose between engine
types (e.g., InnoDB vs. ISAM for MySQL).  And if you're doing any kind of
mapping, you'd better know about spatial extensions (PostGIS, Oracle
Spatial).  Then you get into triggers and stored procedures, which are
somewhat product-specific.  And that's before you get into things like
replication, transaction rollbacks, 3-phase commits, etc.
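
To make that concrete: you can actually ask the optimizer which index it
plans to use.  A minimal sketch with the MySQL C API - the table, the
query, and the connection details here are made-up examples, not anything
from a real schema:

#include <stdio.h>
#include <mysql/mysql.h>

int main(void) {
    MYSQL *conn = mysql_init(NULL);
    /* Hypothetical connection parameters - substitute your own. */
    if (!mysql_real_connect(conn, "localhost", "user", "pass",
                            "testdb", 0, NULL, 0)) {
        fprintf(stderr, "connect: %s\n", mysql_error(conn));
        return 1;
    }
    /* EXPLAIN reports which index (if any) the optimizer chose. */
    if (mysql_query(conn,
            "EXPLAIN SELECT * FROM orders WHERE customer_id = 42")) {
        fprintf(stderr, "query: %s\n", mysql_error(conn));
        mysql_close(conn);
        return 1;
    }
    MYSQL_RES *res = mysql_store_result(conn);
    unsigned int nfields = mysql_num_fields(res);
    MYSQL_ROW row;
    while ((row = mysql_fetch_row(res))) {
        /* Columns include table, type, possible_keys, key, rows. */
        for (unsigned int i = 0; i < nfields; i++)
            printf("%s%s", row[i] ? row[i] : "NULL",
                   i + 1 < nfields ? "\t" : "\n");
    }
    mysql_free_result(res);
    mysql_close(conn);
    return 0;
}

Compile with: cc explain.c $(mysql_config --cflags --libs)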


Which has nothing to do with how it works - just how you interface to it.

Umm, no.  ISAM and InnoDB are very much about internals.  So are the
spatial extensions - they add specific indexing and search functionality
- and determine how those operations are performed.

Sure.  But do you know HOW IT WORKS?  Obviously not.

Probably even more than that.  For a lot of applications, there's a
choice of protocols available; as well as coding schemes.  If you're
building a client-server application to run over a fiber network, you're
probably going to make different choices than if you're writing a mobile
app to run over a cellular data network.  There are applications where
you get a big win if you can run over IP multicast (multi-player
simulators, for example) - and if you can't, then you have to make some
hard choices about network topology and protocols (e.g., star network
vs. multicast overlay protocol).
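
For example, joining an IP multicast group in C is a single setsockopt
call; the group address and port below are arbitrary examples, not
anything standard:

#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);            /* arbitrary example port */
    if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* Join the group: every simulator instance that joins receives
       each state update once, instead of N separate unicast copies. */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.0.0.1"); /* example group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                   &mreq, sizeof(mreq)) < 0) {
        perror("setsockopt");
        return 1;
    }

    char buf[1500];
    ssize_t n = recv(sock, buf, sizeof(buf), 0); /* wait for one datagram */
    printf("received %zd bytes\n", n);
    return 0;
}

And if multicast isn't available end-to-end (across the public Internet
it usually isn't), that's exactly when the star-vs-overlay decision bites.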

If it's the same app running on fiber or cellular, you will need the
same information either way.  Why would you make different choices?

And what if the fiber network goes down and you have to use a hot spot
through a cellular network to make the application run?

But yes, if you're using different apps, you need different
interfaces.  But you also can't control the network topology from your
app; if it depends on a certain topology it will break as soon as that
topology changes.

If I'm running on fiber, and unmetered, I probably won't worry as much
about compression.  If I'm running on cellular - very much so.   And of
course I can control my network topology, at least in the large - I can
choose between a P2P architecture and a centralized one, for example.  I
can do replication and redundancy on the client side or on the server
side.  There are lots of system-level design choices that are completely
dependent on the network environment.
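
As a sketch of what that choice looks like in code: compressing a payload
with zlib before sending it, which is worth the CPU on a metered cellular
link and maybe not on unmetered fiber.  The payload is a stand-in; link
with -lz:

#include <stdio.h>
#include <zlib.h>

int main(void) {
    /* Stand-in for a real message payload. */
    const unsigned char msg[] =
        "payload payload payload payload payload payload payload";
    uLong src_len = sizeof(msg);
    Bytef out[256];
    uLongf out_len = sizeof(out);

    /* On cellular, spend CPU to shrink bytes on the wire;
       on fast unmetered fiber you might skip this entirely. */
    if (compress2(out, &out_len, msg, src_len, Z_BEST_COMPRESSION) != Z_OK) {
        fprintf(stderr, "compress failed\n");
        return 1;
    }
    printf("%lu bytes -> %lu bytes\n", src_len, out_len);
    return 0;
}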


No, you cannot control your network topology. All you can control is how you interface to the network. Once it leaves your machine (actually, once you turn control over to the OS to send/receive the data), it is completely out of your control. Hardware guys, for instance, can change the network at any time. And if you're on a TCP/IP network, even individual packets can take different routes.

And what happens when one day a contractor cuts your fiber? If you're dependent on it, your system is down. A good programmer doesn't depend on a physical configuration - and the system can continue as soon as a backup link is made - even if it's via a 56K modem on a phone line.




For now, I should say that knowing the basics of the internals allows
you to build more efficient software, but:

Floating-point numbers are another problem where understanding the
basics can help you understand things.  They are not precise (and, no,
I do not know exactly how they work; I only know the basics), and this
can give you bugs if you do not know that their values cannot be
considered as reliable as integers'.  (I am only speaking about
floating-point numbers, not fixed-point numbers or whatever they are
called.)
But, again, it is not *needed*: you can always have someone tell you to
do something and do it without understanding why.  You'll probably make
the same error again, or apply the trick you were told less effectively
the next time, but it will work.

And here we are not talking about simple efficiency, but about something
which can make an application completely unusable, with "random" errors.

As in the case when Intel shipped a few million chips that mis-performed
arithmetic operations in some very odd cases.
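
You don't even need a broken chip to see it.  A minimal C example of
the kind of "random" error in question, with the usual tolerance-based
fix:

#include <stdio.h>
#include <math.h>

int main(void) {
    double a = 0.1 + 0.2;

    /* Exact equality fails: 0.1 and 0.2 have no finite binary
       representation, so the sum comes out as 0.30000000000000004. */
    if (a == 0.3)
        printf("equal\n");
    else
        printf("not equal: a = %.17g\n", a);

    /* The usual fix: compare against a tolerance instead. */
    if (fabs(a - 0.3) < 1e-9)
        printf("equal within tolerance\n");
    return 0;
}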


Good programmers can write programs which are independent of the
hardware.

No.  They can't.  They can write programs that behave the same on
different hardware, but that requires one or more of:
a. a lot of care in testing for and adapting to different
hardware environments (hiding things from the user), and/or,
b. selecting a platform that does all of that for you, and/or,
c. a lot of attention to making sure that your build tools take care of
things for you (selecting the right version of libraries for the
hardware that you're installing on)


Somewhat misleading.  I am currently working on a project using
embedded Debian on an ARM processor (which has a completely different
configuration and instruction set from Intel machines). The
application collects data and generates graphs based on that data. For
speed reasons (ARM processors are relatively slow), I'm compiling and
testing on my Intel machine.  There is no hardware-dependent code in
it, and no processor- or system-dependent code.  But when I
cross-compile the same source code and load it on the ARM, it works
just the same as it does on my Intel system.
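
A small illustration of code with no processor dependence: serializing a
32-bit value by shifting rather than copying raw memory, so the byte
order on the wire is the same whether it's built for Intel or ARM.  The
helper name is made up for the example:

#include <stdint.h>
#include <stdio.h>

/* Emit a 32-bit value in a fixed (big-endian) byte order.  Shifting,
   rather than memcpy of the in-memory representation, means the same
   source produces the same output bytes on any processor. */
static void put_u32(uint8_t out[4], uint32_t v) {
    out[0] = (uint8_t)(v >> 24);
    out[1] = (uint8_t)(v >> 16);
    out[2] = (uint8_t)(v >> 8);
    out[3] = (uint8_t)v;
}

int main(void) {
    uint8_t buf[4];
    put_u32(buf, 0x12345678u);
    printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
    return 0;
}

Build it natively with gcc, or for the board with a cross toolchain such
as arm-linux-gnueabihf-gcc; the output is 12 34 56 78 either way.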

Now the device drivers I need are hardware-specific; I can compile
them on my Intel machine but not run them.  However, that's because my
Intel machine doesn't have the necessary hardware, not because of any
processor limitation.

I didn't say "processor dependent," I said "hardware dependent." Are not
device drivers and compile-time switches part of programming to you?


The processor isn't part of the hardware? And no, device drivers are NOT part of the application programming. There are also no compile-time switches involved. I can compile the same code with the same switches for Intel on an Intel machine, and on the ARM for an ARM machine.

It's just much slower to compile on the ARM.



Running cross-platform, and hiding the details from an end user are
things a good programmer SHOULD do (modulo things that SHOULD run
differently on different platforms - like mobile GUIs vs. desktop
GUIs).  But making something run cross-platform generally requires a
knowledge of the hardware of each platform the code will run on.


The only real difference between mobile and desktop GUIs is screen
size (in pixels).  But with all the various sizes out there (including
on desktops), a good program should always check the screen size
anyway, and adapt accordingly.
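
On an X11 desktop, for instance, that check is a couple of library calls
(link with -lX11):

#include <stdio.h>
#include <X11/Xlib.h>

int main(void) {
    /* Query the actual display at startup instead of assuming a size. */
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    int screen = DefaultScreen(dpy);
    int w = DisplayWidth(dpy, screen);
    int h = DisplayHeight(dpy, screen);
    printf("screen is %dx%d pixels\n", w, h);
    /* Layout decisions (one column vs. two, font scaling, etc.)
       would branch on w and h here. */
    XCloseDisplay(dpy);
    return 0;
}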

And load time, which is dependent on network characteristics (and to a
degree CPU power, though less so these days).


How many applications actually load off the network? Even in mobile, they load from the local system. And there, load time is governed by program size (including necessary libraries), available CPU cycles and disk access (and to a certain extent available memory). A fast desktop running 99% CPU will load a program more slowly than a cell phone doing nothing else.

However, that is immaterial, anyway, because application load time is outside of the application's control, with the exception of the size, of course.

Yes... a good program should check screen size and resolution, and
network characteristics, and adapt.  Progressive rendering is also a
good thing, particularly in a mobile environment.  As I said, writing
cross-platform code requires testing for hardware differences and
adapting to them - NOT writing one-size-fits-all code.


But the only hardware difference you need to check here is the screen size (well, maybe color depth, but who does that?). And that should be done for any graphical application anyway, due to the vast differences in screen sizes even on desktops and laptops.



But now, are most programmers paid by companies with hundreds of
programmers?

Really depends on the industry (and whether you actually mean
"developer" vs. "programmer").  But even in huge organizations, people
tend to work on small project teams (at least that's been my
experience).


It depends on the project.  I agree most people work in small teams -
but there are two reasons.  First of all, there are many, many small
projects and only a few large projects.  And even large projects are
typically broken up into more manageable pieces. I've worked on a
couple with 100+ programmers.  But they were divided into groups of
about 8-10 programmers and given a piece of the project.  That way not
everyone had to concern themselves with everything in a 200+ man-year
project.


Not at all.  When you work for a company, especially larger ones, you
use what you are given.  And many of those programmers are working on
mainframes.  Have you priced a mainframe lately? :)

Never seen a single one, to be exact :)

Yes to "you use what you're given" but as to what people are given:

I would expect that most are NOT working on mainframes - though where
the line is drawn these days can be argued.  A high-end modern laptop
probably packs more memory and horsepower than a 1980-era mainframe. And
some of the larger Sun servers certainly do.

I would expect a LOT more programmers are working on high-end SPARC
servers than mainframes.  Heck, even in the IBM world, I expect a lot
more code is written for blade servers than mainframes.


You'd be surprised.  There are still a huge number of programmers out
there working on mainframes (most large corporations still use them).
Sure, they typically are using PCs - but they're using them more as
terminals than as programming platforms.  All of the development,
compiling, etc. is done on the mainframe(s).

I know there are still people working on mainframes, and IBM still sells
them.  But there are a lot more server and desktop class machines out
there these days.

Sure there are. But each mainframe supports thousands of users; each server supports a few to maybe 100, and each desktop supports one.

There is a reason mainframes are still popular. They are still the cheapest way to support large numbers of users.


And, again, just a guess, but I'm guessing a huge percentage of
programmers these days are writing .NET code on vanilla Windows machines
(not that I like it, but it does seem to be a fact of life).  A lot of
people also seem to be writing stored SQL procedures to run on MS SQL.


Bad guess.  .NET is way down the list of popularity (much to
Microsoft's chagrin).  COBOL is still number 1; C/C++ still way
surpass .NET.  And MSSQL is well behind MySQL in the number of
installations (I think Oracle is still #1 with DB2 #2).

I didn't say MySQL, I said MS SQL (as in Microsoft SQL Server) - which
seems to be in every RFP I see these days.  Definitely Oracle and DB2
are up there - though again, I'd expect most of that Oracle is running
on Sun servers, not big iron.

I know you said MS SQL.  And I said it's way behind MySQL in installations.

And Oracle is quite popular on big iron.


I expect that there are NOT a lot of people writing production code to
run on Debian, except for use on internal servers.  When it comes to
writing Unix code for government or corporate environments, or for
products that run on Unix, the target is usually Solaris, AIX
(maybe), or Red Hat.


I would say not necessarily writing for Debian, but writing for Linux
in general is pretty popular, and getting more so.

Absolutely.  I'd expect most of that is targeted at Red Hat and to an
extent Ubuntu.


What's the difference between Ubuntu and Debian? Not much. And Red Hat isn't much different, either.

Jerry

