On Thu, 16 Mar 2000, Greg Hayes wrote:

> > But, regarding that, then is linuxconf suppose to be "the GUI tool"
> > then? If GUI is needed for admin, what tools do you trust enough to
> > justify the need GUI?
>
> There are multiple "GUI" administration tools besides linuxconf. YaST
> (from SuSE) and the tools provided with OpenLinux, etc. are also quite
> nice. The point is, there is no need to insist that X is NOT part of
> what should be considered the standard base, that idea in its entirety
> is an attempt to halt the evolution of Linux as an OS.
Ok, yes and no. GUI administration tools are a tangent topic, so to be constructive, I think we need to refocus on X itself. "There is no need to insist that X is NOT part of what should be considered the standard base" is fundamentally flawed, IMHO. Again, I say: X is not an essential part of Linux. That in itself is enough reason to say it should not be part of "a standard base."

The discussion has started to evolve into a conflict over defining the terms. What is "base" and what isn't? If "base" is defined to be the complete set of all standards when the LSB spec is finished, then sure, X should be in it. If "base" is what the LSB will define as the "minimum set required for compliance," then X should in no way be considered base (again, IMO).

So, to discuss it further, the term "base" must be defined. I would propose that the term "layer" be used, but only as a discussion term, not a defining term. The layers could be formally called "compliance levels" for the sake of defining LSB compliance. The "base" would then be "level one compliance," and the addition of X could be "level two compliance." (And a "level zero" could nicely fit as kernel+shell, a subset to make everyone happy.)

Again, this will have to be carefully addressed from the point of view of vocabulary, because "level two compliance" will only be part of a moving-target spec, which will likely also need to include "versions" as libraries and kernels update. If a new kernel (say X.X.35) in no way breaks backwards compatibility with an old kernel (say X.X.30), a distribution could easily ship a .35 kernel and still conform to the LSB version that specifies kernel X.X.30. This frees the LSB from releasing a new "version" of compliance with every change to the kernel. Similar logic should apply to libraries as well.

So, I will not deny that there are two separate things that must be accounted for in the "separation of variants" for LSB compliance. But they are logical separations, which help in the delegation of tasks and project management for the LSB, so they don't create new "problems." In fact, they make the structure of the LSB solid enough that it can continue to grow well into the future.

The issue then is how nicely the "levels" and "versions" can be packaged so that ISVs and distributions (as well as users and administrators) can understand and use them. Frankly, I don't see this as a big problem either: if you can grasp the concept of a video game, you can grasp this concept as well. Go to a new "level" and get more goodies; get a new version, and it will probably be better, more stable, and spiffier than the old one. (There is a rough sketch of how this might look at the end of this mail.)

> I think you must be referring to VERY small companies. Granted, I work
> in global operations for a Fortune 5 company, but I still believe that
> HD space is very affordable. It is impossible to even purchase a desktop
> with less than 10 G or so of HD now. With 20 G HD's around $300 US I
> seriously doubt that any company that sees even modest profitability
> would have trouble meeting this negligable ammount of storage space.

The price of a hard drive is not something that should be a factor. Tying the LSB to any hardware, no matter how you justify it, is not a good idea (IMHO). Too many hardware "what if" situations exist (static RAM drive technology, diskless systems, read-only systems). Hardware development is dependent on operating systems, and defining a Linux base that limits hardware development to systems with hard drives is just short-sighted.
(I am aware of several commercial products in development that do use ROM devices and Linux, but I'm not at liberty to discuss them, and I am quite sure those projects are not the only ones in the world, just the very few cases I am aware of.) I just can't buy into the "well, a hard drive is cheap, so the spec can be big" argument.

> Again, I must decent. Lets have a spec that exists for what is there
> now.

Sure, no reason not to. But "what is there now" is more than big, brand-new, shiny servers and workstations. And for those big, brand-new, shiny systems, there is no reason a "level two" spec shouldn't be perfect for them to comply with. Heck, build the layers on up to a "level ten" spec that all the 20G+ systems can fill their hard drives with, for all I care. I just think it is short-sighted and unjustifiably bloated to consider only the biggest, best, newest systems when defining Linux. Linux's strength (as a kernel) is its vast ability to scale and be configured. The LSB shouldn't come along and define a standard that ignores a fundamental strength of Linux itself.

> This seem to make more sense and be a lot less confusing than
> informing individuals that "just about anything" can be linux. As far as
> the base being inflexable, I believe that everyone of us that have
> contributed to its deffinition have made certain that it contains a
> reasonable ammount of flexability. The problem is, what happens if you
> make the spec too flexable? The answer is simple, no one has a good clue
> about their target and generally don't even bother with a port.

Not flexible as in "we're going to make it so that everyone is going to be happy with it." It should be scalable and logical, so that everyone can understand it and use it.
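To make the "levels" and "versions" idea a bit more concrete, here is a very rough sketch in Python. None of this is real LSB material; the level names, component lists, and kernel numbers are all made up for illustration, and a real spec would obviously pin down far more than this:

# A hypothetical sketch of "compliance levels" plus "versions".
# Nothing here is official; the components and numbers are invented.

# Each level is a superset of the one below it.
LEVELS = {
    0: ["kernel", "shell"],                              # level zero: kernel + shell
    1: ["kernel", "shell", "libc", "coreutils"],         # level one: the "base"
    2: ["kernel", "shell", "libc", "coreutils", "X11"],  # level two: base + X
}

# A spec version names a minimum kernel within a compatible series,
# e.g. 2.2.30 here (a made-up number).  Later revisions in the same
# series are assumed not to break backwards compatibility.
SPEC_MIN_KERNEL = (2, 2, 30)

def kernel_conforms(installed, spec_min=SPEC_MIN_KERNEL):
    # Same major.minor series, and at least the revision the spec names.
    return installed[:2] == spec_min[:2] and installed[2] >= spec_min[2]

def level_conforms(installed_components, level):
    # Every component the level requires must be present.
    return set(LEVELS[level]).issubset(installed_components)

# A distribution shipping kernel 2.2.35 without X:
print(kernel_conforms((2, 2, 35)))                                  # True: .35 still meets a .30 spec
print(level_conforms({"kernel", "shell", "libc", "coreutils"}, 1))  # True: level one
print(level_conforms({"kernel", "shell", "libc", "coreutils"}, 2))  # False: no X, so not level two

The point is only that "does this system conform?" reduces to two simple checks: is everything the level requires present, and does each versioned component stay within the compatible range the spec version names. A distribution or ISV shouldn't need more than that to know where they stand.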
