Re: Future of linux-image-grsec-* packages

2017-08-29 Thread Adrien CLERC
On 29/08/2017 at 14:51, Mario Castelán Castro wrote:
> I suggest you write to the maintainer of that Debian package.
Thanks for the suggestion. In the meantime, I found a bug report that I
had missed: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=867166

This gives me the exact answer I need.

Adrien



Re: Future of linux-image-grsec-* packages

2017-08-29 Thread Mario Castelán Castro
On 29/08/17 02:22, Adrien CLERC wrote:
> Hi,
> 
> Since grsecurity announced that it is moving to a completely non-free (as
> in beer) model (see https://grsecurity.net/passing_the_baton.php), I was
> wondering whether those packages have any future.
I suggest you write to the maintainer of that Debian package.

-- 
Do not eat animals, respect them as you respect people.
https://duckduckgo.com/?q=how+to+(become+OR+eat)+vegan



signature.asc
Description: OpenPGP digital signature


Future of linux-image-grsec-* packages

2017-08-29 Thread Adrien CLERC
Hi,

Since grsecurity announced that it is moving to a completely non-free (as
in beer) model (see https://grsecurity.net/passing_the_baton.php), I was
wondering whether those packages have any future.

I am really grateful to the maintainer for this work. It was a great job,
since it allowed me to switch to a hardened kernel without pain. However,
it seems that no upgrade will ever be available in Debian.

For now, it is not a real issue, as I can stick to the latest release
(4.9.18 as of
https://packages.debian.org/sid/linux-image-4.9.0-2-grsec-amd64). But if
there is more information on that, I'll be glad to hear it.

Please CC me, I am not subscribed on this Debian list.

Adrien



Re: Future of Linux Question

2004-01-24 Thread Paul Johnson

On Thu, Jan 22, 2004 at 04:04:50PM -0600, [EMAIL PROTECTED] wrote:
> Why doesn't someone develop a similar protocol to Microsoft's network 
> neighborhood and smb for Linux. 

Well, all SMB does is handle network file systems and network
printers.  Both were problems solved earlier and better by lpr and
nfs.  See also:  NetBOLLUX.

> So when you join a NIS like system that 
> it will automatically authenticate you  on your Linux network with your 
> currently logged in user name and password.

That's exactly what hosts.equiv and identd are for.  You list your
local network in hosts.equiv, and anything in hosts.equiv is assumed
to respond with a valid, authenticated user.  Keep this in mind,
because identd spoofing will leave you totally open to all hosts in
/etc/hosts.equiv.
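
The trust list described above lives in a plain text file; a minimal sketch
with hypothetical host names (every host listed is trusted wholesale, which
is exactly the identd-spoofing exposure mentioned):

```
# /etc/hosts.equiv -- hosts whose identd-reported users are trusted
# (hypothetical names; any host listed here can claim to be any user)
alpha.example.com
beta.example.com
```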

-- 
 .''`. Paul Johnson <[EMAIL PROTECTED]>
: :'  :
`. `'` proud Debian admin and user
  `-  Debian - when you have better things to do than fix a system


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED] 
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Future of Linux Question

2004-01-22 Thread Todd Pytel
On Thu, 22 Jan 2004 16:04:50 -0600
[EMAIL PROTECTED] wrote:

> Why doesn't someone develop a similar protocol to Microsoft's network 
> neighborhood and smb for Linux.  So when you join a NIS like system
> that it will automatically authenticate you  on your Linux network
> with your currently logged in user name and password.  This way people
> that are accustomed to using Microsoft networking could just migrate
> over with a similar path.  For users that are going to be desktop
> users they are going to rely on a gui front end with something like
> network neighborhood. Please let me know what you guy's think about
> this.

Well, as has been stated, LDAP and Kerberos provide a single-sign-on
environment for *nix networks, though they're quite complicated for
small outfits. As for lack of a "Network Neighborhood" GUI, I think that
part of the reason is a conflict with some deep Unix philosophies. Unix
grew up in the world of the big iron server and small, even dumb,
clients. Its networking systems are designed so that there are few
network resources, but those resources are integral to being on the
network at all (NFS-shared user /home's, for example). Windows was born
on desktops, and never really left them in philosophy. You need a
network browser for Windows because its natural environment is one with
many resources spread across relatively powerful desktops. Very
different roots, and thus rather different technologies built upon them.

-- 
Todd Pytel


Signature attached
PGP Key ID 77B1C00C




Re: Future of Linux Question

2004-01-22 Thread Thorsten Haude
Hi,

* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2004-01-22 23:04]:
>Why doesn't someone develop a similar protocol to Microsoft's network 
>neighborhood and smb for Linux.  So when you join a NIS like system that 
>it will automatically authenticate you  on your Linux network with your 
>currently logged in user name and password.  This way people that are 
>accustomed to using Microsoft networking could just migrate over with a 
>similar path.  For users that are going to be desktop users they are going 
>to rely on a gui front end with something like network neighborhood. 

For far too long, Microsoft has been telling people that they can have
both security and convenience in all things.


Thorsten
-- 
Sometimes it seems things go by too quickly. We are so busy watching out for
what's just ahead of us that we don't take the time to enjoy where we are.
- Calvin




Re: Future of Linux Question

2004-01-22 Thread Alex Malinovich
On Thu, Jan 22, 2004 at 04:04:50PM -0600, [EMAIL PROTECTED] wrote:
> Why doesn't someone develop a similar protocol to Microsoft's network 
> neighborhood and smb for Linux.  So when you join a NIS like system that 
> it will automatically authenticate you  on your Linux network with your 
> currently logged in user name and password.  This way people that are 
> accustomed to using Microsoft networking could just migrate over with a 
> similar path.  For users that are going to be desktop users they are going 
> to rely on a gui front end with something like network neighborhood. 
> Please let me know what you guy's think about this.

This is already pretty much possible when using LDAP. It's not quite
as integrated as Samba, but it is a lot more robust and extensible.
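
On a glibc system, the LDAP-backed account setup described here is
typically wired in through NSS (and PAM for authentication); a hedged
sketch, assuming the nss_ldap module is installed (the thread itself
names no specific packages):

```
# /etc/nsswitch.conf -- resolve users and groups locally first, then LDAP
# (directory server details live in nss_ldap's own config file)
passwd: files ldap
group:  files ldap
shadow: files ldap
```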

-- 
Alex Malinovich
Support Free Software, delete your Windows partition TODAY!
Encrypted mail preferred. You can get my public key from any of the
pgp.net keyservers. Key ID: A6D24837





Future of Linux Question

2004-01-22 Thread David . Grudek
Why doesn't someone develop a protocol similar to Microsoft's Network 
Neighborhood and SMB for Linux?  Then, when you join a NIS-like system, 
it would automatically authenticate you on your Linux network with your 
currently logged-in user name and password.  That way, people who are 
accustomed to Microsoft networking could migrate over along a similar 
path.  Desktop users are going to rely on a GUI front end, something like 
Network Neighborhood.  Please let me know what you guys think about this.





Re: Future of Linux (.so contracts)

1998-11-21 Thread Bruce Stephens
Davide Bolcioni <[EMAIL PROTECTED]> writes:

> The concept seems very interesting to me, although I wonder if it is
> within the scope of LSB; I had the notion that its effort was
> concerned with standardizing existing development approaches.

Probably it's not [it being TenDRA].  On the other hand, having
available tools for checking conformance with LSB would be valuable.

> On the other hand, however, the above approach as far as I can see
> does not address one of my major concerns, namely the fact that in a
> library function there is more than the signature.

Absolutely.  As a trivial example: what does malloc(0) return?  I
suppose my real point was this: if the open source culture were such
that it was normal to provide a reasonably abstract declaration of
library APIs, then people would probably want to specify their
semantics too.  (But the semantics would be informal, I suspect.
After all, how could it practically be otherwise?)

I'm not really suggesting that TenDRA provides anything especially
compelling.  It provides a syntax that is a bit more abstract than
ordinary C header files, and some nice tools for fiddling with these
files.


Re: The Future of Linux: 'real' Locale support from X libs or no?

1998-11-17 Thread Christopher Hassell
Okay, I just got a bit more info from our main locale-issues developer 
(Jon Trulson):

   The multi-byte functions *are* there in glibc.  He knows they are there.
   They are just not reliable or powerful enough to stick with in our
   new products (i.e. setlocale() apparently doesn't do all that's needed).

   All he wants is *those* (the ideal is UnixWare) and everything else 
   (related to catgets for example) he's willing to handle internally.

   I'll try to get more specifics nailed down and then take discussion to 
   another list if anything is left to discuss.

I suppose after this is solved, the questions become these:

1) deadkey ("compose") support ala Solaris, 

This is almost precisely tied to a "standardized" core xterm/cxterm,
outside of the code page support that exists for VTs.  The standard UNIX
environment is pretty much dependent on that face of a system.

2) noting the standard X fonts (like the API) for languages

3) determining widget support standards ala Motif/gtk.
4) determining WP/editor support standards ala Motif/gtk.

I'll attempt to look into gtk, myself, and see what may be needed.


Re: The Future of Linux: 'real' Locale support from X libs or no?

1998-11-17 Thread Christopher Hassell
At the risk of reviving a very quickly-quiet thread...  I've still an 
interest and have acquired some opinions around our software house.

On Thu, Nov 12, 1998 at 01:37:05PM +, Alan Cox wrote:
]>Glibc is good, but what about wide char, unicode etc.. etc.. etc.. ad biggum.

] Glibc does wide char, ncurses seems to imply it does (I've not 
] checked yet). 

Wide char is good for Motif and several other apps.  The main things I've
heard are:

1) We haven't tested against a widechar implementation which solves all
our problems.
2) *MOST* of what's needed has to do with Unicode, multibyte, or other
Asian fonts/encodings, ones outside standard ISO8859.

The general idea (I'm representing our CDE developer here) is that widechar
is an internalized format, one made for keeping some *specific* language
data encoded for internal ("strlen, strcmp" etc.) use.  It is *not*,
however, an actually international solution.

Now one can easily look at the widely international nature of Linux and see
that ISO8859 (ASCII+european-accented-roman+updowncases-of-3-new) has done
almost all the work.

Chinese, Japanese, Russian, Arabic, Israeli, and Indian people simply use
English, or a rather unhelpful bastardization of simple ASCII (some Chinese
append "1-4" to indicate the tonality of their roman-spelled syllables).
Even Greek isn't supported except with a font that replaces the ASCII chars.

The question is, therefore, whether work is being done to get a good
standardized mapping from any of the above sets into a narrower widechar.
Jon Trulson, the CDE man in our office, says that he only wants certain 
specific calls that allow quick manipulation of multibyte.  Even a 
reasonable mapping system may start to qualify Linux as an Earth OS. |->

] > toward.  Is there any interest in what we have thus far at Xi?

] Well I know the currnt KDE doesnt handle 16bit Glyphs, Im not sure about
] the Gtk toolkit on that.

No idea, myself.

] > (hint to some: code pages work only for vts)

] Depends on your Xterminal and fonts ;)

I will say this: maintaining an 8bit string system makes a bleeping lot of 
sense.  All that's needed is mapping at any user interface.  Presenting easy 
ways to 1) get the right glyphs to your display, 2) input horridly 
obscure ideograms via simple means, and 3) get them changed into Unicode or 
another highly-interchangeable sequence of bytes... would make developers
sigh with relief.

Now, again, X is the one environment which is really able to do 1) and 
handles 3) for us within our (recently obsoleted) "Xintl" library.  The 
ability to present all three would be very very attractive and could keep 
Linux as the "sensible" alternative instead of the 
syrupy-sweet-costly-kludge of MS.

] The kernel itself uses UTF8 for file names so you can reasonably keep
] a Klingon ext2fs if you wish.

8-o.  You are a sick man.  :-B.  That makes an interesting mental picture:
"Ka'plach% bash -xv MyEnemiesHead | ./meatGrinder > aStakeOfVictory &"
"Ka'plach% kill %1 "
"Ka'plach% kill -9 %1 "
"Ka'plach% kill -WithExtremeViolence %1 "
"Ka'plach% shutdown -r now 'I must crush this defiant process'"
"Ka'plach% @[EMAIL PROTECTED]()*$&@)( "
"Ka'plach% sync ; sync ; sync"

Okay... Oracle humor over.


Re: Future of Linux (.so contracts)

1998-11-17 Thread Davide Bolcioni
Christopher Hassell wrote:
> 
> On Thu, Nov 12, 1998 at 10:11:31AM +0100, Davide Bolcioni wrote:
> ]
> ] > ...
> ] > This is so that every app doesnt install their own version of python
> ] > "just in case". That could be extended to all interpreters and some
> ] > libraries probably and a farmed out approach would IMHO be good.
> 
> ] This is a notion of "software contract", if I understand correctly: a
> ] tool intended to be used by other programs (interpreter, library) needs
> ] a specification which should be adhered to in releases after the one I
> ] built my application against (CORBA or Eiffel experts might have more to
> ] say about this).
> 
> Also known as the "*.DLLs that you shouldn't have uninstall'ed!" problem.
> (If software has its own version of Common-Use stuff, should it overwrite?
> If it is un-installed, should it delete its own stuff?  Should it squirrel
> away a copy of the old stuff?  Should it ask you arcane questions like "Do
> you want gtk-1.5 or gtk-1.6.1?").  All of those go along with the "software
> contract" or a platform query.
 
> I am in favor of RPM as a common-use package system if maybe *ONLY* because
> it promotes a widely-accepted versioning scheme and even pretty reliable
> distributions.  Sending along "stock" rpms with your package would be a
> *very* nice and simple deal.  Relocatible packages would allow use of your own
> dirs if you must use an old/too-new version.

I wonder if the following approach would be acceptable: instead of having
the foo package depend on libbar, have it depend on a foo-provided
libfoobar, and provide two conflicting libfoobar packages, one which just
symlinks to the existing libbar and one which squirrels away its own copy
somewhere and symlinks there. Some care may be needed for filesystems which
are partitioned (e.g. I personally have /boot, /tmp, /usr and a couple of
others all on different partitions, some with md, and have experienced
minor glitches with some packages, especially at boot time).

> Right now our CDE sends along a *lot* of common/free shared libs.  It is not
> clear (at least not until we get our new RPM deliverables out) whether we
> should just assume a NewerVersion is better or not.  Also, should we set
> aside the libraries we find or should we wipe 'em out (because we've tested
> against our own compilations).

IMHO, you should leave that decision to the package manager (the "rpm"
program in case of RedHat); if you "set aside" (?) or wipe out newer
libraries installed on the system, an existing application relying on
these libraries might fail (a Windowish behavior which, as a sysadm, I
personally would not appreciate). Replacing an older version is more
acceptable because it is the standard behavior everybody is used to,
given user confirmation.

> Any comments on freeish library/script-language package compatibility?
> I think that's the main issue to conquer: tests for determining if a working
> version of XYZ script or a valid version of jpeg3d.so.84.3.0 are out there.

I believe that developers of both interpreters and libraries would
welcome test suites or additions to such, especially from a third party
which might point out something they did not think about, and would be
happy to include them in subsequent releases. I thought LSB had
something like this in mind for libc.
 
> ] In order to attract ISVs, I think such a scheme needs to work very well
> ] especially in the case of libraries, as it is quite easy to link
> ] statically, disk space is cheap and resource consumption is not my
> ] application's problem (this "single application" mentality is something
> ] which might need consideration). Interpreters typically are (and are
> ] perceived as) system-wide entities.
> 
> Libraries are also system-wide.  Some features appear *because* of new
> libraries (jpeg/tiff/png/gtk/imlib) and updating a shared library might be
> *the* one way to make it more stable.  Note also that LGPL requires that any
> recipient be able to  *re-link* against newer LGPLed libraries if
> proprietary code uses one in the first place.  If not, then you're not
> making an LGPL-compliant link and -all- the source code should then be GPL.
Agreed.
 
> If you want an example of *very* important libs, take the hybrid: tk and tcl.
> They are both libraries and a scripting language.
> 
> (deleted segment...)
> ... (standardized library names, interpreter names) ...
> ... (quantifying changes (improvements/alterations) to an API ...
> 
> Services offered by a lib should be compat-backwards and changes orthogonal
> IFF the version number is within a linkable distance... tho I don't know about
> trying a two-tiered link version system.  (i.e. link against libleep.so.N
> and try to load libleep.so.N.N (a different linknamed library)) dynamically
> via symlink.

Agreed, the GCC-HOWTO says that current practice is to link with
libfoo.so, which actually becomes a link against soname=libfoo.so.3
because that's what's on the system the link occurs in (following
s

Re: Future of Linux (.so contracts)

1998-11-17 Thread Davide Bolcioni
Bruce Stephens wrote:
> 
> Davide Bolcioni <[EMAIL PROTECTED]> writes:
> 
> > If we say a library is a collection of functions which have a
> > signature and an implementation, the notion of change becomes: 1 -
> > an implementation change which preserves the signature; 2 - a
> > signature change (which may be construed as a deletion followed by
> > an addition, so anybody expecting to find the old function should
> > not find it) which almost always implies an implementation change;
> >
> > The problem, of course, is that we check the signature but care
> > about the implementation (in the sense that we call a function for
> > what it does, although we should not rely on the exact means it uses
> > to get the job done). The implementation includes considerations
> > such as efficiency, i.e. application chose a function with more
> > limited functionality because it was more efficient, so
> > implementation goes into the contract in multiple ways (which is
> > inconvenient).
> 
> I'm probably going off at a bit of a tangent, but it strikes me that a
> potentially useful tool is parts of TenDRA
> <http://alph.dra.hmg.gb/TenDRA/>.  TenDRA as a practical compiler
> probably isn't interesting---I suspect egcs beats it (although
> compiling with more than one compiler is useful for checking
> portability beyond gcc, of course)---from the point of view of
> checking portability, or checking signature of APIs, TenDRA provides
> some features which look nice, however.
> 
> With the TenDRA compiler, I can compile the etags.c from XEmacs, and
> be pretty sure that it requires only features (as in headers, types,
> macros, functions) provided by ISO C and POSIX:
> 
> % tcc -Yposix -c etags.c
> 
> Even if the compiler itself is ignored, TenDRA provides a language a
> little more subtle than C header files for specifying what an API
> provides.  For example, you can specify that a struct typedef has
> certain elements, but does not say which order they'll come in (and
> the compiler can check that a program does not try to assume an
> ordering).
> 
> In a sense, perhaps this is too much subtlety for programs to be
> shipped in binary: if my glibc implements some important struct
> differently to yours, then no amount of fiddling is going to get your
> binary to work on my machine.  But for checking (syntactic
> only---there's nothing about semantics involved) portability of
> source, this strikes me as useful.
> 
> Indeed, just writing down (in this already defined language) suitable
> definitions of APIs would surely be handy for a number of uses.  The
> formalism strikes me as a little clearer to read than header files, in
> that it strips out implementation details, making the interface that
> I'm supposed to use more visible.
> 
> Here's a few excerpts for apis/ansi/stdio.h, the definition of what
> ANSI C stdio.h provides:
> 
> +SUBSET "file" := { +TYPE FILE ; } ;
> 
> +EXP FILE *stdin, *stdout, *stderr ;
> +SUBSET "eof" := { +CONST int EOF ; } ;
> 
> This says that FILE is a type, but says nothing else about it.
> Similarly, EOF is a constant int.  The SUBSET things indicate that
> other APIs and other header files may reference these subsets of
> stdio.h without importing the whole lot, I think.
> 
> +IFNDEF __JUST_POSIX
> +IFNDEF __JUST_XPG3
> +TYPE fpos_t ;
> +FUNC int fgetpos ( FILE *, fpos_t * ) ;
> +FUNC int fsetpos ( FILE *, const fpos_t * ) ;
> +ENDIF
> +FUNC int setvbuf ( FILE *, char *, int, size_t ) ;
> +FUNC int vfprintf ( FILE *, const char *, ~va_list ) ;
> +FUNC int vprintf ( const char *, ~va_list ) ;
> +FUNC int vsprintf ( char *, const char *, ~va_list ) ;
> +ENDIF
> 
> Declarations of functions.  Fairly obvious, I suspect.  (~va_list is
> declared elsewhere.)
> 
> Does this kind of writing down of APIs strike anybody else as useful,
> or am I just insane?

The concept seems very interesting to me, although I wonder if it is
within the scope of LSB; I had the notion that its effort was concerned
with standardizing existing development approaches.
  On the other hand, as far as I can see the above approach does not
address one of my major concerns, namely the fact that there is more to a
library function than its signature. In my experience, the function
semantics are the main source of binary incompatibilities (scenario: the
documentation does not tell me enough for what I want to do, so I run
tests or look at the source and develop assumptions about how it works,
write my code in a hurry, and then at the next release of the library my
assumptions break).
  Does anybody know of attempts to address the semantics of APIs, as
opposed to the syntax?

[EMAIL PROTECTED]

Davide Bolcioni
-- 
#include  // Standard disclaimer applies
-BEGIN GEEK CODE BLOCK-
Version 3.1
GE/IT d+ s:+ a C+++$ UL$ P>++ L++@ E@ W+ N++@ o? K? w O- M+ V?
PS PE@ V+ PGP>+ t++ 5? X R+ tv- b+++ DI? D G e+++ h r y?
--END GEEK CODE BLOCK--

Re: Future of Linux (.so contracts)

1998-11-13 Thread Bruce Stephens
Davide Bolcioni <[EMAIL PROTECTED]> writes:

> If we say a library is a collection of functions which have a
> signature and an implementation, the notion of change becomes: 1 -
> an implementation change which preserves the signature; 2 - a
> signature change (which may be construed as a deletion followed by
> an addition, so anybody expecting to find the old function should
> not find it) which almost always implies an implementation change;
> 
> The problem, of course, is that we check the signature but care
> about the implementation (in the sense that we call a function for
> what it does, although we should not rely on the exact means it uses
> to get the job done). The implementation includes considerations
> such as efficiency, i.e. application chose a function with more
> limited functionality because it was more efficient, so
> implementation goes into the contract in multiple ways (which is
> inconvenient).

I'm probably going off at a bit of a tangent, but it strikes me that
parts of TenDRA <http://alph.dra.hmg.gb/TenDRA/> are a potentially
useful tool.  TenDRA as a practical compiler probably isn't
interesting---I suspect egcs beats it (although compiling with more than
one compiler is useful for checking portability beyond gcc, of
course)---but from the point of view of checking portability, or
checking the signatures of APIs, TenDRA provides some features which
look nice.

With the TenDRA compiler, I can compile the etags.c from XEmacs, and
be pretty sure that it requires only features (as in headers, types,
macros, functions) provided by ISO C and POSIX:

% tcc -Yposix -c etags.c

Even if the compiler itself is ignored, TenDRA provides a language a
little more subtle than C header files for specifying what an API
provides.  For example, you can specify that a struct typedef has
certain elements, but does not say which order they'll come in (and
the compiler can check that a program does not try to assume an
ordering).

In a sense, perhaps this is too much subtlety for programs to be
shipped in binary: if my glibc implements some important struct
differently to yours, then no amount of fiddling is going to get your
binary to work on my machine.  But for checking (syntactic
only---there's nothing about semantics involved) portability of
source, this strikes me as useful.

Indeed, just writing down (in this already defined language) suitable
definitions of APIs would surely be handy for a number of uses.  The
formalism strikes me as a little clearer to read than header files, in
that it strips out implementation details, making the interface that
I'm supposed to use more visible.

Here's a few excerpts for apis/ansi/stdio.h, the definition of what
ANSI C stdio.h provides:

+SUBSET "file" := { +TYPE FILE ; } ;

+EXP FILE *stdin, *stdout, *stderr ;
+SUBSET "eof" := { +CONST int EOF ; } ;

This says that FILE is a type, but says nothing else about it.
Similarly, EOF is a constant int.  The SUBSET things indicate that
other APIs and other header files may reference these subsets of
stdio.h without importing the whole lot, I think.

+IFNDEF __JUST_POSIX
+IFNDEF __JUST_XPG3
+TYPE fpos_t ;
+FUNC int fgetpos ( FILE *, fpos_t * ) ;
+FUNC int fsetpos ( FILE *, const fpos_t * ) ;
+ENDIF
+FUNC int setvbuf ( FILE *, char *, int, size_t ) ;
+FUNC int vfprintf ( FILE *, const char *, ~va_list ) ;
+FUNC int vprintf ( const char *, ~va_list ) ;
+FUNC int vsprintf ( char *, const char *, ~va_list ) ;
+ENDIF

Declarations of functions.  Fairly obvious, I suspect.  (~va_list is
declared elsewhere.)

Does this kind of writing down of APIs strike anybody else as useful,
or am I just insane?


Re: Future of Linux

1998-11-13 Thread BadlandZ
On Thu, 12 Nov 1998, Andy Tai wrote:

> > Compilers are also an issue I feel strongly about.  I think gcc and egcs
> > are awsome, but no match (yet) for commercial compilers.
> 
> Don't even think of trying making some commerical compilers part of the Linux
> standard, if they's what you are thinking.   Linux is a GNU system and as 
such
> gcc/egcs has to be the standard.

No, I simply meant that gcc should be linked to cc rather than assuming
that all people use gcc.  Whatever compiler the user chooses should be
linked to cc, and the standard for packaged software should use cc to
compile and install, not look for gcc by default.

-- 
"Robert W. Current" <[EMAIL PROTECTED]> - email
http://www.current.nu - personal web site
"Hey mister, turn it on, turn it up, and turn me loose." - Dwight Yoakam


Re: The Future of Linux: 'real' Locale support from X libs or no?

1998-11-12 Thread Alan Cox
> Glibc is good, but what about wide char, unicode etc.. etc.. etc.. ad biggum.

Glibc does wide char, ncurses seems to imply it does (I've not 
checked yet). 

> toward.  Is there any interest in what we have thus far at Xi?

Well, I know the current KDE doesn't handle 16-bit glyphs; I'm not sure
about the Gtk toolkit on that.

> (hint to some: code pages work only for vts)

Depends on your Xterminal and fonts ;)

The kernel itself uses UTF8 for file names so you can reasonably keep
a Klingon ext2fs if you wish.



Re: Future of Linux (.so contracts)

1998-11-12 Thread Davide Bolcioni
Alan Cox wrote:

> ...
> This is so that every app doesnt install their own version of python
> "just in case". That could be extended to all interpreters and some 
> libraries probably and a farmed out approach would IMHO be good.

This is a notion of "software contract", if I understand correctly: a
tool intended to be used by other programs (interpreter, library) needs
a specification which should be adhered to in releases after the one I
built my application against (CORBA or Eiffel experts might have more to
say about this).

In order to attract ISVs, I think such a scheme needs to work very well
especially in the case of libraries, as it is quite easy to link
statically, disk space is cheap and resource consumption is not my
application's problem (this "single application" mentality is something
which might need consideration). Interpreters typically are (and are
perceived as) system-wide entities.

Having LSB attack the libc problem first seems very reasonable, as it is
the one library which people should hesitate to link against statically,
but maybe it does not bring the notion of a contract fully to light. What
exactly is the contract/specification of a .so?

A few raw considerations on the elements of the contract subject to
change:
- the programming language, e.g. the ABI for C and C++ is different, is
typically a non-issue because it is such an obvious change;
- the library name is the primary "handle" to the library, so if the
name changes we have another library and is again a non-issue;
- the soname (as in libc.5.3.12) is the second most important handle and
is central because it is where most changes are summarized.

If we say a library is a collection of functions which have a signature
and an implementation, the notion of change becomes:
1 - an implementation change which preserves the signature;
2 - a signature change (which may be construed as a deletion followed by
an addition, so anybody expecting to find the old function should not
find it) which almost always implies an implementation change;

The problem, of course, is that we check the signature but care about
the implementation (in the sense that we call a function for what it
does, although we should not rely on the exact means it uses to get the
job done). The implementation includes considerations such as
efficiency, i.e. an application may have chosen a function with more
limited functionality because it was more efficient, so implementation
enters the contract in multiple ways (which is inconvenient).

IMHO, the problem of the .so contract is mostly about (1) once (2) has
been straightened out, i.e. once a signature change is performed in such
a way that the old function is not found, which is an easy test which
can be made automatic (I mean, the dynamic linker already tells you if
it does not find a function, and in C++ this includes the signature
because of name mangling).

On a more concrete note, when linking against a libfoo, should I link
against libfoo.so, libfoo.so.2, libfoo.so.2.6, or libfoo.so.2.6.9 (maybe I
cannot do this last)? What difference does it make? Should it depend
on the specific library, as I assume it does? When libfoo moved to
2.6.10, what changed? (These questions are both about what happens and
about what *should* happen.)

There are multiple points of view involved: the developer of the .so,
the application developer, the sysadmin installing .so system-wide
libraries, the application administrator installing .so shared by a
family of applications (this is typically system wide, but maybe a finer
granularity would be better ?), the power user installing libraries
under his home.

If this discussion seems applicable to the LSB I am willing to carry
this on further here, as it is one of my primary interests.

Davide Bolcioni
-- 
#include  // Standard disclaimer applies
-BEGIN GEEK CODE BLOCK-
Version 3.1
GE/IT d+ s:+ a C+++$ UL$ P>++ L++@ E@ W+ N++@ o? K? w O- M+ V?
PS PE@ V+ PGP>+ t++ 5? X R+ tv- b+++ DI? D G e+++ h r y?
--END GEEK CODE BLOCK--


The Future of Linux: 'real' Locale support from X libs or no?

1998-11-12 Thread Christopher Hassell
You can guess what I'll say I suppose?

Glibc is good, but what about wide char, unicode etc.. etc.. etc.. ad biggum.

X is the main site where that is being taken care of (i.e. fonts, keymaps,
input managers for asia etc..).. and that is not now standard in any great 
and good way, very annoyingly.  Locale stuff that Libc *can* handle is looking
better, but X goes to the ends of the earth.  (Heck China defined "UNIX" as its
"standard".. even though some CEOs right after that claimed to rule far more 
than 5% of the earth's users).

We're already adding in libraries and code to handle it in libs (Motif etc..)
but this is mostly broken and we know it.  I'm not the main developer of 
this but I'm watching LSB and we'd love to find a standard we can build 
toward.  Is there any interest in what we have thus far at Xi?

We (I, at least) may even want to develop the widespread 'free' one that 
ought to exist.  China, Japan and other non-West folks should get more than a
hack for their buck... if they want to do word processing and more under X.

Are there any concerns or opinions toward the non-ASCII universii?

-- Christopher Hassell
   Xi Graphics Inc.

(hint to some: code pages work only for vts)
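The code-page limitation alluded to above can be seen in miniature with any modern language's codec support. A Python sketch, with the bundled shift_jis and euc_kr codecs standing in for the legacy per-locale charsets:

```python
ja, ko = "日本語", "한국어"

# Each legacy code page covers only "its" script:
print(ja.encode("shift_jis"))   # Japanese fits the Japanese code page
print(ko.encode("euc_kr"))      # Korean fits the Korean code page

# ...but not each other's: per-locale code pages cannot mix scripts
# in one document.
try:
    ko.encode("shift_jis")
except UnicodeEncodeError:
    print("shift_jis has no hangul")

# A single wide/universal encoding carries both at once:
mixed = ja + ko
assert mixed.encode("utf-8").decode("utf-8") == mixed
```

This is why locale stuff that libc *can* handle (one charset at a time) still falls short of what multilingual word processing under X needs.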


Re: Future of Linux

1998-11-12 Thread Alan Cox
Badlandz wrote:
> Alan Cox wrote:
> I think it is unwise at this point to make the LSB concerned with X11R6
> standards.  Of course it should/could comply with X11R6, but I

libX11.so.* is Xlib is X11, as are the X packages. Other stuff like
themed widget sets sit on X11 (ie another library that you can specify
when it settles down) or replace X11 (in which case its another spec)

> Therefore I think the LSB should focus on more basic issues like FHS
> compliance, SysV vs. BSD init standards, and libs.

ISV's ship X11 apps, ISV's need to know X11 will just work.

> Compilers are also an issue I feel strongly about.  I think gcc and egcs
> are awesome, but no match (yet) for commercial compilers.

Funny, I think the reverse; so, btw, do Sega, 3Com and Cisco, to name a
few ;)

> therefore if: Linux/hardware allowed a serial number on hardware,
> accessible in the OS, then: ISVs would LOVE to port to Linux because
> there would be no piracy, and they would have a better/more secure
> sales expectation.

But for two things. 

1.  We already do support serial info on hardware that has it (eg sun)

2.  Anyone with an hour can 'fix' their serial number on a sun to be
what they like under Linux. The fact Sun accidentally published
their algorithm doesn't help either ;)

So it's tricky to spec. More productive is for vendors to incorporate
"handy" add-ons that poll the vendor's site weekly once installed and
mail the admin any upgrade info.

In the meantime of course advertising who has which copy ;)

Alan


RE: Future of Linux

1998-11-12 Thread BadlandZ
 --- Begin Message ---
Alan Cox wrote:
> 
> >   What else will the lsb cover? Or has there been a decision about that
> > yet?
> 
> The only other stuff covered at the meeting was X11. The good work XFree
> does is a big help there as their binary interfaces and the X specification
> API's are both stable. Motif has been raised as a question, as has
> OpenGL/MESA.

I think it is unwise at this point to make the LSB concerned with X11R6
standards.  Of course it should/could comply with X11R6, but I
think there is significant merit to the comments by Jim Gettys about
replacing X with an X2 situation where all applications/widgets
universally read themes/styles from different libs to allow more
seamless application integration into a user-defined "look", as
described in http://editorials.freshmeat.net/jim981031/

Therefore I think the LSB should focus on more basic issues like FHS
compliance, SysV vs. BSD init standards, and libs.

Compilers are also an issue I feel strongly about.  I think gcc and egcs
are awesome, but no match (yet) for commercial compilers.

And, if I may, let me hit on the issue of piracy.  Although it is a
strongly unpopular idea, I know, I have been thinking more and more
deeply about this issue.  Let me paste an old draft of something I am
working on here:

Shit, I can't find it.  Yes, I have drunk a lot tonight.  But basically,
it amounts to this:
Every ISV complains about piracy.
Some major software vendors that are vital (proof provided in a draft I
can't find) only port to things like SGI/IRIX where they can use a
key/serial number to allow single-system-only installs.
therefore if: Linux/hardware allowed a serial number on hardware,
accessible in the OS
then: ISVs would LOVE to port to Linux because there would be no
piracy, and they would have a better/more secure sales expectation.

It is interesting; InSight is a case in point, desperately needed and
continuously purchased for $2000+ a year by almost every Chemistry
department in the world, but it only runs on SGI/IRIX stuff.

Anyhow, that would also give GNU a shot in the arm: if you can't pirate
it, you WANT a free version, so, instant motivation!

> ESR also raised the question of standardising things like
> Python. The suggestion for that was that the python people ought to
> define any such standard and then the lsb issue is purely one of namespace
> ie "lsb-python-..." shall be Python meeting he following criteria, with
> the following options etc.

Not a bad idea, the "lsb-..." prefix for more than python.  The idea
being, if an end user downloads it, they can install it and use it on
_ANY_ Linux box (regardless of distribution and hardware).

> This is so that every app doesnt install their own version of python
> "just in case". That could be extended to all interpreters and some
> libraries probably and a farmed out approach would IMHO be good.

Ooo, too deep for me tonight.  I'll get back to that.
 
-- 
"Robert W. Current" <[EMAIL PROTECTED]> - email
http://www.current.nu - my server (looking for a good
site to host)
"Hey mister, turn it on, turn it up, and turn me loose." - Dwight Yoakam
--- End Message ---


Re: Future of Linux

1998-11-11 Thread Alan Cox
>   What else will the lsb cover? Or has there been a decision about that
> yet?

The only other stuff covered at the meeting was X11. The good work XFree
does is a big help there as their binary interfaces and the X specification
API's are both stable. Motif has been raised as a question, as has
OpenGL/MESA. ESR also raised the question of standardising things like
Python. The suggestion for that was that the python people ought to
define any such standard and then the lsb issue is purely one of namespace
ie "lsb-python-..." shall be Python meeting he following criteria, with
the following options etc. 

This is so that every app doesn't install its own version of python
"just in case". That could be extended to all interpreters and probably
some libraries, and a farmed-out approach would IMHO be good.
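What such an "lsb-python" guarantee buys an application is a cheap startup probe instead of a bundled interpreter. A hypothetical sketch follows; the version floor is invented for illustration, since an actual LSB profile would pin real criteria:

```python
import sys

# Hypothetical floor an "lsb-python" profile might pin; any app
# depending on the profile could rely on at least this much.
REQUIRED = (1, 5)

def check_interpreter():
    """Refuse to run on an interpreter below the profiled floor."""
    if sys.version_info[:2] < REQUIRED:
        raise SystemExit("interpreter below the lsb-python floor; "
                         "install the platform's lsb-python package")
    return sys.version_info[:2]

print("interpreter ok:", ".".join(map(str, check_interpreter())))
```

The point is that the check is against a named, namespaced standard ("lsb-python-...") rather than against whatever interpreter the app happened to ship with.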

As to testing and stuff, all I've been watching is HJ Lu's patches and
failure reports for the 2.1.12x kernel.


Re: Future of Linux

1998-11-11 Thread Greg S. Hayes
> UDI is irrelevant. The existing UDI semantics cannot express the Linux
> resource management or driver layering. It's also outside the lsb standard
> area completely (indeed conceptually you could probably hack FreeBSD
> around and produce an LSB-compliant FreeBSD) since we care about services
> at the glibc level.
> 
> Alan

What else will the lsb cover? Or has there been a decision about that
yet?

Greg


Re: Future of Linux

1998-11-11 Thread Hugo van der Kooij
On Wed, 11 Nov 1998, Alan Cox wrote:

> > steps in bridging linux compatibility. What, if any, is the consensus on
> > the FHS 2.0... do the distributions that are part of the lsb agree to
> > use it?
> 
> It was discussed at and shortly after the LI meeting when Bruce presented
> the whole cunning plan. FHS 2.0 is a big help but it might need some
> tightening. Dan Quinlan is conveniently in both the LSB and FHS projects

At present Solaris 2.6 is the best FHS 2.0 compliant OS.

Hugo.

++--+
| Hugo van der Kooij | [EMAIL PROTECTED]   |
| Oranje Nassaustraat 16 | http://www.caiw.nl/~hvdkooij |
| 3155 VJ  Maasland  | (De man met de rode hoed)|
++--+
"Computers let you make more mistakes faster than any other invention in 
  human history, with the possible exception of handguns and tequila."
(Mitch Radcliffe)


Re: Future of Linux

1998-11-11 Thread Alan Cox
> steps in bridging linux compatibility. What, if any, is the consensus on
> the FHS 2.0... do the distributions that are part of the lsb agree to
> use it?

It was discussed at, and shortly after, the LI meeting when Bruce presented
the whole cunning plan. FHS 2.0 is a big help but it might need some
tightening. Dan Quinlan is conveniently in both the LSB and FHS projects.

> Second, I want to address libc. Will glibc be present on all upcoming
> Linux distributions? I believe that moving to glibc is an important step
> in securing a POSIX-conformant Linux. Judging from the latest release of
> Debian, however, I wonder if there is any progress on moving away from
> libc5...

libc5 is dead, even its maintainers have proclaimed this. I've not seen
any major pressure to spec libc5 at all, even if some vendors choose for
now to ship libc5 based systems with glibc available.

> upcoming UDI drivers? Personally, I feel the UDI is one of the BIGGEST
> steps linux has taken to avoid being shut out of the latest hardware by
> Microsoft. The UDI will, most likely, end the FUD tactic of claiming
> that linux only works with OLD hardware.

UDI is irrelevant. The existing UDI semantics cannot express the Linux
resource management or driver layering. It's also outside the lsb standard
area completely (indeed conceptually you could probably hack FreeBSD
around and produce an LSB-compliant FreeBSD) since we care about services
at the glibc level.

Alan


Future of Linux

1998-11-11 Thread Greg S. Hayes
I was overjoyed at the appearance of the LSB, but now I am somewhat
dismayed at the lack of discussion on the mailing list... so, to anyone
listening, LET'S START SOME!

First, I believe that the FHS is probably one of the most important
steps in bridging Linux compatibility. What, if any, is the consensus on
FHS 2.0... do the distributions that are part of the LSB agree to
use it?

Second, I want to address libc. Will glibc be present on all upcoming
Linux distributions? I believe that moving to glibc is an important step
in securing a POSIX-conformant Linux. Judging from the latest release of
Debian, however, I wonder if there is any progress on moving away from
libc5...

What other areas need addressing, and what work is being conducted on
building an LSB distribution? Also, how do people feel about the
upcoming UDI drivers? Personally, I feel UDI is one of the BIGGEST
steps Linux has taken to avoid being shut out of the latest hardware by
Microsoft. UDI will, most likely, end the FUD tactic of claiming
that Linux only works with OLD hardware.

Greg