zfs diff [PSARC/2010/105 FastTrack timeout 04/05/2010]

2010-03-30 Thread John Plocher
On Tue, Mar 30, 2010 at 1:36 PM, Nicolas Williams wrote:
> On Tue, Mar 30, 2010 at 02:04:39PM -0600, Tim Haley wrote:
>> It would be easy enough for me to print a 'time' column as the first


This is getting pretty close to "design by ARC" rather than "review by
ARC";  it might be a better use of ARC bandwidth to take this
discussion offline and place the case in "waiting need spec" mode...

-John (who has ratholed his share of these, and so recognizes the
symptoms easily :-)


More ksh93 builtins [PSARC/2010/095 FastTrack timeout 03/25/2010]

2010-03-29 Thread John Plocher
On Mon, Mar 29, 2010 at 8:42 AM, Nicolas Williams wrote:
> If you replace programs delivered by Solaris itself then you've rendered
> your system unsupportable and, indeed, we will not support it.

That may be true of Oracle's commercial Solaris Product, but we are
talking about OpenSolaris here.

The architectural point is that the user/admin needs control of things
like this; with ksh93 builtins, they have that ability (i.e., they can
turn builtins off...) and update binutils packages and the like.
There is no magic here.

   -John


More ksh93 builtins

2010-03-23 Thread John Plocher
On Tue, Mar 23, 2010 at 4:47 PM, Darren Reed  wrote:
>
> Enjoy (the fact that S10 does not have bash or /usr/gnu or ...).


Worse yet (this is no surprise to most, I'm sure, but pity our poor users...):

On your new OpenSolaris system, create a tar archive of, say, your web
document root that you want to move over to another system:  cd
/export/website; tar cf ~/mywebsite.tar .

scp it over to your other OpenSolaris system, you know, the one you
really use, with ksh93 and a real guru's PATH that starts with
/usr/bin :-)  Try to extract your website:  cd /export/mynewsite; tar
xf ~/mywebsite.tar

It probably won't work, because gnutar does not create tar archives
that are compatible with OpenSolaris' tar.
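
The dialect problem above is concrete: tar has several on-disk formats
(old V7, GNU, POSIX ustar, pax), and an archive written with one
implementation's extensions may not unpack cleanly under another.  A
small Python sketch of the portable workaround - forcing plain POSIX
ustar, which both camps can read (GNU tar's command-line analogue is
its --format=ustar option; the filenames here are illustrative):

```python
import os
import tarfile
import tempfile

# Build a tiny archive in the portable POSIX ustar format instead of the
# writer's native dialect.  GNU tar and Solaris tar disagree on their
# extensions, but both sides can read plain ustar.
tmp = tempfile.mkdtemp()
doc = os.path.join(tmp, "index.html")
with open(doc, "w") as f:
    f.write("<html>hello</html>\n")

archive = os.path.join(tmp, "mywebsite.tar")
with tarfile.open(archive, "w", format=tarfile.USTAR_FORMAT) as tf:
    tf.add(doc, arcname="index.html")

# Any tar implementation that understands ustar can list this archive.
with tarfile.open(archive) as tf:
    members = tf.getnames()
print(members)  # ['index.html']
```

The point is not the Python, of course - it is that neither tool
defaults to the lowest-common-denominator format, so naive users get
burned exactly as described above.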

Repeat for the entire set of gratuitously different tools; the
incompatibilities bite both ways, btw, so this isn't really an "I hate
gnu binutils" rant :-)

The missing *ARCHITECTURE* bit is why it seems OK to produce a system
with superficially interchangeable parts that don't actually work well
together.  It is all well and good to say that it is "familiar", but
having familiar tools that don't work does nobody any favors.

  -John


PSARC 2010/092 libgdata

2010-03-23 Thread John Plocher
On Tue, Mar 23, 2010 at 9:14 AM, Garrett D'Amore  wrote:
>
> The Volatile binding for the library and lack of any documentation for
> external interfaces are going to severely limit its use in cross
> consolidation project.  Maybe that's not a bad thing.
>

The project homepage says:

libgdata is a GLib-based library for accessing online service APIs using the
GData protocol, most notably Google's services. It provides APIs to access
the common Google services, and has full asynchronous support.

The dependency on GLib says to me that this is a GUI focused API set that
will find few non-GUI uses, especially since there are API libraries
available from Google for Java, JavaScript, PHP, Python and Objective-C...

Google defines the protocol: http://code.google.com/apis/gdata/

What is the Google Data Protocol?

The Google Data Protocol is a REST-inspired technology for reading, writing,
and modifying information on the web.

Many services at Google provide external access to data and functionality
through APIs that utilize the Google Data Protocol. The protocol currently
supports two primary modes of access:

   * AtomPub: Information is sent as a collection of Atom items, using the
standard Atom syndication format to represent data and HTTP to handle
communication. The Google Data Protocol extends AtomPub for processing
queries, authentication, and batch requests.
   * JSON: Information is sent as JSON objects that mirror the Atom
representation.

The Google Data Protocol provides a secure means for external developers to
write new applications that let end users access and update the data stored
by many Google products. External developers can use the Google Data
Protocol directly, or they can use any of the supported programming
languages provided by the client libraries.
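
For what it's worth, the two modes quoted above are selected per
request: a client asks for the JSON mirror of a feed via the alt query
parameter.  A hedged Python sketch - the feed URL is purely
illustrative:

```python
from urllib.parse import urlencode

def gdata_feed_url(base, alt="json", max_results=10):
    # GData clients pick the wire representation with the "alt" query
    # parameter: Atom by default, or JSON mirroring the Atom structure.
    return base + "?" + urlencode({"alt": alt, "max-results": max_results})

url = gdata_feed_url("http://gdata.youtube.com/feeds/api/videos")
print(url)  # http://gdata.youtube.com/feeds/api/videos?alt=json&max-results=10
```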

This says to me that the APIs are at least Uncommitted and probably
Committed.

In a perfect world, we would have a project that added all these APIs to
OpenSolaris as a coordinated packaging exercise.

  -John


More ksh93 builtins

2010-03-22 Thread John Plocher
On Mon, Mar 22, 2010 at 9:53 PM, Neal Pollack  wrote:
> If "welcome here" is PSARC

No, it is the OpenSolaris ARC community alias, which PSARC-ext feeds into.

I don't really care what goes on inside the proprietary PSARC at sun.com
as it reviews closed cases - as long as the results of those closed
cases don't get pushed out into the OpenSolaris community's source
repositories.

The metric is pretty clear - for stuff to go into the community's open
repos, the ARC reviews for that stuff must also be open.

Otherwise, I'm sure there are a host of changes that Joerg (the
de facto Schillix ARC chair), Moinak (...Belenix ARC Chair), and the
rest of the *other* distro leaders would *LOVE* to put back into ON
and friends...

What's good for the goose (Oracle's OpenSolaris distro) has to be good
for the gander (the other distros) as well...

  -John


More ksh93 builtins

2010-03-22 Thread John Plocher
On Mon, Mar 22, 2010 at 7:21 PM, Garrett D'Amore  wrote:
> Again, I can't talk about the rationale for the bits that are closed being
> so.


It is pretty clear that the reason it is closed is because Oracle
feels the features, internal build coordination and configuration of
their own distro (OpenSolaris 2010.03 ...) are none of the community's
damn business.

Fine - I can live with the refreshingly honest view that the
OpenSolaris distro is Oracle's proprietary distro built on top of the
community's effort.  The implication of this behavior, though, is that
any "Oracle distro specific" changes, policies and customization must
remain part of Oracle's internal source trees and NOT be pushed back
out to pollute the community's repos.  If it ain't the community's
business now, the community really doesn't want the results shoved
down their throats.

This is no different in concept from EON, Nexenta, Belenix, OSUnix,
Schillix, MilaX (etc...) having to keep *their* distro-specific bits
out of ON and the other OpenSolaris community repos.  Consider it
OpenSource Hygiene - proprietary behaviors (and their results) are not
welcome here.

  -John


discussing replacing pax by open source (was sed)

2010-03-11 Thread John Plocher
>> > > > It first requires a sign of will from people inside Sun/Oracle as we 
>> > > > the

>> Because my experience has been that if you sit down and write the code
>> (including the bits to get it to build and package properly in ON) and
>> the ARC case, participate fully in the code review and work it through
>> to the end then you can get code in.


It is pretty clear that Sun (ahem, Oracle) doesn't care about the
world of OpenSolaris utilities - after all, they laid off the entire
team that was supporting the code.  As such, it comes as no surprise
to me that nobody else is interested in star and other community
driven utilities.  Face it, some code (a change internal to ON...) is
easy to find a sponsor for, while for others (a new KDE desktop, for
example) it is blatantly impossible.

It is also pretty clear that Oracle will do whatever Oracle wants with
Solaris - they have never implied otherwise.

For better or worse, the message coming from Oracle nowadays is that
they are happy if we want to help them work on stuff that they are
also working on, but that they have minimal interest in anything else.

I believe most of these tensions stem from the fact that the codebase
(ON and the other consolidations) is tightly tied to the proprietary
OpenSolaris Distro in ways that don't equally apply to Belenix,
OSUnix, Schillix, EON, Nexenta and the rest, so it is hard
(impossible?) for Oracle to distinguish between "good for their
distro" and "good for the community".  Of course, whenever there is
any confusion, "good for Oracle"  trumps the community.

  -John


Timer restart for 2010/067 Interim modernization updates

2010-03-02 Thread John Plocher
On Tue, Mar 2, 2010 at 1:43 PM, Sebastien Roy  wrote:
> I don't believe that the materials as currently written can be made
> available.


A point could be made that this case (and the associated project's
deliverables) are not appropriate for inclusion in OPENsolaris, and
should (along with any and all closed cases/deliverables) be kept in a
private-to-your-distro repository.  If the case can not be discussed
in public, then the resulting code shouldn't be foisted on the
community either.  After all, if this behind the scenes stuff is OK
for Oracle, why not also Nexenta, Schillix and Belenix (to name just a
few...)?  Proprietary development practices like this don't cut it in
an open community...

   -John


increase number of realtime signals [PSARC/2010/062 Self Review]

2010-02-22 Thread John Plocher
Given Roger's comment that going to 64 and beyond "breaks binary
compatibility" and should only be done on a major release boundary,
isn't *this* the exact right time to do so?  The Solaris 10 to
OpenSolaris Enterprise change IMO *is* such a major release point.
There won't be such an opportunity again for decades...

   -John


On Mon, Feb 22, 2010 at 11:37 AM, Garrett D'Amore  wrote:
> On 02/22/10 11:28 AM, Roger A. Faulkner wrote:
>>
>> I am sponsoring this automatic case for myself.
>>
>
> +1 on the case, on the justification for not expanding to 64.
>
> IMO, this pushes the boundary of what's permissible in a self-review, but I
> see no reason to promote it to a full fast track at this point.
>
>    -- Garrett
>
>> The number of realtime signals supported by Solaris is quite small (8).
>> This is the minimum number required for Posix branding.
>>
>> However, other systems provide many more.
>> Linux supports 32-64 realtime signals depending on the architecture,
>> BSD does 32 or 64 depending on architecture, AIX supports 111.
>>
>> This affects Solaris directly in that the Linux zone
>> provided by Solaris cannot support Linux applications
>> that use more than 8 realtime signals.  See the bug report:
>>     6820733 lack of realtime signals causes Linux application
>>             in BrandZ to fail
>> which is a duplicate of the more general bug report:
>>     6820737 Solaris needs to increase the number of realtime signals
>>             for platform parity
>>
>> This case proposes to increase the number of realtime signals
>> supported by Solaris from 8 to 32.
>>
>> Why not just go to 64, one might ask?
>> The reason is contained in the 6820737 bug report's Evaluation:
>>
>>     Now, as to the request to increase the number of real-time signals
>>     to 64, this would more than double the currently supported number
>>     of signals.  The sigset_t structure, in its present definition,
>>     can only support a maximum of 128 signals.  It's a limited resource.
>>     Increasing the number of real-time signals to 64 would leave only
>>     24 bits remaining in the sigset_t definition for future expansion.
>>     We need more wiggle-room than that for future expansion.
>>
>>     So, when the number of real-time signals is increased, it will only
>>     be increased to 32, not 64.  We can increase to 64 only by changing
>>     the definition of sigset_t and this breaks binary compatibility,
>>     so this can be done only when we move from Solaris 2.x to Solaris 3.x
>>     (or whatever the next naming scheme will be called).
>>
>> Roger Faulkner
>>
>>
>
> ___
> opensolaris-arc mailing list
> opensolaris-arc at opensolaris.org
>
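
The evaluation's wiggle-room arithmetic checks out; a quick Python
sanity check (the 40-signal figure for the traditional signals is
inferred from the quoted numbers, not read from a header file):

```python
# Checking the bug evaluation's arithmetic: sigset_t can represent 128
# signals, and the "24 bits remaining" figure implies roughly 40 of
# them are already taken by the traditional (non-realtime) signals.
SIGSET_BITS = 128
LEGACY_SIGNALS = 40  # derived from the quoted numbers above

for rt in (8, 32, 64):
    spare = SIGSET_BITS - LEGACY_SIGNALS - rt
    print("%2d realtime signals -> %d bits left for expansion" % (rt, spare))
# 64 realtime signals leaves 24 spare bits, matching the evaluation;
# 32 leaves a more comfortable 56.
```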


Basic Network Privilege [PSARC/2009/685 FastTrack timeout 01/01/2010]

2009-12-23 Thread John Plocher
> Just various non-obvious functions in libc().  (Do you think most programmers
> realize wordexp(), pututxline() or grantpt() call fork+exec?)

This is a reasonable characterization of what happens if you lose the
fork and exec privs - a few things break, some of which are obvious
(i.e., fork() no longer works) and some less so.  Somewhere there is a
list of things in the system that fail if you don't have those privs
AND there is nothing on that list that causes angst.

Is there a similar list of OpenSolaris-provided lib routines that will
fail if you don't have network privs?  Is there anything on that list
that comes as a surprise?  Without a list (which doesn't need to be
exhaustive, just typical), how can we evaluate the usefulness/impact
of this priv? At an extreme, if lose_priv("networking") is
effectively equivalent to halt() because nothing in the system works
without it, then I'd question the usefulness of this priv.  I don't
believe things are that ridiculously extreme, but the discussions
about loopback and IF_UNIX make me wonder what the real, effective
impact is.  What system lib routines will now fail unexpectedly
without network privs in the same way that wordexp() fails without
fork()/exec() privs?

The bottom line, to me, is:

    If I need to disable networking privs in my app, but doing so
    disables other OpenSolaris things that I can't live without as a
    side effect, then the networking priv isn't as useful as it could be.

  -John


Basic Network Privilege [PSARC/2009/685 FastTrack timeout 01/01/2010]

2009-12-23 Thread John Plocher
What is the basic use case for this priv?  Is it to let the admin
"sandbox" somebody away from the network for security reasons, or is
it a simple debugging tool to force-fail programs that use any form of
networking? If the former, and it also disables key parts of the
system that happen to use IPC in their implementation, it won't
actually be useful; if the latter, have you characterized what parts
of the system are disabled by it?  Will a JVM even run?  What about a
graphical desktop?  Is there anything that can be usefully done on the
system if this priv is not available?

Maybe there needs to be both a "Local IPC Priv" for loopback usage and
a "Network Priv" for all others...


On Wed, Dec 23, 2009 at 2:34 PM, Alan Coopersmith wrote:
> How would this be any different than if they tried removing other basic
> privileges, like the ability to fork() or exec(), from apps that really
> needed it?   If customers break their system, it's broken.


I think the difference is that for those, the set of system middleware
we provide doesn't silently rely on them for proper operation;
loopback IPC isn't something (like exec()) that is an obvious side
effect or implementation detail in a library...

  -John


PSARC/2009/688 Human readable and extensible ld mapfile syntax

2009-12-23 Thread John Plocher
On a related note, the upcoming S10->OpenSolaris Enterprise transition
is *the* time for such a change.  If you miss this train, you will not
have the opportunity to easily do so for quite a while.

While I agree that "mapfile version = 2 means nonexec stacks" is a
poorly overloaded semantic for such a change, I urge you to come up
with a usable mechanism and deploy it in this release.

 Ali Bahrami wrote:
>> I think this stack protection issue is better solved as part of the
>> solution to
>>
>> ? ? 6239804 make it easier for ld(1) to do what's best
>>
>> which is something we've been thinking about independently of
>> mapfiles (and of course, something that is not part of this case).


James Carlson wrote:
> However, when that solution arrives, won't the implication be that
> non-executable stacks become the default way of doing things?
>
> The question then becomes: what are the steps along that path?

  -John


GCC4: The GNU Compiler Collection 4.X [LSARC/2009/575 FastTrack timeout 10/28/2009]

2009-10-26 Thread John Plocher
On Mon, Oct 26, 2009 at 10:30 AM, George Vasick wrote:

> OK, let me make sure I understand correctly.
>
> In OpenSolaris 2009.06, we should have released gcc43 as opposed to gcc432.

My understanding is that that is the norm for the extended GCC
community - that the "trailing .2" is simply a serial number that
indicates the patch level within a stable release version.  If so,
then we erred in exposing the 3-digit versioning scheme...

> The user could use something like pkginfo or gcc -v to determine that it
> was actually version 4.3.2.

More to the point, the package repo could have all of the following
package versions:
gcc 4.3 patchlevel 2 - used in the ON build environment
gcc 4.3 patchlevel 3 - released Oct 31, 2009
gcc 4.3 patchlevel 4 - released Nov 1, 2009 :-)
and maybe even a "virtual" package that points to one of the above:
gcc 4.3 - latest release

Hopefully, this would let you "subscribe" to the virtual package and
get auto-updates as the IPS repo gets updated; if you instead
installed a specific version.patchlevel, you would not get auto
updated to new version.patchlevels...
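
The pin-versus-subscribe behavior described above can be sketched as a
toy resolver.  This is an illustration of the proposal only, not
actual IPS semantics; the versions and dates are the ones joked about
above:

```python
# Toy model: concrete packages carry a full version.patchlevel, while a
# "virtual" gcc 4.3 package always resolves to the newest patchlevel in
# the repo.  Names, dates, and the resolver are all illustrative.
repo = {
    (4, 3, 2): "used in the ON build environment",
    (4, 3, 3): "released Oct 31, 2009",
    (4, 3, 4): "released Nov 1, 2009",
}

def resolve(pinned=None):
    """A pinned install stays put; the virtual package tracks the latest."""
    return pinned if pinned else max(repo)

print(resolve())           # (4, 3, 4) - subscribers auto-update
print(resolve((4, 3, 2)))  # (4, 3, 2) - pinned installs do not
```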

> There would be no coexistence of
> gcc432 and gcc433.  However, when we release gcc44, coexistence should be
> allowed.

That seems reasonable to me.  If you need 4.3.2, install it - and only
it.  Or, use the "install in my home directory" options of ips to
install a particular version just for yourself or your build
environment...

> Then there is a separate issue of the default commands gcc, g++, c++, and
> gfortran in /usr/bin.  Should they remain at 343 as in 2009.06 or should
> they be bumped up the to latest version of gcc?

The links should be to 4.3 - independent of any patch level - since
(as above) you would not be setting things up to handle multiple
instantiations at the patch level...

> The argument for leaving
> them at 343 is that 343 is the build compiler for OpenSolaris source builds.

As long as one can install 4.3.2 (or is it 343 - I'm thoroughly
confused now, especially since I routinely fat-finger the version
sequences myself :-) from IPS, that should be sufficient.  Bending
over backwards to support co-installs at the patch level hardly seems
necessary...

  -John


GCC4: The GNU Compiler Collection 4.X [LSARC/2009/575 FastTrack timeout 10/28/2009]

2009-10-25 Thread John Plocher
On Fri, Oct 23, 2009 at 6:29 PM, George Vasick  wrote:
> In the previous case for 4.3.2, we had proposed adding "plain" links in
> /usr/bin to the default version of GCC, e.g. /usr/bin/gcc -> gcc-4.3.2.
> ?According to the gcc man page, plain gcc should invoke the last version
> installed.

I'd like to second Rainer's comments that the user should never really
see the bugfix level of things (i.e., that from their perspective,
4.3.2 and 4.3.3 are really the same "version 4.3 compiler").  Such
bugfix-level-versioning *is* appropriate in the packaging metadata,
where one could note that *this* gcc package set is for "4.3.2" and
*that* package set (with the same package names...) is for "4.3.3".
This is common for the rest of the OS - we don't expose the hg patch
level for the "ls" command, for example.

  -John


GCC4: The GNU Compiler Collection 4.X [LSARC/2009/575 FastTrack timeout 10/28/2009]

2009-10-22 Thread John Plocher
Didn't we have this very same coexistence conversation the first time
'round? :-)

The use case of the user installing gcc 4.3.2 "yesterday", installing
gcc 4.3.3 "today", and then uninstalling gcc 4.3.2 "tomorrow" is going
to be a reasonably common upgrade path; whatever causes the "havoc to
both the files and IPS database" is a bug and must be fixed.   Are we
sure this is only a GCC project bug, or is there a related IPS problem
(user provided packages that cause havoc with the IPS system is a
recipe for disaster)?

It sounds like there are two related, but independent "projects" here:

    1) Fix/update the already deployed gcc 4.3.2 packages in the IPS
       repo to allow the use case above, and

    2) Release a new set of gcc 4.3.3 packages.  Assuming, of course,
       that these packages won't have the upgrade bug that slipped past
       in 4.3.2's original release.

Can you point out where the topic of version flexibility, both in the
"co-installed" and "which is the default" areas, is addressed in the
project's packaging design and/or packaging architecture specs?  In
particular, I am looking for the architecture (or design pattern...)
that you are following for choosing which compiler version is invoked
via "/usr/bin/gcc" -  Is it "last package installed", "first...", some
special per-version "make me the default" package, or something else?

   -John


On Thu, Oct 22, 2009 at 4:36 PM, George Vasick  wrote:
> Alan Coopersmith wrote:
>>
>> George Vasick wrote:
>>>
>>> We released 4.3.2 in OpenSolaris 2009.06.  We have to update 4.3.2 in
>>> order to release 4.3.3 to avoid duplicate pathnames between the packages.
>>
>> The case specified 4.3.2 as a new delivery, not something already
>> provided.
>
> Sorry about that.  Here is a new section 4.10 clarifying the situation:
>
>     4.10. Packaging & Delivery:
>         Package                 Status
>         =======                 ======
>         SUNWgcc432              Modified
>         SUNWgcc433              New
>         SUNWgccdoc              New
>         SUNWgcclibgcc1          New
>         SUNWgcclibgfortran3     New
>         SUNWgcclibgomp1         New
>         SUNWgcclibobjc2         New
>         SUNWgcclibssp0          New
>         SUNWgcclibstdc6         New
>         SUNWgccruntime432       Deleted
>
>> Still, it seems wasteful to ship both, instead of replacing 4.3.2 with
>> 4.3.3 and telling developers who want to stay on 4.3.2 to not upgrade
>> their packages, but that probably depends on IPS/OpenSolaris packaging
>> changes to allow those to be upgraded separately from the WOS build.
>
> According to my IPS contact, installing the new 433 packages over an
> existing 432 install would go through just fine with no warnings or errors.
> Uninstall is another story, with all kinds of havoc to both the files and
> IPS database.  There were two choices.  We could update the 432 packages to
> be empty or modify their contents to allow coexistence.  In either case, the
> 432 packages require an update.


CUPS as the default print service [PSARC/2009/514 FastTrack timeout 10/02/2009]

2009-09-30 Thread John Plocher
On Wed, Sep 30, 2009 at 11:29 AM, Sebastien Roy wrote:
> I'd rather see the printers NIS map Obsoleted

+1

> Anyway, not this case.

+1 !!!

  -John


Where is the meta-architecture to support FOSS?

2009-09-28 Thread John Plocher
James Carlson  wrote:
> ...  Thus "FOSS is special."

I believe FOSS *IS* Special - because doing a good job of integrating
general cross-platform FOSS into OpenSolaris is actually HARDER than
integrating something invented by the community specifically for the
OS itself.

The substantial case history for using the taxonomies does a good job
of helping the latter projects - but it leaves the former hanging -
their architectural requirements include things that core OS projects
don't care about: "consistency across multiple different OS and distro
platforms",  "backwards compatibility is only important for a small
number (maybe even one) of previous releases", and, in many cases,
"need to be able to have multiple versions installed on the same
system" (nee installations in multiple different $HOME directories).

> Why does their work need to be reviewed at all?

I think you are running the wrong way down the playing field :-)

Not only do I believe that ARC review is "good" and "important", but I
believe that it could be better and easier if there were some well
thought out guidance to these FOSS projects that helped them meet
their additional requirements.

Right now, we are pushing all this FOSS thru the "architecture for
things embedded into a single OS instance" grinder, and the sausage
that comes out is a bit hard to chew. I'm simply suggesting we should
provide another set of guidance (for the ARCs and the Projects) that
would improve things for everyone.

The people on the ARCs are the ones that have a good idea of the big
picture; they need to take the lead (as in become Leaders and Core
Contributors in the OpenSolaris Community) and help lead the rest of
us (Participants and Contributors) towards a viable set of solutions.
They have the case history to start the conversation, they have the
use cases such policies and best practices can test themselves
against, and they have the ability to revise and shape the review
process itself so that it can try out these new things.

If they don't take the lead in these things, nobody will, and the end
result will be the slippery slope from "'external' software doesn't
need to be reviewed when it integrates into OpenSolaris" to "_no_
software needs the sort of review the ARC provides".

  -John


Where is the meta-architecture to support FOSS?

2009-09-25 Thread John Plocher
James Carlson wrote:
> those who are claiming that all the software world outside
> of Sun is Volatile by mere dint of not having an SMI paycheck
> are in fact *WRONG*

I absolutely agree.

But I think there is another meme flowing thru this conversation:

> If it really is the case that the upstream is known to make
> unpredictable and even baldly stupid moves, then ...
> But when the upstream really is sensible, and doesn't
> deliberately break their own software

I think you missed my point - and provide an example of what I meant when I said

> We used to think that incompatible changes to
> interfaces found in/on Solaris were always Bad,
> and that evolutionary stability was always a
> Good Thing.

Yes, stability is important, but it is not the only thing that
matters.  Under your words, I hear "everything will be OK if we can
achieve interface stability; if we can't, all we can do is punt" (and,
yes, I am taking your statements to an unwarranted extreme :-)

I think we need to admit up front that the traditional Sun Interface
Taxonomy simply does not work for most "pass it on" FOSS projects -
the only pragmatic answer to "what does this project team promise to
commit to" is "nothing", because no matter what the past track record
is from the upstream provider, they have no choice but to follow their
lead going forward.

The traditional Interface Taxonomy assumes that we have the option of
not passing on (or re-engineering or deferring or ...) incompatible
changes so we can live up to our commitments of stability.  In today's
environment, we don't have that option, and furthermore, nobody but us
expects us to use those options in the first place.   If GDB does
something we ourselves wouldn't have done, we still have to ship the
new version on to our users - or those same users will go elsewhere to
get it.  They (and we) don't have the option of doing without it, and
we don't have the resources to re-engineer it.

So let's not go there.  Step back and take a big breath.  What do
*we*, the Operating System / OpenSolaris Distro Developer need from
these FOSS projects?  IM(ns)HO, we need:

1)  If we are building a dependency on it from within our system, we
need a way to install and find a version that works with all the other
core pieces co-located within the system and that won't be changed if
someone installs additional versions.

2) If we want others to be able to use this component, we need it to
be discoverable by them and usable when/where it is found.

3) We must allow for other versions of the thing to be installed, in
additional locations if need be, even if those versions are
incompatible with our stacks or other components.  However it is done,
installation of another version of something must not negatively
impact existing installed silos.

To me, this sounds a lot like the JDK/JRE work that jek3 did - and the
work that was going on in WSARC around architecting multiple coherent
silos, with symlinks from /usr/bin and application ld.so linkage
dependencies into one specific silo along with the ability to create
and install alternative (potentially sparse) silos.

Rather than having an architecture that only allows for a single
global silo of "plays well with others" apps and libs, this moves us
towards a world of multiple silos.  Each silo would have an assumption
of stability within the stack, and no promises outside itself - sort
of like multiple co-installed WOS's...
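
The silo idea reduces to a simple filesystem discipline; a toy Python
sketch under assumed paths (the gdb example and the layout are
hypothetical, not the actual WSARC design):

```python
import os
import tempfile

# Toy layout for the "multiple coherent silos" idea: each version lives
# in its own self-contained tree, and the /usr/bin-style entry point is
# a symlink into whichever silo is the chosen default.
root = tempfile.mkdtemp()
for version in ("gdb-6.8", "gdb-7.0"):
    bindir = os.path.join(root, "silos", version, "bin")
    os.makedirs(bindir)
    with open(os.path.join(bindir, "gdb"), "w") as f:
        f.write("#!/bin/sh\necho %s\n" % version)

usrbin = os.path.join(root, "usr", "bin")
os.makedirs(usrbin)
default = os.path.join(root, "silos", "gdb-7.0", "bin", "gdb")
os.symlink(default, os.path.join(usrbin, "gdb"))

# Installing the second silo never touched the first; changing the
# system default is a single symlink flip.
print(os.readlink(os.path.join(usrbin, "gdb")).endswith("gdb-7.0/bin/gdb"))  # True
```

Installing another version is just adding another silo directory; no
existing silo is modified, which is exactly requirement 3 above.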

This isn't easy - but it isn't impossible, either, given some
leadership and a desire to make something that works.  Rather than
dumping a boil-the-ocean requirement on each and every FOSS project to
accomplish by themselves, mostly in the dark, and without any role
models, maybe we could nail down some architectural principles and
offer some best practices...   Please?

  -John


Where is the meta-architecture to support FOSS?

2009-09-25 Thread John Plocher
>> but we at Sun actually have no control over these interfaces.
> Sun control is not the point here

A big difference between Solaris {2.0, 2.1, ... 2.8, 2.9, 2.10} and
OpenSolaris is the belated acknowledgment that not all problems are
best solved by freezing APIs in stone.  Unfortunately, that
understanding has not yet found its way into ARC best practices and/or
policies, so it is no wonder that FOSS projects like this are left
somewhat clueless.

We used to think that incompatible changes to interfaces found in/on
Solaris were always Bad, and that evolutionary stability was always a
Good Thing.  What we found was that, while stability was desirable,
being different/old/stale from what was available elsewhere was even
worse.

The key in my mind is "available elsewhere" - as on Linux distros,
sourceforge downloads, etc.  I don't think anyone is arguing that we
should relax our stability expectations for core OS things that make
it possible to do distributed development on drivers, OS modules and
subsystems - those things are somewhat native to our system.

But.

Things like Gnu tools, desktops, middleware and the like are another
matter - we live in a heterogeneous world where platform differences
cause severe developer and end-user problems.  The users of these
programs/libraries are already well aware of the evolutionary
stability -vs- perceived value tradeoffs, and resent efforts on our
part to arbitrarily or artificially manipulate their options.

In my mind, ANY FOSS project that doesn't support the concurrent
installation of multiple versions on a system is fundamentally broken.
Of course, there should be an architectural framework for them to do
this, and that framework should support the concepts of "default
version" as well as "version used by the OS" because those versions
may or may not be the same at any given point in time.

There were a few cases over in WSARC-land a few years ago about
architecturally coherent silos of middleware, but I don't think there
has been any substantive activity since then.

Where is the ARC leadership in this area?

   -John


Question concerning ARC agendas and minutes

2009-09-17 Thread John Plocher
The announcements function is being removed from the new OS.o web app
functionality, so it is probably shortsighted to build a process that
relies upon its continued existence.

I'd suggest an auto-mirroring of the agenda to a website page...

  -John



On Thu, Sep 17, 2009 at 1:19 PM, Asa Romberger  wrote:
> Question to the community ARC members,
>
> Currently, I both email the ARC agendas to opensolaris-arc at opensolaris.org
> and post them as announcements at
> http://www.opensolaris.org/os/community/arc/announcements. In addition, I
> have been updating the announcements shortly before the meeting to reflect
> the current list of fast-track cases. I only mail the ARC minutes. This is
> both extra work and can result in inconsistencies between the two agendas.
>
> My proposal is to eliminate the email entirely and post both the ARC agendas
> and the ARC minutes at
> http://www.opensolaris.org/os/community/arc/announcements.
>
> If you have a major objection to this, please let me know. Otherwise I will
> start this practice next week.
>
> Thanks
>
> Asa Romberger
> ARC Coordinator
>
>
> ___
> opensolaris-arc mailing list
> opensolaris-arc at opensolaris.org
>


Consolidations (Re: [xwin-discuss] Obsolescence of /usr/X11)

2009-09-10 Thread John Plocher
> And as for the limitations you mentioned: Thanks for this detailed summary.
> Although: Different needs could also be addressed by a single system

In developing a distributed development system, one could use a
one-size-fits-all approach, or one could step back and define the high
level requirements that such a build system needs to adhere to and let
each development organization come up with their own locally optimized
solution.

Sun intentionally did the latter.  Specifically, the build process
that takes source code and generates binary packages is completely
delegated to the individual consolidation teams.  The C-Teams are
expected to manage the evolution of their source code (via the ARC
process) and do whatever it takes to build and deliver binary
packages.

This means that the build system and the source code control system
and the developer coordination system and the compilers and the
editors and IDEs and ... can *all* evolve independently, and nobody is
stuck behind an obsolete or ineffective system imposed from the
outside.

From an "architecture of the development process" perspective, this is
a very good thing.

Along with the "policies" that all build systems deliver packages and
that changes to the public interfaces exposed by those packages be
coordinated thru the ARC process, there are several "best practices"
that can be used by consolidations as they invent/evolve their own
development processes.  Any consolidation is free to come up with new
and improved best practices in this area, but none are required to use
any particular one.

   -John


Updating the IAM file for opinions

2009-09-09 Thread John Plocher
On Wed, Sep 9, 2009 at 11:37 AM, Darren Reed  wrote:

> Reviewing the file "status.allowed" for guidance on how to update
>


Most of this is already covered in the Member Handbook on sac.sfbay, in the
section about opinion duties.

The status.allowed file is a memory aid, and not intended to be the only
doc one needs.

With that said, I don't see any problem with adding more comments/examples
to it.

   -John


Where is the ARChitecture? (Re: FOSS to /contrib -- a bikeshed paint argument)

2009-09-01 Thread John Plocher
On Tue, Sep 1, 2009 at 11:48 AM, Nicolas
Williams wrote:
>> > - The ARC should stay out of design issues (in this case the design
>> > ? issue of using Tcl to configure shell environment modules);

>> Why?  If the design imposes upon the consumers, then ARC should be
>
> Because design review is not normally in scope for the ARC.
>
>> concerned.  As a specific case here, Tcl imposes two things on consumers:

The systems architecture of *Solaris is impacted by this choice - if
nowhere else, in the areas of minimalization and infrastructure
support costs.  In the past, the systems architecture of Solaris has
tried to minimize the number of arbitrary scripting environments that
were required to be present in the "core system" for just these
reasons - the more scripting languages required by the system itself,
the bigger, costlier and harder to evolve it becomes.

The ARChitecture in this case is one of "yet another leaf-ish thing is
being sucked into the core, and needs to be managed accordingly".

   -John


Where is the ARChitecture? (Re: FOSS to /contrib -- a bikeshed paint argument)

2009-09-01 Thread John Plocher
[jumping in late...]

On Mon, Aug 24, 2009 at 10:15 AM, Nicolas
Williams wrote:
>     Banishing would-be Volatile stuff (e.g., because we have no i-team
>     committed to supporting it) to /contrib won't absolve us when the
>     upstream community breaks compat and we blindly ship the broken
>     version.

I believe you might be making an invalid assumption - that
incompatible changes in a component are somehow automatically and
always an indication of brokenness.

Much of the historic ARC focus has indeed been on how to evolve things
in compatible ways so that dependency stacks don't get unexpectedly
broken.  This view is a natural one if one looks at the OS+apps as a
self-contained ecosystem, which is exactly what Solaris and its
bundled packages was - before FOSS.

Nowadays, the ecosystem is much larger.  It consists of windows,
linux, *bsd and *solaris systems, all trying to be good platforms for
running FOSS projects that are being developed elsewhere.  In this
brave new world, when a 3rd party FOSS component swerves wildly to the
left, it is the conservative platforms who resist that change that are
seen as buggy - or at least not being on top of things.  When your IT
department has a mix of platforms and users who want the latest FOSS
whatchamacallit, compatibility takes on a new meaning:  it is no
longer sufficient to be the same as yesterday, it now needs to be the
same as the upstream source that is being deployed on all the other
platforms.

The job of the ARC is to balance these conflicting requirements in a
way that does the least violence to the players - middleware and
platform developers need predictable foundations, our customers/users
need the ability to manage the migration between versions at their own
speed, and early adopters need to be able to get and use the bleeding
edge latest stuff.  The question isn't "should we provide a new
version?", but rather "how can we provide a *set* of versions that can
be used to meet the various needs articulated above?".

The easy case is when a component evolves in only compatible ways.
We've got that idiom nailed.  Now we need to expand our abilities to
encompass the not-so-easy cases.

   -John


changing stability levels

2009-08-17 Thread John Plocher
On Mon, Aug 17, 2009 at 8:12 PM, Garrett D'Amore wrote:
>> Erm... does this even apply to "Private" interfaces (the question was
>> for a switch from "Project Private" to "Consolidation Private") ?

> Yes. You're expanding the scope,

Technically, no, you don't need ARC interactions for a Private =>
Consolidation Private change.  ARChitecture at the PSARC level is
predominantly about public interfaces - or private ones that cross
consolidation boundaries.

Practically speaking, yes you should.  Consolidation Private means
that the C-Team needs to track this stuff, and the way they do that is
to leverage the ARC documentation archives.  After all, why reinvent
the wheel?

Thus, you all are right.  A quick ARC fasttrack is the preferred
low-effort/low-impact mechanism to accomplish your desired result.
Not a full case, and not a thing that on the surface should be
expected to be derailed...

  -John



GnuPG and friends

2009-07-24 Thread John Plocher
Yes, you can attach an opinion regardless.  It has been done before;
just write it, get a grunt of approval at the next PSARC meeting and
change the IAM file to reflect the final state (closed approved
fasttrack changes to closed approved , IIRC)

  -John



On Fri, Jul 24, 2009 at 11:19 AM, Mark Martin wrote:
> Don Cragun wrote:
>>
>> Before the vote was taken, it was obvious to me that doing this is not
>> part of this case.  I was considering asking that the case be derailed
>> just to forward a note up the chain to ask that a project to do this be
>> funded.  But, given the current climate at Sun, I didn't think it would
>> be worth the effort.
>>
>
> Is it too little too late to ask that it be derailed just long enough for an
> opinion to be written?  I have almost the same concerns you have, Don, and
> they apply in a more general sense, as I suspect they do for you.  Can we
> just attach an opinion regardless?  Is the mail record enough?
>



FOSS Library Availability was Re: GnuPG and friends ...

2009-07-24 Thread John Plocher
>>> But surely there are limits.  Having the GNU Pth library on the system
>>> for other apps to link with is bad.

Reality check time :-)

As Don said, porting GNU Pth to be "native" on OpenSolaris looks to be
a very easy thing to do...

If the status quo of "bloated Pth on OS" really bothers you, stop
writing email about how bad it is and go write some code instead.  Go
port Pth to OS native threads - it sounds like an easy bite-sized
project :-)

  -John



SATA Framework Port Multiplier Support [PSARC/2009/394 Self Review]

2009-07-15 Thread John Plocher
On Wed, Jul 15, 2009 at 11:02 AM, Alan Perry wrote:
> sponsored cases with more
> significant changes where PSARC members said "why is this a fast-track and
> not a self-review".

The key point isn't "significant changes", but rather "what is the
existing stability level of the things being changed?" - we do ARC
stuff so we can manage the impact and repercussions of the stuff we
change.

Adding things is easy, as is changing things in compatible ways.  Even
incompatible changes are easy, as long as they are to things that have
low longevity/stability expectations.

As you move up the stability levels with incompatible changes, the
need for review naturally increases, because the side effects of such
changes impact more and more projects/teams.

So maybe the best litmus test is one that captures the difficulty of
managing the change once it gets out into the world:  "who might be
negatively impacted by my change?" - with "none but me" equating to
self review, "family and friends, but we can easily deal with it" to
"fast track" and "people I don't know well" to "full review".

  -John



SATA Framework Port Multiplier Support [PSARC/2009/394 Self Review]

2009-07-15 Thread John Plocher
On Wed, Jul 15, 2009 at 10:17 AM, Alan Perry wrote:
> However, I am concerned about inconsistent application of the documented
> process.

In the past, my decision tree looked like this:

Proposed stability level for new interfaces:

{Project Private, Not an Interface} => Self Review
{Consolidation Private}  => Fast Track
{Sun Private} => Deny with extreme prejudice :-)

else

=> Fasttrack or full case, as circumstances require, based on
 whether incompatible changes are being made to interfaces
 that have existing expectations of longevity...


In this case, adding CP interfaces means that there is a need to
record and communicate within the consolidation, so some sort of
ARC-archived interaction is appropriate.  Whether that is a recorded
self-review, fasttrack or full case depends on what else is being done
by the team - with a full case being exceptional and probably
undesirable...

  -John



[website-discuss] The fugitive ARC caselog

2009-06-11 Thread John Plocher
The "old" caselog folded together all cases from all ARCs; for some
reason, when AlanB rewrote things, he put back the per-ARC distinction
that was not really desirable "out here" in the community.  If you can
map from

 http://www.opensolaris.org/os/community/arc/caselog//NNN

into the new

http://arc.opensolaris.org/caselog///NNN/

then things will be somewhat better.

The other incompatible changes are that Alan didn't keep the code that
generated per-case index pages with summary data about the case or
keep the name mapping conventions (if html, txt, pdf and ms versions
of the opinion were all there, just present the html version, encode
pdf files as attachments, mangle the names to match the webapp's
requirements for page names...)

I believe that it is this latter that Mark is referring to.

  -John



EOF UCB Device Names [PSARC/2009/346 timeout 06/16/2009]

2009-06-10 Thread John Plocher
On Tue, Jun 9, 2009 at 10:29 PM, Garrett D'Amore  wrote:
> However, I have some reservations about this change for Solaris 10.

I understood Jerry's comment...

Jerry Gilliam wrote:
> For the minor (major?) release only, I think it would be reasonable to
> modify iostat(1M) to -n behavior by default, effectively obsoleting
> -n and the  form of device name reporting.

... to say that iostat, vmstat, and friends would not change
their default in Solaris10 (which would be the "patch binding" part of
the effort), but that they could/would in OpenSolaris/Solaris.next
(the minor/major? binding).

  -John



EOF UCB Device Names [PSARC/2009/346 timeout 06/16/2009]

2009-06-09 Thread John Plocher
On Tue, Jun 9, 2009 at 5:54 PM, Jerry Gilliam  wrote:
> For the minor (major?) release only, I think it would be reasonable to
> modify iostat(1M) to -n behavior by default, effectively obsoleting
> -n and the  form of device name reporting.  We would
> need to retain and ignore -n for compatibility I assume.

This sounds good.  All we need now is to update the log entries that
were mentioned...

  -John



EOF UCB Device Names [PSARC/2009/346 timeout 06/16/2009]

2009-06-09 Thread John Plocher
+--- from man page--
|  EXAMPLES
| ...
|   Example 2: Enabling the service
|
|   # svcadm enable system/ucb-device-names
|
|Example 2: Enabling the service
|
|   # svcadm disable system/ucb-device-names
+--

I assume that the duplicated "Example 2" heading is a typo, and that
"disable" removes the links created by "enable" (the proposal never
explicitly says it does...)


+-- from the proposal ---
|4.12. Dependencies:
|Some tools such as iostat generate device names using the
|  form,
| ...
|   The same  names are logged along with device errors
|   in  the system log
+--

For completeness, it would be good if you also updated the tools (such
as iostat) and the system error log messages to generate correct device
names by default, since with the removal of UCB names, their output
will become something between deliberately misleading and wrong :-)
Yes, I know this opens a stability-of-output can of worms, but being
able to fix these types of reality mismatches is exactly why the next
"Solaris" is being positioned as a major release...

  -John



EOF of legacy bus mice [PSARC/2009/334 FastTrack timeout 06/10/2009]

2009-06-03 Thread John Plocher
+1 on the proposal as-is; I don't want this side discussion to derail
the case, but...

In the unlikely event that anyone is impacted by this,
A) is the source for the driver(s) open/available for other
distros/individuals to put it back, -or-
B) are the driver binaries available in a repo so that they could be
re-installed?

Maybe what I'm asking is whether we need to remove the driver source
from our source tree (like this proposal implies), or if it would be
sufficient to simply remove the packages from the core system
definition so they are not installed/used by default.  Or move the
source (et al) to the contrib (or obsolete or ...) repo? Or...?

Given the age and systems characteristics of bus mice, this may be a
poor test case (and if so, that's OK by me), but, in general, it seems
to me that combining "removing old hardware support" and "open source
OS used by a long tail of different hardware users" is somewhat of a
mismatch.  What if your assertion of  "not used" is wrong? Can a
person affected by the removal /do/ anything about it?

  -John

> Garrett D'Amore - sun microsystems wrote:
>> We'd like to EOF these drivers from Solaris Nevada.  We don't believe this
>> will have any negative impact on anyone -- the only impact should be the
>> positive result of removing the driver binaries, man pages, and associated
>> source code.



jar file man pages - was Re: trove-2.0.4 [LSARC/2009/262 FastTrack timeout 05/05/2009]

2009-05-19 Thread John Plocher
+-- various people wrote:
|
| ... I remain unconvinced that we (OpenSolaris) should even be concerned
| about stability of Java APIs
|
| ... Do other members see value in having another layer of commitment and
| review beyond whatever is already done as part of the Java community?
|
| ... At a minimum, the definition of stability levels may not be the same,
|
+--

It should be noted that the Java community has a very restricted
formal interface taxonomy, which is a subset of the ARC one.  Anything
covered by a JSR is "Committed, Standard"; /everything/ else is
"Volatile".  There are psuedo-exceptions for in-progress JSR-track
interfaces and for defacto "probably committed, but not JSR track
interfaces", which should (IMO) both be considered Volatile as well,
since the various OpenSolaris project teams and the ARC have no
interest in seriously tracking each and every change that go into
them.

Trying to force fit additional taxonomy levels onto artifacts built by
a community which does not use or value those levels themselves is at
best make-work, and at worst, foolish :-)

  -John



good and bad news on Garrett's case problems

2009-04-15 Thread John Plocher
The Projects.thisweek file collects the submissions as they come
in, and is where I'd expect to find 213 and following.  It gets moved
to weekly/project.db., and then appended to projects.db
when Aarti runs the LIST.  The tools all know to look there as well as
project.db to find things...

Check the permissions on Projects.thisweek and project.db - if someone
not in group sac wrote to it (i.e, owner/group is "someone,staff"
instead of "someone,sac"), this could prevent the sac
nextcase/onepager tools from writing to the file...
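The ownership check suggested above can be sketched as a short script.
The /tmp tree below merely simulates the real ProjectDB directory (which
lives on sac.sfbay); the group name "sac" and the file names come from
the message itself.

```shell
# Simulate the two DB files in /tmp (stand-ins for the real ProjectDB files).
db=/tmp/sacdb-demo
rm -rf "$db" && mkdir -p "$db" && cd "$db"
touch Projects.thisweek project.db

# Report any file whose group is not "sac" -- per the message, such a file
# would prevent the sac nextcase/onepager tools from appending to it.
check_groups() {
    for f in Projects.thisweek project.db; do
        grp=$(ls -ld "$f" | awk '{print $4}')
        [ "$grp" = "sac" ] || echo "check $f: group is $grp, not sac"
    done
}
check_groups
```

On a machine where the files were written by someone outside group sac,
this prints a warning line per file; fixing it would be a `chgrp sac`
plus restoring group write permission.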

The tools send "help me, something is screwed up" mail to the
sacadmin at sac alias when they detect fubars like this, so there should
be a mail log/list of the "missed" cases.

sactrac2db is read-only with respect to the filesystem - it /only/
writes to the MySQL database.

  -John


On Wed, Apr 15, 2009 at 11:04 AM, James Carlson  
wrote:
> The good news is that I've figured out what's wrong.  It's our old
> nemesis: /export/sac/Archives/ProjectDB/project.db.  Garrett's cases
> are not in that file.  In fact, nothing's been added to the file in
> several days.



20 Questions # 5 update [PSARC/2009/179 FastTrack timeout 03/25/2009]

2009-03-19 Thread John Plocher
>> >     You could talk with the TX team.

As with all the 20Qs, there is significant value in having something
more than an open ended question that teams can't fully comprehend.
Some sort of context (checklist, description, URL, Best Practice,...)
so that the teams can say "hey, that sounds like something our stuff
might or should do" rather than "No, we don't do {TX, branded zones,
zones}, ignore the question - uhm, what is {TX, branded zones,
zones}?".

Even better would be some sort of "answer" - a "how to architect and
design things so they play well with {TX, branded zones, zones}"
document that teams can use to reduce the need for /everyone/ to
interact with the {TX, branded zones, zones} team in order to get a
clue.

Without such a document to disseminate {TX, branded zones,
zones}-clue, you can't expect teams to do anything to play well with
{TX, branded zones, zones} - the default "ignore {TX, branded zones,
zones} because it isn't obvious that it applies to us" pressure is too
great.  Even cryptic ARC 20Q references won't really change behaviors
- although they will increase the tension and conflict that surfaces
in ARC meetings as teams get blindsided with new-to-them undocumented
requirements.

  -John



2009/139 CIFS CATIA Translation Share Property

2009-03-04 Thread John Plocher
On Wed, Mar 4, 2009 at 11:19 AM, J Mcintosh  wrote:
> The translation of a '/' character would be performed if a filename read from
> the file system (eg returned from VOP_READDIR) contains the '/' character.

How would such a file be created in the first place on OpenSolaris?

  -John



GNU Developer Collection [LSARC/2008/776 FastTrack timeout

2009-02-13 Thread John Plocher
>2)  Binutils will be installed directly into /usr/bin
>with no versioning.

Sorry, I'm now really confused.

If there is no versioning, and everything will be installed directly
into /usr/bin, then what is the directory
"/usr/gnu/i386-pc-solaris2.11/bin/" used for and isn't the
"...solaris2.11..." part of that directory name actually an OS version
dependent string?  What happens when OpenSolaris moves from 2.11 to
2.12?  Why should the version of the OS impact the version of the
compiler support tools?  ...etc...

 -John



GNU Developer Collection [LSARC/2008/776 FastTrack timeout

2009-02-12 Thread John Plocher
Why

> usr/gnu/bin/ar
> usr/gnu/i386-pc-solaris2.11/bin/ar=../../bin/ar

instead of

> usr/gnu/bin/ar=./i386-pc-solaris2.11/bin/ar
> usr/gnu/i386-pc-solaris2.11/bin/ar

(i.e., have the symlinks point to the more tightly versioned instance)

I don't have a strong feeling either way, but I wanted to note that at
some point in the future you may wish to upgrade from
usr/gnu/i386-pc-solaris2.11 to usr/gnu/i386-pc-solaris2.12, and the
current scheme doesn't play as well in that scenario...


  -John



GNU Developer Collection [LSARC/2008/776 FastTrack timeout

2009-02-12 Thread John Plocher
>> > usr/i386-pc-solaris2.11
>>
>> Is this strange path necessary? Can't the subdirectories (bin and lib)
>> go directly to /usr?
>>
>> The answer to the first part is no.  The commands in
>> usr/i386-pc-solaris2.11/bin cannot be moved to /usr/bin since they
>> conflict with existing Solaris commands.  It is potentially possible to
>> eliminate them, however, since they are duplicates of commands already
>> installed in /usr/bin and /usr/gnu/bin.

If these existing commands are from earlier versions of binutils,
PLEASE UPDATE THEM instead of leaving them around as an obsolete
nuisance.

  -John



ksh93 update 2 [PSARC/2009/063 FastTrack timeout 02/09/2009]

2009-02-03 Thread John Plocher
Peter Tribble wrote:
>> I know that I'm certainly not happy about ripping out Solaris commands and
>> replacing them with external commands.


Since Sun's management seems to have RIF'd the entire team that used
to maintain those old Solaris commands, it seems clear that *they* no
longer have the same commitment to them as they used to.  In that
case, anything that moves towards a codebase that /is/ actively
supported by people who are experts in the field is rather to be
desired, don't you think?

  -John



Sun Studio C/C++/dbx Collection [LSARC/2009/017 FastTrack timeout 01/21/2009]

2009-01-23 Thread John Plocher
On Fri, Jan 23, 2009 at 6:05 AM, Arieh Markel  wrote:
> I don't understand why a solution
> similar to the multiple jdks would not be applicable and appropriate
> here.

Neither do the rest of us.  It was suggested several times in this
thread - and not always by me :-)

  -John



Sun Studio C/C++/dbx Collection [LSARC/2009/017 FastTrack timeout 01/21/2009]

2009-01-22 Thread John Plocher
On Thu, Jan 22, 2009 at 1:17 PM,   wrote:
>  May I suggest /usr/suncc (short for Sun
> Compiler Collection).


This works for me, though a better long term answer might be
/usr/suncc/$VERSION/, which would allow one to install other unbundled
releases alongside AND for you to be able to gracefully support the
transition from 2008.11 to 2009.xx and 2009.yy going forward, not to
mention providing an architecture for installing Express and full
Studio releases there as well.

Taking a step back, all that posturing about how bundling makes it
different and special doesn't ring true to me - it's more like the
unstated focus is on "how do we do this quickly without wasting
engineering resources on it and without being forced to change the
status quo elsewhere in the compiler/studio world."

  -John



Bundled Compiler Collection [LSARC/2009/017 FastTrack timeout 01/21/2009]

2009-01-21 Thread John Plocher
On Wed, Jan 21, 2009 at 11:56 AM, Danek Duvall  wrote:
> No one's suggested that; I'm not sure why you're bringing this up.

Thanks for the clarification - I mistook your comment "That would allow
/usr/bin to be the path to the bundled compilers all the time" to imply
both PATH and RPATH.  Sorry.

  -John



Bundled Compiler Collection [LSARC/2009/017 FastTrack timeout 01/21/2009]

2009-01-21 Thread John Plocher
On Wed, Jan 21, 2009 at 8:06 AM, Danek Duvall  wrote:
> Couldn't they just set their path to have /opt/xxx/bin before /usr/bin to
> use one of however many unbundled copies they have?  That would allow
> /usr/bin to be the path to the bundled compilers all the time.


In a situation like this (multiple installs of a complex component),
it is best if each instance is installed in its own location, with
RPATH set to that location (using $ORIGIN as needed).  This divorces
the concept of "will it work?" from that of "what is the default".
Hardcoding one particular instance of a multi-install family to only
work in /usr/bin (...) ends up being an architectural blunder.

If /all/ the instances are self contained that way, then choosing one
to be a default is as easy as mucking with symlinks or PATH.  And
changing that default is just as easy.

(In this thread, it seems obvious to me that all the instances of the
Sun compiler are architecturally related: bundled, unbundled, express,
alpha, studio.old, studio.new...)

  -John



Sun Studio C/C++/dbx Collection [LSARC/2009/017 FastTrack timeout 01/19/2009]

2009-01-15 Thread John Plocher
On Thu, Jan 15, 2009 at 11:47 AM, Chris Quenelle  
wrote:
> multiple compilers in /usr

This thread suggested at least two, and I can see more: the bundled
one and the express one; we also have the version that can be used to
compile OpenSolaris -vs- the one shipped by the compiler team, ...

> very interesting challenge to deliver a subset of that product
> into Nevada on a regular basis.

... which is why I suggested coming up with an overarching architecture...

It doesn't have to be an "interesting challenge".   It *could* be as
simple as choosing which packages to co-bundle and install.  The whole
bundled/unbundled, /usr -vs- /opt thing is not set in stone.  You
could come up with another, better scheme.   The problem I see in this
case is that it is built on a lot of history,  it assumes that some
things can and other things can't be changed, and it has some vision
for a future that changes based on who is doing the talking.

My suggestion is to take a step back and ask what the best possible
result might look like.

When /I/ do that, I get something that smells a lot more like the Java
JDK mechanism (everything under /usr/jdk/$version, with /usr/java
being a symlink to the default version) than this current "some stuff
in /usr, but other stuff that is the same, but different in /opt,
depending on what time of day it is in Sun's compiler marketing
department" :-)  Of course, /your/ mileage may vary, may contain nuts,
etc...

  -John



Sun Studio C/C++/dbx Collection [LSARC/2009/017 FastTrack timeout 01/19/2009]

2009-01-15 Thread John Plocher
On Thu, Jan 15, 2009 at 9:34 AM, Chris Quenelle  
wrote:
> In the latest proposal, the bundled compiler binaries are not versioned,
> and neither are the man pages.  If you want multiple versions of the
> compilers, you can install as many different versions of the
> unbundled product as you want.  This case is about choosing one
> version to bundle into the OS as the default compiler.


This case *should* be about setting up a structure that allows for
both the bundled and unbundled compilers to coexist and evolve in a
coordinated way.  The customer doesn't care one bit about whether Sun
considers a particular version to be bundled or not; history shows
that Sun changes its mind often about this sort of thing.  The
architecture shouldn't be tied to such marketing distinctions either.

A scheme like the current studio compilers one would work well:

/usr/suncc/$version/[bin, lib, man, ...]

A separate  should be used to manage the concept of
"default".  Links to /usr/bin, management of a /usr/suncc/latest
symlink, whatever should all be architecturally independent from the
delivery of a versioned compiler instance.
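The scheme above can be sketched with plain symlinks, in the style of
the /usr/jdk layout mentioned elsewhere in this thread. The version
numbers and the /tmp prefix are made up for illustration; the real
layout would live under /usr/suncc.

```shell
# Each compiler version is self-contained in its own directory;
# "default" is nothing more than a symlink that can be repointed.
set -e
base=/tmp/suncc-demo
rm -rf "$base"
for v in 12.1 12.2; do
    mkdir -p "$base/$v/bin"
    printf '#!/bin/sh\necho "cc version %s"\n' "$v" > "$base/$v/bin/cc"
    chmod +x "$base/$v/bin/cc"
done

# Select 12.1 as the default (analogous to /usr/java -> /usr/jdk/$version):
ln -sfn "$base/12.1" "$base/latest"
"$base/latest/bin/cc"        # cc version 12.1

# Changing the default is one atomic symlink swap; nothing is reinstalled,
# and every versioned instance remains directly addressable:
ln -sfn "$base/12.2" "$base/latest"
"$base/latest/bin/cc"        # cc version 12.2
```

This keeps "which versions are installed" architecturally independent
from "which one is the default", which is the separation the message
argues for.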

  -John



Laptop Hotkey Support [PSARC/2009/016 FastTrack timeout 01/16/2009]

2009-01-12 Thread John Plocher
[jumping in late after reading the thread...]

> Since there is no generic ACPI specification for other hotkeys, most
> vendors just define their own specific ACPI based hotkey method. This
> case will also add Toshiba specific ACPI hotkey method support for:
> 1. Fn + ESC: audio mute On/Off
> 2. Fn + F1: screen lock
> 3. Fn + F3: suspend to RAM(on S3 capable platform)
> 4. Fn + F8: wireless LAN On/Off
> 5. Fn + F9: touchPad On/Off

So, to use a real world example to flesh out the details:  I don't
have a toshiba; instead I've got an apple mac laptop.  It uses

F1 lower screen backlite
F2 brighten screen backlite
F3 mute
F4 lower volume
F5 raise volume
F6 NUM LOCK
F7 cycle between internal and external screen mirroring -vs-
independent displays
F8 kbd backlite off
F9 lower kbd backlite
F10 brighten kbd backlite
F11 Move windows off screen to show desktop & icons on it
"Eject" ejects cd/dvd
Lid close Suspend to RAM

With this proposal, can this platform be detected and these keys be
set up and used automatically? If not, what needs to be done to make
it work (and who would have to do the work - you, someone in the OS
developer community, the end user, ???)

  -John



GNU Developer Collection [LSARC/2008/776 FastTrack timeout 01/07/2009]

2009-01-12 Thread John Plocher
On Mon, Jan 12, 2009 at 10:21 AM, Rainer Orth
 wrote:
> I don't buy this

+1 - repeat as needed.

Rainer is raising extremely valid points that directly point at
architectural sloppiness and muddy thinking in these proposals.

It really sounds like someone has decided to create a /usr/compilers/
playground where the compiler team at Sun can stuff whatever private
copies of things they wish, without really dealing with the community
itself.

If micro versions of the GNU binutils need to go in this sandbox, why
shouldn't copies of the "solaris" as, ld, ... commands be put there
also?  What happens if *they* change out from under the studio
compiler?  Is this because you have different rules for Sun stuff -vs-
community stuff, you don't trust the community stuff,  you don't put
the effort into understanding it, or ... ?

The path forward seems to be along the lines of

   Update binutils in (Open)Solaris to the latest; it won't break anything.

This is exactly the same architectural mechanism that allows
us to update the OpenSolaris versions of as, ld, ar, ls, ... every two
weeks in Nevada builds - we trust that those things will evolve in
compatible ways.  We are so sure of this principle that we don't even
bother to mark the components of these weekly releases with their own
explicit version numbers...

  Provide an architectural proposal for /usr/compilers that addresses
Rainer's open questions such that the community (and not just Sun's
compiler group) can maintain things there.  If this is truly a
*compiler* sandbox, where does Sun's Java compiler fit? etc etc etc

  Provide sub-cases for the various compilers (including gnu3 as well
as gnu4...) that show how they will live and evolve in that sandbox.
How multiple versions will coexist, be used, etc.

  -John



LSARC 2008/741 Exuberant CTags Packaging for OpenSolaris

2009-01-06 Thread John Plocher
It can be delivered into DevPro *AND* the resulting packages can be
co-bundled with Solaris - the result being that there is only one
definitive source for the packages (and their patches and updates...)

OR

Deliver the same source into SFW and Devpro.  The Devpro install
should ONLY install and/or use its own copy on OS versions previous
to OpenSolaris; on OS, it should use the SFW version...

(Realize that the DevPro version will probably need to be compiled
on Solaris8 instead of Nevada so the bits can run on older OS's...)

  -John


On Mon, Jan 5, 2009 at 6:22 AM, Brian Utterback  
wrote:
> Another issue has come up. The OpenGrok case (pending, not yet
> submitted) has a dependency on Exuberant CTags. In fact, this
> dependency is what has delayed the filing. In any case, the desire is
> to have OpenGrok integrate in the SFW consolidation for inclusion in
> OpenSolaris. However, OpenGrok requires Exuberant CTags to work.
> Since this case proposes to integrate Exuberant CTags into the DevPro
> consolidation, this leaves us with a quandary.
>
> One possibility is to deliver OpenGrok into DevPro. The OpenGrok team
> wants to deliver to OpenSolaris, I don't know how the DevPro people
> feel about this. Another is to deliver Exuberant CTags into SFW. I
> suspect that the DevPro people will still want it in DevPro because of
> the support of prior Solaris versions and other platforms. And the
> last possibility is to deliver a private copy of Exuberant CTags with
> OpenGrok, suitably disguised to prevent users from finding it.
>
> Is there a precedent for this situation? Any idea on the best course
> of action?
>
> --
> blu
>
> "Murderous organizations have increased in size and scope; they are
> more daring, they are served by the most terrible weapons offered by
> modern science, and the world is nowadays threatened by new forces
> which, if recklessly unchained, may some day wreck universal
> destruction."  - Arthur Griffith, 1898
> --
> Brian Utterback - Solaris RPE, Sun Microsystems, Inc.
> Ph:877-259-7345, Em:brian.utterback-at-ess-you-enn-dot-kom
> ___
> opensolaris-arc mailing list
> opensolaris-arc at opensolaris.org
>



GNU Developer Collection [LSARC/2008/776 FastTrack timeout 01/07/2009]

2008-12-20 Thread John Plocher
On Fri, Dec 19, 2008 at 10:32 PM, Dale Ghent  wrote:
>>> /usr/compilers/... for this and future versions of compilers allowing
>> Should we have /usr/interpreters too?
> "But bash, ksh et al are interpreters, too!", some might argue.

> killer of usability?


Any system architecture that allows multiple versions of something to be
installed and used at the same time on the same system must, by necessity,
provide a unique place, outside the normal paths, to put all those versions.

/usr/compilers is not just for gnu -vs- studio, but also for gnu 3, gnu 4,
gnu 4.1.lefthand.blue, f77, f88, f99 etc.

These directories and their contents are aimed at the user who,
through their own manipulation of PATH, Makefile or script, wishes
to use a specific version of a component.  They are also used as
the dependency target for any other delivered packages that
are version sensitive.

For those USERS who don't care about that level of detail, there also
needs to be a way for them to say "I want this specific version to be
the default".  Today, this is done via a symlink in /usr/bin that sets the
default for *ALL* users of the system.

Don't confuse the two locations.  We need both.  /usr/bin for the
unsophisticated user, /usr/compilers (and friends) for cross package
version dependencies, power users and the like.

There are WSARC cases that deal with this whole multiple version
thing in depth.  Find them, read them and use them.

 -John



mailwrapper (PSARC/2008/759)

2008-12-10 Thread John Plocher
On Wed, Dec 10, 2008 at 2:08 PM, Ceri Davies  wrote:
> I really hadn't intended to suggest that "man sendmail" should show
> anything other than exactly what it does now.

A "good" result might be:

man sendmail

Shows the man page for /usr/sbin/sendmail (aka mailwrapper), which has a
reference to the "real" /usr/lib/sendmail and other MTAs.

man -s1M sendmail

Shows the "real" sendmail manpage, with a reference to mailwrapper.

man -s1M <other-MTA>

Shows the "real" <other-MTA> manpage, with a reference to mailwrapper


Alternatively, you could take a hint from VFS/mount (via the intro(1M) manpage:

 Because of command restructuring for the Virtual File System
 architecture, there are several instances of multiple manual
 pages that begin  with  the  same  name.  For  example,  the
 mount, pages - mount(1M), mount_cachefs(1M), mount_hsfs(1M),
 mount_nfs(1M),  mount_tmpfs(1M), and mount_ufs(1M). In  each
 such case the first of the multiple pages describes the syn-
 tax and options of  the  generic  command,  that  is,  those
 options  applicable  to all FSTypes (file system types). The
 succeeding pages describe the functionality of  the  FSType-
 specific  modules  of the command. These pages list the com-
 mand followed by an underscore ( _ ) and the FSType to which
 they pertain. Note that the administrator should not attempt ...


  -John



mailwrapper (PSARC/2008/759)

2008-12-10 Thread John Plocher
> You're probably right.  If there are no objections, I can install the
> mailwrapper binary at /usr/lib/sendmail.
>
> What does that mean for the manpages, specifically wrt section numbers
> for mailwrapper?

Except...

This thing *isn't* sendmail, it is a placeholder/proxy for the hardcoded
exec() strings stuffed into other applications.

What should the man pages say?  This thing called sendmail isn't really
sendmail, it is a proxy for the real sendmail, which now lives over ->there->?
As it is now, the man page for mailwrapper does a pretty good job of
describing things.

Without the mailwrapper name (which is how the rest of the world identifies
this feature), isn't this just asking for confusion - yet another place where
OpenSolaris is arbitrarily different in a small way from the rest of the *nix
world?  Doesn't "being the same as other places" trump "but we could have
invented something better if we tried"?

  -John



mailwrapper (PSARC/2008/759)

2008-12-09 Thread John Plocher
Danek:
  What if i wish to use an alternative MTA, like postfix or exim?  Renaming or
removing the sendmail binary is the "wrong" way to do that, as is hardcoding
"sendmail" (or postfix or exim) into the various client programs.

JBeck:
 I *am* confused about mailwrapper being a symlink to sendmail, rather than
being something that would work even if sendmail was not installed at all,
which seems to be the whole point of having a mailwrapper-like abstraction
in the first place.  Or am I missing something obvious here?

  -John

On Tue, Dec 9, 2008 at 4:11 PM, Danek Duvall  wrote:
> On Tue, Dec 09, 2008 at 04:03:19PM -0800, John Beck wrote:
>
>>   The common BSD distributions (FreeBSD, NetBSD, OpenBSD) include a
>>   program called "mailwrapper" which allows for easy, packaging-safe
>>   selection of the shell-command level interface to the default system
>>   MTA.  /usr/sbin/sendmail is a symlink to the mailwrapper binary, and
>>   the "real" sendmail is installed somewhere else (on the *BSDs,
>>   this is /usr/libexec/sendmail/sendmail).
>
> What's the point of /usr/sbin/mailwrapper?  Why isn't it just installed as
> /usr/sbin/sendmail?
>
> Danek



findbugs [LSARC/2008/642 FastTrack timeout 10/27/2008]

2008-12-09 Thread John Plocher
On Tue, Dec 9, 2008 at 11:01 AM, Tom Childers  wrote:
>... then links may get changed and cause things to break.

> We asked the team to adopt the convention established for junit, LSARC/
> 2008/633, similar to /usr/lib:
>
>/usr/share/lib/java/junit.jar link to most recent version
>/usr/share/lib/java/junit-4.5.jar
>/usr/share/lib/java/junit-3.8.2.jar
> ...etc.
>
> However, if they place all the findbugs pieces, like
> annotations-1.3.4.jar, in /usr/share/lib/java, then we have the
> situation that multiple projects who require and deliver the same
> component can overwrite each other. annotations.jar could be changed
> to link to a different version, breaking the functionality of
> something that is already installed.


Shouldn't those consumers depend directly on the versioned item and
NOT on the convenience link?  That is, if I depend on junit, I should either

A) link to /usr/share/lib/java/junit.jar IFF the interface stability
I care about is Committed, or
B) link to /usr/share/lib/java/junit-4.5.jar if the interface
stability is less stable.

If I link to "junit.jar", but junit's stability is, say, Volatile,
then I have, as they say, just screwed up.  If junit (the convenience
link) evolves incompatibly out from under me, my application breaks
immediately.  Braap.
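The (A)-vs-(B) choice can be sketched with throwaway paths; the jar
names mirror the junit case, but everything here is created under
mktemp and is purely illustrative:

```shell
JAVADIR=$(mktemp -d)            # stands in for /usr/share/lib/java
touch "$JAVADIR/junit-3.8.2.jar" "$JAVADIR/junit-4.5.jar"
ln -s junit-4.5.jar "$JAVADIR/junit.jar"    # owner-maintained convenience link

# (A) trust the convenience link -- safe only if junit is Committed:
CLASSPATH_A="$JAVADIR/junit.jar"
# (B) pin the exact version -- immune to the link moving:
CLASSPATH_B="$JAVADIR/junit-4.5.jar"

# When the owning team retargets the link, (A) silently changes
# meaning, while (B) still resolves to the same bits:
rm "$JAVADIR/junit.jar"
ln -s junit-3.8.2.jar "$JAVADIR/junit.jar"
readlink "$JAVADIR/junit.jar"
```

The retarget is exactly the "evolves incompatibly out from under me"
case: nothing warns the (A) consumer that its classpath now means
something different.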

Danek Duvall wrote:
> Those links need each to be delivered exactly once on a system, by just one
> package.

+1

I'd add, that package is delivered by the team that "owns" junit, and they
get to decide, based on the promises they made when exporting the
junit interface stability, when and if the convenience link(s) should change.
If they promised "Committed", then they *must* do the diligence to ensure
that the new version of junit contains absolutely no incompatible changes.

If they promised Committed, then I should be able to depend on it being
Committed, and use of the convenience link is safe.  Otherwise, for all
of the perturbations of "otherwise", it isn't.

  -John



Alias for the PSARC chair?

2008-12-03 Thread John Plocher
On Wed, Dec 3, 2008 at 8:21 AM, Alan Coopersmith
 wrote:
> I don't think that information was ever copied out to the external
> site though.


It intentionally wasn't - they are internal Sun aliases, not general
opensolaris ones...  If we want something like it for OS.o, we should
spend some time figuring out what it is we really need - reusing Sun
internal aliases is probably not the best answer.

  -John



Add sparse file support to cpio [PSARC/2008/727 Self Review]

2008-11-24 Thread John Plocher
On Mon, Nov 24, 2008 at 3:00 PM, Don Cragun  wrote:
> But this uses ustar/pax format archives.  That won't satisfy the
> customer for that filed the escalation to get this fix.  They insist on
> a fix using cpio format archives.


With this fix, these archives will no longer be "cpio format
archives", but instead
they will be "Sun Proprietary CPIO format archives".

That is, the customer will not be able to extract "holey" files as holey files
on non-Sun systems, rendering this a proprietary solution.

Is there a plan to add this support to non-Sun archivers?

  -John



2008/532 NWAM Phase 1 - plocher followup from inception review

2008-10-31 Thread John Plocher
[Summary: I added the following to the case's issues file:

jmp0:   I have an uncomfortable lingering impression that NWAM
 equates to a completely disruptive change in how networking
 configuration is performed (was /etc/files, SMF services and
 related admin commands, now is NWAM cli/gui and
 NWAM-library-aware commands only).  This is problematic for
 two reasons:  It invalidates a lot of existing sysadmin
 knowledge, scripts and tools, and it means that the
networking system can't be extended without first extending
NWAM (i.e., we can't use vlans now because NWAM doesn't
support vlans yet, extrapolate for honeycomb and other
new technologies...)
]

James Carlson wrote:
> John Plocher writes:
>> [reduced audience]
> 
> It might be worthwhile to have these sorts of basic discussions on the
> project's mailing list.  If you're having doubts about it, then I
> suspect it's possible that others are having similar doubts.  Getting
> those things out in the open in one place -- and answered there --
> would be much more practical for everyone involved.

I was trying to be sensitive to all of Renee's time I've already
taken.  I just reposted the previous thread to the wider alias.

>> if (location change needed) {
>> save_current_config_as(current_location);
>> apply_new_config_from(new_location);
>> }
> 
> Sort of.  I think the right answer is that utilities that modify
> "saved" configuration write that data into the current location
> storage.
> 
> That removes the need for save_curent_config_as (and also eliminates
> any question of how temporary changes [e.g. ifconfig] factor in).

I think we are saying almost the same thing, but (again) I wasn't
clear :-(

Today, the "current location storage" is /etc/files, smf properties
and the like, and this project defines a "Repository" under /etc/nwam.

I was thinking of save_current_config_as() as the "smurf" function
that JBeck mumbled about, and not a dynamic probe for temporary
changes:

 save_current_config_as() {
 incorporate contents of /etc/hostname.*,
 /etc/resolv.conf, /etc/net/ipsec..., ...,
 into whatever persistent Repository
 is used by NWAM.
 }

I was explicitly not implying that tools should write directly into
the NWAM repository.


> A change in location will cause a loss of any temporary changes you've
> made.  That's exactly what we expect.

The difference may be in what you think of as a temp change.

Is it running ifconfig?  (probably)
How about editing /etc/resolv.conf (I'd hope not)
Creating a /etc/hostname.bge0 file (ditto)

Permanent in my mind is "if I do something, and then reboot,
will that something persist?"

Temporary is "no, it won't".

> As things stand now, if you edit /etc/ configuration files and you use
> NWAM, you're on your own.  You're not supposed to do that. 

This fails the "least surprise" test, and is IMHO a level0 issue
because it says that the traditional admin way of doing things is
now forbidden.  I'd rather see something that is evolutionary than
disruptive here.

> If you store something outside of a "location," and the location
> mechanism can overwrite it, then what you've done is temporary.

or there is a bug/hole in the architecture or design of Locations.

The Locations mechanism we have in this case seems to be creating a
new, parallel set of persistent admin config data storage that
conflicts with the existing ones - most of which are defacto committed
interfaces.

Speaking from experience, this is not an area where being disruptive
is a good thing.  Adoption and use of NWAM on servers will be a
non-starter if using it means admins have to start from scratch to
relearn network admin...


My simplistic /conceptual/ view of Locations started with something
simple (a directory like .../Locations/foo) and a list of the various
config files that needed to be managed.

Changing Locations would then entail
 copy/rdist/whatever the list of managed files from /etc to
 the .../Locations/foo subdir, then
 copy the stuff from .../Locations/new back out into /etc, then
 kick the various smf network services to tell them to restart.
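The steps above can be sketched directly; a "location" is just a
directory of saved copies of a known list of managed config files.
All paths here are stand-ins created under mktemp, not real /etc, and
the file list is an illustrative assumption, not NWAM's actual design:

```shell
ETC=$(mktemp -d)                # stands in for /etc
LOC=$(mktemp -d)                # stands in for .../Locations
MANAGED="resolv.conf hostname.bge0"

echo "nameserver 10.0.0.1" > "$ETC/resolv.conf"
echo "myhost"              > "$ETC/hostname.bge0"

save_location() {               # copy managed files from /etc into Locations/<name>
  mkdir -p "$LOC/$1"
  for f in $MANAGED; do
    [ -f "$ETC/$f" ] && cp "$ETC/$f" "$LOC/$1/$f"
  done
  return 0
}

apply_location() {              # copy Locations/<name> back out into /etc
  for f in $MANAGED; do
    [ -f "$LOC/$1/$f" ] && cp "$LOC/$1/$f" "$ETC/$f"
  done
  # ...then kick the affected SMF services, e.g.:
  # svcadm restart svc:/network/dns/client:default
  return 0
}

save_location office
echo "nameserver 192.168.1.1" > "$ETC/resolv.conf"   # local edit drifts
apply_location office                                # switch back: /etc restored
cat "$ETC/resolv.conf"
```

Note the admin's familiar files in /etc remain the authoritative copy;
the Locations directory only ever holds snapshots of them.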

Adding support for vlans (etc) would simply mean adding the list
of vlan config files and smf "kicks" to the list of things managed by
NWAM.  The "need to invent a GUI", "need to invent auto-triggers",
and other related complexity becomes a followon or parallel effort.
It seems easier to understand and build auto-whatever on top of a
solid "manual location switching" base than it is to boil the whole
ocean trying to do an interconnected ever

2008/532 NWAM Phase 1 - plocher followup from inception review

2008-10-31 Thread John Plocher
I followed up to the inception review with an offline
email to Erik, Jim and Renee that, in hindsight, may be
of interest to all.  The conversation is attached.

   -John
-- next part --
An embedded message was scrubbed...
From: John Plocher 
Subject: Re: PSARC/2008/532 NWAM
Date: Thu, 30 Oct 2008 14:50:55 -0700
Size: 5512
URL: 
<http://mail.opensolaris.org/pipermail/opensolaris-arc/attachments/20081031/388cb8da/attachment.nws>
-- next part --
An embedded message was scrubbed...
From: Erik Nordmark 
Subject: Re: PSARC/2008/532 NWAM
Date: Thu, 30 Oct 2008 15:26:59 -0700
Size: 9502
URL: 
<http://mail.opensolaris.org/pipermail/opensolaris-arc/attachments/20081031/388cb8da/attachment-0001.nws>
-- next part --
An embedded message was scrubbed...
From: James Carlson 
Subject: Re: PSARC/2008/532 NWAM
Date: Fri, 31 Oct 2008 12:10:08 -0400
Size: 11381
URL: 
<http://mail.opensolaris.org/pipermail/opensolaris-arc/attachments/20081031/388cb8da/attachment-0002.nws>


GNU binutils version 4.3.x [PSARC/2008/656 FastTrack timeout 10/30/2008]

2008-10-28 Thread John Plocher
Stefan Teleman wrote:
> Exactly -- and i'd like to integrate a recent version of binutils

I completely agree.

> We already have 
> a GCC (3.4.3) which uses binutils 2.15.

Which would not be harmed at all if you removed binutils 2.15 and 
replaced it with something newer.  The gas/binutils maintainers seem 
to feel that 2.19 is an upwardly compatible drop-in replacement for 
2.15, the ARC-approved stability for binutils 2.15 is "External", 
which allows this type of upgrade/replacement without too much effort,
etc etc etc.

The only issue is that binutils 2.15 is in /usr/sfw, which is the 
wrong place for binutils 2.19.  But, /usr/gnu/gcc4/... is also the 
wrong place.  The *right* place is in /bin and/or /usr/gnu/, just like 
the rest of the GNU stuff.


> This ARC Case [ and any subsequent, related ARC Cases ] does not [ do not ]
> address the existing GCC 3.4.3 and/or binutils 2.15. There have been no 
> change 
> requests for either the upgrade, update, or removal of either GCC 3.4.3, or 
> binutils 2.15.
> 
> Any change requests pertaining to either GCC 3.4.3 or binutils 2.15 must 
> follow 
> the standard operating procedure for submitting change requests. Future, 
> unspecified ARC Cases may address binutils 2.15 and/or GCC 3.4.3, pursuant to 
> the existence of relevant change requests.
> 
> Because of the consequences and complexity of updating/upgrading/removing 
> either 
> GCC 3.4.3, or binutils 2.15, comprehensive scrutiny and review of any such 
> change requests, addressing either GCC 3.4.3 or binutils 2.15, will apply. 
> Consensus buy-in from all the Consolidations currently using GCC 3.4.3 and 
> binutils 2.15 will be required.

What consequences and complexity?

You are proposing the inclusion of binutils 2.1[79], so the scope of 
the ARC case is properly "is this the right way to do this?".  As I 
said above, this proposal isn't the right way.  It would introduce a
wart into the system - a complete copy of binutils tied to a compiler 
version and inaccessible to the users of the system (or alternatively,
accessible, but in a very strange location).  Both of these violate 
1991/061's packaging and delivery advice.

   -John



LSARC Eclipse For Java Developers 2008-626 Case Materials - Checklist

2008-10-28 Thread John Plocher
Michael Kearney wrote:
> 1.0 Project Information
> Eclipse is one of the most popular open source Java IDEs.

text based next time please, not html.  The tools that parse the 
structured document don't render html first...

   -John



DERAILED Re: GNU binutils version 4.3.x [PSARC/2008/656 FastTrack timeout 10/30/2008]

2008-10-27 Thread John Plocher
 From my 1:1 offline discussion with Stefan:

Darren J Moffat wrote:
> I'm derailing this case on the grounds that it is not obvious and also 
> the volume of email traffic involved in trying go get clarifications.
> 
> The non obvious parts to me are the following:
> 
>   djm-0 Why we need a gcc4 subdir in /usr/gnu/

Because the submitter feels that there may be subtle
and/or unintended differences between binutils 2.15
(found in snv_99) and binutils 2.17 (used by gcc4 and
proposed here) that are impossible to determine without
performing a complete set of regression tests.

(2004/742 marks binutils 2.15/gcc3 as "External")

This is an issue in the submitters view because the
shipping gcc3 uses binutils 2.15, and is used to build
S10;  replacing binutils 2.15 out from under it and
replacing it with 2.17 would require some large but
unspecified regression testing of the gcc3-built S10
binaries.

(left unstated is why the combo of OpenSolaris, gcc3 and
binutils 2.whatever has any bearing at all on building the
S10 sources...)

> 
>   djm-1 What the connection between this case and gcc4 really is
>   I can't actually find any information on a dependency between
>   GCC 4.3.x on any version of GNU binutils.

gcc is built with a hardcoded full-path dependency on $AS and $LD.
These hardcoded executables are used both at source build time and
at delivered-binary runtime.

In order to support a system that has both "gcc3/binutils2.15" and
"gcc4/binutils2.17", there needs to be a way to have both versions
of binutils resident at the same time.  Thus the (imo poorly named)
/usr/gnu/gcc4/... directory which will contain binutils2.17.

> 
>   djm-2 Without seeing the GCC4 case I don't even understand why
>   this issue exists at all since the current GCC 3.4.3 on Solaris
>   uses /usr/sfw/bin/gas and /usr/bin/ld.

/usr/sfw/bin/gas is from binutils 2.15.  Upgrading it in place
to binutils 2.17 means that the existing gcc3 would (gasp!) use
a different assembler (...), which might or might not cause problems.

> 
>   djm-3 Wither or not binutils (modulo any bugs in a particular
>   version) are so unstable that we need to support multiple
>   versions and if this is only because of a desire to support
>   multiple versions of gcc.

I think you got it - this is to support multiple versions of gcc.

IMO, the stability (or not) of binutils has not been characterized
by the project team except to state that *any* change would require
a complete and time consuming regression test.

> 
>   djm-4 If the dependency between binutils and GCC4 is build time
>   only or architectural and if there are possibilities for a
>   workaround particularly given the following:
>   
>   http://www.gnu.org/software/gcc/faq.html#gas
> 
>   I also happen to know that it is possible to reconfigure
>   which assembler program gcc uses by giving full paths in the
>   specs file.  I haven't checked that with GCC 4.x but it
>   certainly still works with the 3.4.3 that we ship with Solaris.

There probably is some benefit in decoupling the "build gcc" 
dependencies from the "run the resulting compiler" ones, but
only if there are substantive differences between binutils.old and 
binutils.new.  Given the "External" stability classification
articulated in 2004/742, I don't believe we need to support multiple 
versions of the binutils 2.xx release cycle on the system.

> 
> I want to see the documentation that shows exactly what versions of GCC 
> and GNU binutils work together (I looked and couldn't find it).

I /believe/ they ALL are intended to work well together; the only
concern is regression testing to find subtle and/or unexpected bugs
to make the transition from gcc3/binutils2.15 to gcc3/binutils2.17
easier.

   -John (not the project team!)




GNU binutils version 4.3.x [PSARC/2008/656 FastTrack timeout 10/30/2008]

2008-10-24 Thread John Plocher
Stefan Teleman wrote:
 > Ian Lance Taylor wrote:
 >> and dozens and dozens more wrote...

Rather than playing verbal games, is there anything productive we can 
do here?  Ian seems to be saying that the GCC/G++ development team has 
indeed signed on to provide a stable-over-time ABI for their compilers.

This is *great* news, although it seems that it really isn't new.
Unfortunately, the G++ ABI is different than the SunStudio C++ one,
which means developers still can't reuse libraries across the two 
compilers.

What needs to be done going forward here?  Should we try to get the 
two compilers to align with a future ABI (or ...?), does this allow
us to recraft our compiler advice or rethink our stance on shared 
libs?  I don't know (and I don't think we'd agree on a single set of 
answers, at least not at first), but it seems to me that it would be 
better to think proactively and positively than to complain about 
spilled milk.

> Because:
> A B C D E ...
> f) i would like to keep everything as simple as possible. 

Myself, I think you are building unneeded complexity into "V4" on the 
uncertain presumption that the world will come to an end, 
compatibility-wise, with "V5".  "As simple as possible" doesn't seem 
to align well with "but I refuse to believe that the GNU developers 
value compatibility as much as I do"...


Stefan Teleman wrote:
 > Since you seem convinced of the opposite, please explain to PSARC
 > exactly how is  GCC going to find its assembler executable, at
 > run-time, after pkgadd, when the path to the assembler executable
 > was hardcoded at build time to:
 >
 > /builds2/steleman/ws/sfwnv-gcc4/proto/root_i386/usr/gcc4/bin/gas


Maybe explaining it to Stefan first might be better? :-)

The answer seems obvious - either there is a flag day for the 
gatekeepers or you need to add a workaround to the compiler.

Part of handling a flag day might require the installation of
a new binutils, which seems to be the solution you found, though you 
seem to be making it harder than usual - have you actually *asked*
one of the gatekeepers how they would recommend you handle this?

Another approach might be to add a way to override the hardcoded
path names via an environment variable or command line flag, allowing you
to deliver a compiler that #defines /bin/gas but (for the build 
process only) *uses* /builds2/steleman/.../bin/gas.
(BTW, this all seems to me to be a natural and normal way to
handle the bootstrapping of a new compiler - why is it coming
across as being so hard?)
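The override idea can be sketched in a few lines; AS_OVERRIDE and the
paths here are illustrative assumptions, not gcc's actual mechanism
(gcc itself offers a -B<prefix> flag for this kind of redirection):

```shell
# What gets compiled in as the default at build time:
DEFAULT_AS=/usr/bin/gas

# The compiler consults an override before falling back to the
# hardcoded default -- bootstrap builds export the proto-area path.
assembler_path() {
  echo "${AS_OVERRIDE:-$DEFAULT_AS}"
}

assembler_path                               # hardcoded default
AS_OVERRIDE=/tmp/proto/usr/gcc4/bin/gas
assembler_path                               # bootstrap override
unset AS_OVERRIDE
```

With this shape, the delivered compiler #defines /bin/gas, while the
build process alone points at the proto area, and no flag day is needed.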

   -John




Open Review: Changes to the ARC Fast Track and Full Review approval process

2008-10-22 Thread John Plocher
In an attempt to be proactive and clarify what the ARC expects
to happen in the rare situation where a case is submitted, but
nobody actually reviews it, PSARC developed the following
cross-ARC case approval process update.

Since it started as a potential Sun-internal staffing and
resource issue, this policy was developed as part of "closed
PSARC business".  During the last rev of the draft policy,
PSARC took steps to generalize the document so that it could
be openly discussed and become part of the open ARC community.

By intent, the process seeks to be disruptive in the face of
this failure; the alternative of silently allowing un-reviewed
projects to slip through the cracks is extremely undesirable.

Very few projects should fall under this process; in an ideal
world, it would never get used.

Comments and discussion from the OpenSolaris and Sun ARC
Communities are welcome, but note that follow-ups are to
the OpenSolaris arc-discuss list.

-John

Todo:
Publish the new quorum/approval process to arc-discuss at os.o
  and sac-review
Update the fasttrack handbook to include the specifics noted
  below
Update the Licensee handbook to enumerate the sponsor's
  new responsibilities



Changes to the ARC Fast Track and Full Review approval process
==

Context:

What we're trying to get at is the lack of quorum.
We've implicitly decided that fast-tracks can get by
with a reduced quorum, which is where the "+1"
criterion comes from in determining adequacy of
review.  But full cases still need a full quorum.

In response to several recent cases that were submitted, timed
out, and were approved *without* anyone actually looking at the
material, the ARCs have created a new ARC Case Status value,

 "closed denied not reviewed"

This new status value will be applied to Fast Track and Full
Cases that are submitted and have materials for review, but were
unable to attract the attention of any ARC member to actually
review it.

This will be measured by the absence of email discussion for a
Fast Track over an explicitly extended review period.  Full case
owners are responsible for ensuring a minimum of two members
actually reviewed the case.

An email/issue that simply affirms "I read the materials and
don't have any issues" satisfies this review intent.

Cases that are "closed denied not reviewed" can be reopened by
any ARC member who is willing and able to gather the
people-resources needed to actually perform the review.  Line
management may need to reconsider staff resource allocations in
order to provide review resources.

The appeal path is to simply allocate the needed review resources
and reopen the case.

Unless and until the case is reopened and successfully reviewed,
the "closed denied not reviewed" status is to be treated exactly
like "closed denied" - the project can not integrate.



Fasttrack handbook changes:

http://www.opensolaris.org/os/community/arc/arc-faq/arc-fasttrack-handbook/

7) The Proposal Is Finalized for the Project and the Case Is Closed

 At the end of the day assigned as the expiration date, the
 case sponsor needs to determine whether or not the case was
 actually reviewed by any ARC members or if there are any
 outstanding issues that would keep the case from being
 "closed approved".

 If the Case's timeout expires without any ARC members
 submitting email comments (including "+1" affirmations), the
 sponsor needs to extend the case timer by 1 week and to work
 explicitly with both the project team and the ARC membership
 to foster the required review engagement.  If, after doing
 this, the case times out again, it is "closed denied not
 reviewed".

 (If new review resources are found after closing a case this
 way, an ARC member who will be reviewing the case reopens it
 by setting the case's IAM:Status back to "waiting fast-track
 MM/DD/YYYY", and performing the usual review: submit
 email/issues, discuss during ARC Business, etc, after which
 the case is either approved or derailed into a Full Case.)

 Sun Internal note: If no resources can be found/allocated,
 the issue of ARC Staffing needs to be brought up at the
 appropriate VP and CTO levels.

 If there has been sufficient review activity, and no ARC
 member has derailed it, then, once any outstanding issues are
 resolved, the sponsor finalizes the proposal for the case by
 ensuring that an accurate description of the proposal that
 was agreed upon as a result of the email discussion exists in
 a single file in the case directory (usually named
 "proposal[.txt]"). The sponsor then announces the
 finalization and approval of the proposal to the project team
 and the ARC via email and officially closes the case by
 updating its status (in

Building a coherent and comprehensive Java development environment on OpenSolaris (was Re: findbugs [LSARC/2008/642 FastTrack timeout 10/27/2008])

2008-10-22 Thread John Plocher
Jyri Virkki wrote:
> But, I find Java apps/libraries to be a special case. Most Linux
> distros have historically not been very Java friendly. ...
> 
> Sun has a certain fondness of Java though, so it makes sense to make
> OpenSolaris the best platform for Java development & deployment.
> Making useful Java tools available in IPS helps that goal.

I agree with Jyri here (thus the Subject: change), but I'm concerned 
that we are not in fact building a coherent and comprehensive Java 
development environment on OpenSolaris as much as we are throwing a 
random heap of things together with little to no regard for the 
relationships between them or how they are actually used.

In other words, if we *do* end up with OpenSolaris being a good Java 
development environment, it will be by happy coincidence rather than 
by intentional design.

So, assuming that findbugs is intended to be a part of a larger "Java 
Friendly" environment, where is the doc that lays out the larger plan 
for doing this?  What other related commands/tools/libs are needed? 
What relationships exist between them? etc etc etc

   -John



Junit [LSARC/2008/633 FastTrack timeout 10/21/2008]

2008-10-21 Thread John Plocher
Jim Walker wrote:
>> If 1, why are you doing something that isn't well aligned with
>> the known use-case for the component, and if 2, how will you
>> actually do it?
> 
> Is Maven the "known use-case"?


Not really - the use case is that most every Java thing that uses 
junit depends on a different version of junit being there.  Some
go as far as bundling junit.jar with their own stuff just to make 
sure, but if we want people to use the one on OpenSolaris, we need to 
allow for the fact that people will want multiple versions.

Sorry for being too succinct.

   -John



Junit [LSARC/2008/633 FastTrack timeout 10/21/2008]

2008-10-21 Thread John Plocher
Mark Martin wrote:
> On Tue, Oct 21, 2008 at 3:08 PM, Jim Walker wrote:
> 
> In addition, OpenSolaris users are searching for
> various versions of Junit packages already:
> http://muskoka.sfbay/~sch/pkg/search.html
> 
> 
> Is that information openly exposable?  

The page is on an internal server, so no.  On the other hand,
some context might be useful - it is a list of ~10k search
terms collected over 6 months, with 30 (like top, nano, kde and 
tcpdump) getting a hundred to several hundreds of requests, ~350
getting between 10 and 99 requests, and the rest (~9500) getting
less than 10 requests each. (Much of this long tail seems to be
made up of spelling variations, wildcards and noise...)

junit in all its various spellings got 14, ranking it somewhere
around 250 on the list.

   -John



Integrate gdbm (gnu-dbm) into Solaris [PSARC/2008/645 FastTrack timeout 10/28/2008]

2008-10-21 Thread John Plocher
Dean Roehrich wrote:
> On Tue, Oct 21, 2008 at 01:46:19PM -0700, Mike Oliver wrote:
>> Peter Dennis wrote:
>> [...]
>>> Example man page in the case's materials directory.
>> Will man page(s) be delivered to the target system?  The Exported
>> Interfaces table makes no mention of them.
> 
> The manpages belong in the References section.
> 
> Or so I've been told,
> 
>   "Looks mostly good -- one nit: man pages aren't actually interfaces.
>   But we'll collectively pretend we didn't see that.  ;-}"
>   -- James Carlson, 13 June 2008, LSARC/2008/373.


Man pages are not exported interfaces, but they certainly import (or 
adhere to) the interfaces defined by the man page project 
(directories, formats, names...)

Having them enumerated in a pkg manifest is usually sufficient, 
especially if they are going into the "right places".  The project's 
spec needs to note what those "right places" are in this situation; 
the easy place to do this is one line in the interface table :-)

   -John





Junit [LSARC/2008/633 FastTrack timeout 10/21/2008]

2008-10-21 Thread John Plocher
Jim Walker wrote:
> We also understand the problem where several open source projects depend
> on older versions of Junit and don't plan to update their code to use
> the newer version. We felt it was best to start by porting the current
> version and revise it as new releases are made available. Then, look at
> adding additional older versions that are frequently used/requested.

There needs to be something about how you intend to handle these
newer versions.  The canonical choices are:

1) overwrite the old version with the new, thus only having one
installed at any given time, or

2) have some directory structure/pkg architecture to support the
unambiguous installation and use of multiple versions on a
single system simultaneously

If 1, why are you doing something that isn't well aligned with
the known use-case for the component, and if 2, how will you
actually do it?
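To make option 2 concrete, here is a minimal sketch of one way it could work; the paths and version numbers are hypothetical (the actual case materials may choose different locations), but the idea is that each junit release lands in its own versioned directory while an unversioned symlink names the default:

```shell
# Hypothetical versioned layout: per-release directories plus a
# "default" symlink, so consumers can either take the default or
# pin a specific version on their CLASSPATH.
root=$(mktemp -d)
base="$root/usr/share/lib/java"

# Two junit releases delivered side by side
mkdir -p "$base/junit-3.8.2" "$base/junit-4.5"
touch "$base/junit-3.8.2/junit.jar" "$base/junit-4.5/junit.jar"

# The unversioned name points at the current default release
ln -s junit-4.5 "$base/junit"

# A consumer needing the old release pins it explicitly;
# everyone else just uses the default symlink.
pinned="$base/junit-3.8.2/junit.jar"
default="$base/junit/junit.jar"
ls "$pinned" "$default"
```

With this shape, delivering a new version never overwrites an old one; only the default symlink moves.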

   -John





PSARC 2008/625 - Streamline ARC 20Q

2008-10-17 Thread John Plocher
James Carlson wrote:
> John Plocher writes:
>> they need help
> 
> It sounds to me like you might be expecting at least some teams
> (perhaps "many") to be fully conversant in all of the Best Practices.
> I'm not sure that's realistic.

We already expect project teams to be conversant in cstyle, compiler 
usage, webrev, hg, ddi/dki, posix, etc etc etc.  What is special about 
the things enumerated in the BP/P archives that makes them topics that 
teams aren't expected to be conversant in?  What can we do to make 
them more relevant?

I don't want to imply that it is all on the project team's plate - if 
there is ignorance or confusion, those of us who are more ARC-aware 
need to be accessible and willing to step in and help.

(BTW, this is why I keep pushing to get someone from *every* project 
team to be an ARC member - the more people that get the ARC disease, 
the better the quality of the projects we produce, and the easier 
their review becomes.)

So, yes, I do expect every team working on OpenSolaris to have a
reasonable amount of clue as to the impact of their project on the 
systems we all are building, including the content of the BP&P 
repository.  Maybe not "fully conversant", but certainly more than 
"completely naive and ignorant" as Scott's paraphrase implied.  If a 
team has not taken the time to figure out how their project will 
impact the system they are modifying, they have no business 
integrating into it*.

The failure mode that I fear is that nobody bothers to answer the "do 
you know what the ARC BPs and Policies are and how they apply to your 
project?" question, the integration-quality of our OS goes to hell and 
the ARC job becomes one of trying to reverse engineer and remediate 
every project.  Uugh.

   -John

[*] As I reread this, it seems as if I'm trying to make it harder for 
everyone to do simple things.  As the FOSS Checklist discussion 
indicated, there really needs to be something for the multitudes of 
"we just want this to be available on OpenSolaris" porting projects so 
that we don't unwittingly force them into an inappropriate integration 
scenario.  Some things can and should be able to ignore BP&Ps, and we 
owe it to these project teams to help them understand which ones and 
when they should.  If we did this right, Scott would instead get a 
response like "Yes, we fit into exception pattern A-2".



PSARC 2008/625 - Streamline ARC 20Q

2008-10-17 Thread John Plocher
Scott Rotondo wrote:
> but you still have to spot-check the answer.

Whose "job" should it be to do the spot checking?

The ARC?  The C-Team?  The Project Manager?  The PAC?  Someone's 
Management Chain?  You?  Me?  The Project Team?

Even harder, what should be done if/when such checking uncovers a 
lapse or disconnect?  The whole question of "who enforces it?" falls 
apart if/when project teams take themselves out of the loop of 
responsibility.  This whole tower of cards we are building (aka the 
distributed and collaborative development of the OpenSolaris operating 
system) is predicated on everyone in the development chain playing by 
the rules.

In my book, a project team that utters that sequence of phrases is 
really telling the world that they need help, that they are not yet 
able to do the job being asked of them - which is to develop software 
for a world class operating system.  We need to find better ways of 
working with them to help enable them to do better.

   -John



PSARC 2008/625 - Streamline ARC 20Q

2008-10-10 Thread John Plocher
Bart Smaalders wrote:
> John Plocher wrote:
>> Garrett D'Amore wrote:
>>> This should be in a document tree that is referenced by this 20Q.
>>
>> Like the ARC Policy and Best Practices repository?
>>
>> Maybe an explicit reference would be good: "Please provide detailed
>> rationale for any ARC Policies and/or BPs you don't adhere to".
>>
>>
>>  -John
> 
> So where are all the ARC policies and BP clearly spelled out, such
> that someone could evaluate their project's compliance?

http://www.opensolaris.org/os/community/arc/policies/
and
http://www.opensolaris.org/os/community/arc/bestpractices/

> 
> Telling someone to read the last 17 years of ARC cases before submitting
> a case is not very practical, and does nothing to simplify ARC
> processes. Attempting to use the Socratic method to tease out potential
> areas of non-compliance via the 20q obviously doesn't work very
> well either.

I agree, which is why we have put some effort into asking teams that
produce systems that generate these rules and requirements to write
them up in a form that is more easily usable.

> 
> If the ARC(s) were to modify their procedures such that all policies
> had to be enumerated in a single living & versioned document, 

We've had the policy and bp repository for almost a decade now...

  -John




PSARC 2008/625 - Streamline ARC 20Q

2008-10-10 Thread John Plocher
Garrett D'Amore wrote:
> This should be in a document tree that is referenced by this 20Q.

Like the ARC Policy and Best Practices repository?

Maybe an explicit reference would be good: "Please provide detailed
rationale for any ARC Policies and/or BPs you don't adhere to".


  -John



PSARC 2008/625 - Streamline ARC 20Q

2008-10-10 Thread John Plocher
Edward Pilatowicz wrote:
> ... a bunch of good stuff, including:

> hence, i think it would be beneficial to have projects answer the
> following questions wrt zones integration:
> 
> * is the functionality delivered by this project accessible, by default,
>   directly from within non-global zones?
> 
> * is any configuration, state, and/or statistics used by this project
>   maintained on a per-zone basis?  Do any tools for accessing this
>   information allow for per-zone views of this data?
> 
> * does any functionality delivered by this project expose information
>   about the global zone (or other non-global zones) to a non-global zone?



Might I suggest that, instead of adding bulk back to the 20Qs,
you or the zones team come up with some sort of "design
pattern for Zones", "zones howto/bestpractice" or similar
that encompasses both these sorts of questions as well as
the "answers" that are expected to go along with them?

For instance, what does it mean (in terms of what I need to 
do in my project) to be "accessible, by default, directly 
from within non-global zones"?  etc etc etc

  -John




PSARC 2008/549 Apache Standard C++ Library [ updated addendum 1 ]

2008-10-09 Thread John Plocher
Garrett D'Amore wrote:
> PS: As an aside, I think the revised section 4.6 offers no meaningful 
> value to the case, and I probably would have just nuked it.


I see 4.6 as saying that 4.1-4.5 are now official ARC statements of intent.

  -John



DTRACE JNI reference

2008-10-07 Thread John Plocher
Ref: using dtrace library/java interface...

See PSARC/2006/054 - DTrace JNI Binding

In particular, the mail log/spec:

=

A. Introduction

The DTrace framework (PSARC 2001/466) allows Solaris users to instrument
the system in literally hundreds of thousands of places, and trace virtually
any data on the system. It has been used with tremendous success both
internally and at customer sites, but an inhibitor to adoption for some
potential users has been the lack of a GUI. This PSARC case proposes the
introduction of JNI bindings for DTrace that will allow for GUIs for DTrace
to be written in Java.

This case depends on the following PSARC case:
  2006/053 org.opensolaris.os: Public Java classes for Solaris
The paths listed below adhere to the current state of that case and will
be updated according to the final version of that case.


B. libdtrace_jni interfaces

This case proposes to introduce a set of Java APIs in the
org.opensolaris.os.dtrace package and a JNI library to
support those APIs. Those APIs constitute the extent of external interface.
For ease of viewing, the javadoc has been included in the case directory
in a directory named 'javadoc' in both html and pdf formats.


C. Stability

  ________________________________________________________________
  | Interface                          | Classification  | Binding |
  |____________________________________|_________________|_________|
  | org.opensolaris.os.dtrace package  | Evolving        | Patch   |
  | /usr/lib/java/dtrace.jar           | Evolving        | Patch   |
  | /usr/lib/java/javadoc/dtrace       | Evolving        | Patch   |
  |                                    |                 |         |
  | /usr/lib/libdtrace_jni.so          | Project Private | Patch   |
  | /usr/lib/libdtrace_jni.so.1        | Project Private | Patch   |
  |____________________________________|_________________|_________|






Solaris host-based firewall [PSARC/2008/580 FastTrack]

2008-10-02 Thread John Plocher
Tony Nguyen wrote:
> I agree that ipf command features can't be replaced by the current SMF. 
> In this specific example, the three ipf commands can be replaced by a 
> single svcadm restart command so it was really tempting :)


It may be useful to use this as an example:

   ... BTW, the svcadm interface invokes the following commands,
   which illustrate the use of ipf...


   -John



EOF of PostgreSQL 8.2 in Solaris [LSARC/2008/616 timeout 10/08/2008]

2008-10-02 Thread John Plocher
James Gates wrote:
> There are 3 popular commands in /usr/postgres/8.2/bin that a user would 
> call to create, start or connect to a PostgreSQL database.
> 
> I suppose when we remove the 8.2 packages from Nevada, we could add 3 
> scripts to /usr/postgres/8.2/bin in the 8.3 packages that print the 
> above message. And when we EOF 8.3, add similar scripts to 8.4, etc.

Maybe just 1 script, with symlinks :-)
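A minimal sketch of that idea (the command names and message wording here are illustrative, not the actual 8.2 deliverables): one notice script keys its output off the name it was invoked as, and each retired command is just a symlink to it.

```shell
# One shared EOF-notice script; each retired command name is a
# symlink to it, and the message is keyed off $0.
dir=$(mktemp -d)
cd "$dir"

cat > eof-notice <<'EOF'
#!/bin/sh
cmd=$(basename "$0")
echo "$cmd: PostgreSQL 8.2 has been removed from this release." >&2
echo "Use the version in /usr/postgres/8.3/bin instead." >&2
exit 1
EOF
chmod +x eof-notice

# Symlink every retired command to the one script
for name in psql pg_ctl initdb; do
    ln -s eof-notice "$name"
done

./psql 2>&1 | head -1   # -> "psql: PostgreSQL 8.2 has been removed ..."
```

EOF-ing the next release then only means updating the symlink list, not writing new scripts.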

  -John





PSARC 2008/549 Re: Apache Standard C++ Library ARC Case

2008-10-02 Thread John Plocher
Looks like mucho lotso progress has been made.  It looks pretty good,
but I still have a couple of questions:

   o This case obsoletes STLport4 (section 4.1).  How will existing
 users of STLPort4 find out that we did this?  ("#warnings in
 headers"...?)

   o This case starts libC on its own path towards Obsolescence (4.3, 4.4)
 Same comments apply as above, but not as urgently.

   o What are the actual dependencies between this project and the studio12
 compilers?

   o Who is signed up to track the Apache stdcxx project and update the
 bits in SFW over time, and how do you expect those bits to evolve?
 (picky interface taxonomy details needed, along with a bigger picture
 of who gets a chair when the music stops - erm, I mean, what happens
 when Apache comes out with an incompatible change or does a Major 
 Version 5 or 6 ...)

and clarifications:

>> Stefan Teleman wrote:
>> 3.2.The SFW Consolidation will provide pkg-config [ *.pc ]
>> files for the Apache Standard C++ Library.  These files will encode
>> the correct Sun Studio 12 command-line switches for:...

>> 3.3.The SFW Consolidation will provide a default UNIX man
>> page... will detail the mechanics of ...
>> 3.4.The DevPro/Tools Group will provide integrated command-line
>> support...
>> 3.5. ...  This document does not attempt to address the interfaces
>> to be provided...

How can you create ".pc" files that encode the correct flags without
first knowing what the flags (interfaces) are?  See my dependency 
question above...
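For context, a pkg-config file like the one 3.2 promises might look roughly as follows; every name and value here is a placeholder, which is exactly the point - the Cflags line cannot be filled in until the Studio 12 switches (the dependency questioned above) are settled:

```
# Hypothetical stdcxx.pc -- names, version, and flags are placeholders.
prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include/stdcxx4

Name: stdcxx
Description: Apache Standard C++ Library
Version: 4.2.1
Cflags: -I${includedir} <correct Sun Studio 12 C++ switches go here>
Libs: -L${libdir} -lstdcxx
```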

>> 4.   Future Directions, and Recommendations for Solaris Developers
...
>> 4.5. The contents of this document do not establish ARC Precedent.

This conflicts with the statements in 4.1 thru 4.4 - if you expect any of them
to be followed, you *need* a precedent.

This is where I would like to see buy-in from the compiler team.

   -John



[sparks-discuss] libldap:ber_printf() 'O' in format string [PSARC/2008/607 FastTrack timeout 10/01/2008]

2008-09-30 Thread John Plocher
Nicolas Williams wrote:
> For some reason said file does not appear on the opensolaris.org page
> for this case.  Can someone tell me how to make it appear there?

I'm working on it.  I think it has to do with the embedded 
quotes in the IAM filename...

> In the meantime I've attached it to this post.

The man page description seems odd and inconsistent.

It says
The format string can contain the following 
characters: -b ... -n ... -o ... O ...

(note the leading hyphens in front of everything
except the new 'O' character)

The problem is that it doesn't.  The example shows

ber_printf( ber, "{sb{v}}", ...)

no "-" signs at all, anywhere.  The man page should instead
present the characters in the list as quoted characters:

 'O' Octet string. A struct berval * is supplied.  +
 An octet string element is output.+

(it is probably out of scope to complain about the misuse of
the printf() design pattern this way - printf uses '%' as
a formatting character escape/identifier and passes thru
everything else unchanged.  ber_printf() takes *every*
character in its format string as a formatting command,
and errors out on unexpected commands.)

  -John





pconsole - parallel console [PSARC/2008/606 FastTrack timeout 09/30/2008]

2008-09-24 Thread John Plocher
Tim Haley wrote:
> John Plocher wrote:
>> achut reddy wrote:
>>> pconsole is intended to replace cconsole, although both will
>>> coexist for a time. 
>>
>>
>> The larger context is useful, thanks!
>>
>> It sounds like this case could easily be re-spec'd to
>>
>> Make cconsole/ctelnet/... Obsolete
>>
> That would mean what exactly?  Don't count on these commands being around 
> forever?  All I'm getting at is, does this force any behavior regarding 
> cconsole and friends for the project team at this time?

It means "New projects should not consume this interface".
Combined with the pconsole case, the message is "use pconsole 
instead". (duh :-)

A followon case is needed to do the "remove cconsole etc from
the consolidation" bit.


> 
>> Introduce pconsole as a functional replacement
>>
>> Specify the interface taxonomy details of the new pconsole command
>> (including any "Volatile/Private, keep off the grass" aspects)
>>
> I declared this 'Uncommitted' in keeping with what I've observed of most 
> if not all of the open source ports I've been seeing coming in.  That's 
> only in the man page, I should probably add it to the interfaces section.


OK by me.

  -John



pconsole - parallel console [PSARC/2008/606 FastTrack timeout 09/30/2008]

2008-09-24 Thread John Plocher
achut reddy wrote:
> pconsole is intended to replace cconsole, 
> although both will
> coexist for a time. 


The larger context is useful, thanks!

It sounds like this case could easily be re-spec'd to

Make cconsole/ctelnet/... Obsolete

Introduce pconsole as a functional replacement

Specify the interface taxonomy details of the new pconsole command
(including any "Volatile/Private, keep off the grass" aspects)

  -John



pconsole - parallel console [PSARC/2008/606 FastTrack timeout 09/30/2008]

2008-09-24 Thread John Plocher
As one of the motley crewe* who produced ctelnet and friends for the
SPARCcluster1 in 1993-ish, I had the same questions :-)

cconsole/ctelnet/crsh... don't have the "main window commands"
that pconsole does, but they *did* have an understanding of
a cluster nameservice (files, nis...) that expanded a cluster
name into a list of hosts:

ctelnet clustername
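As a rough illustration of that expansion (the real cconsole/ctelnet map format and nameservice plumbing may well have differed), a files-backed version could be as simple as:

```shell
# Hypothetical /etc/clusters-style map: cluster name, then member hosts.
dir=$(mktemp -d)
cat > "$dir/clusters" <<'EOF'
web  host1 host2 host3
db   host4 host5
EOF

# Expand a cluster name into its member hosts, one per line
expand_cluster() {
    awk -v c="$1" '$1 == c { for (i = 2; i <= NF; i++) print $i }' \
        "$dir/clusters"
}

expand_cluster web   # -> host1, host2, host3, one per line
```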

Without looking at the sources, I would guess that there is a 
common ancestor (or at least meme) out there - it all sounds way
too familiar!

  -John

[*] IIRC, Larry McVoy hacked up the original...



Dean Roehrich wrote:
> On Tue, Sep 23, 2008 at 10:45:38PM -0600, Tim Haley wrote:
>> 4. Technical Description
>>
>> pconsole - parallel console.
>> Allows user to create multiple shell console windows;
>> typically one for each node of the cluster.
>>
>> There is an existing open source implementation of pconsole here:
>>
>>  http://www.heiho.net/pconsole/
> 
> What is the relationship between this and the 'ctelnet' that is
> discussed at http://ssc.west.sun.com/Internals/sysadmin_brc.html ?
> 
> They sound identical.
> 
> Dean
> ___
> opensolaris-arc mailing list
> opensolaris-arc at opensolaris.org




Include GNU awk 3.1.5 [PSARC/2008/594 FastTrack timeout 09/26/2008]

2008-09-20 Thread John Plocher
Garrett D'Amore wrote:
> I fully agree with your statements below. 

+2

Thanks,
  -John



Include GNU awk 3.1.5 [PSARC/2008/594 FastTrack timeout 09/26/2008]

2008-09-19 Thread John Plocher
Garrett D'Amore wrote:
> Don Cragun wrote:
>>> Date: Fri, 19 Sep 2008 14:01:25 -0700
>>> From: "Garrett D'Amore" 
>>>
>>> Shouldn't the Human Readable Output really be Not-An-Interface?

In trying to understand the ARC stability classifications, please
try and remember what we are trying to do here.  By labeling
something as "Committed", "Volatile" or "Not-An-Interface", we
are setting customer's and user's expectations about how we might
change those interfaces.

When talking about a filter like awk or tr or even cat, the spec
for that program is pretty clear about the transform applied
by the program.  Input + program = output, and that output is
governed by the filter's spec.  In this case, with gawk, that
transform is probably about as "Committed" as one can get.

On the other hand, there are error messages, help screens and
the like that are intended to be human readable.  Are those things
interfaces?  If not, then they are "Not An Interface".  If so,
(e.g., we expect people to parse the output of gawk --help) then
they are no longer simply "human readable output", but are
instead programming interfaces that may or may not be useful
in a localized world.  TANSTAAFL.

(I am recalling a packaging/installer conversation where someone
is parsing the output of "foo --help" to check for the presence
or absence of a particular command line flag because the consumer
needs to run with both old and new versions...)
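A runnable mock of that fragile pattern ("foo" and its flag are invented for illustration): the consumer feature-detects by grepping help text, quietly turning "human readable output" into a parsed interface that breaks the moment the help text is reworded or localized.

```shell
dir=$(mktemp -d)
cd "$dir"

# A mock "foo" whose --help output happens to mention a flag
cat > foo <<'EOF'
#!/bin/sh
[ "$1" = "--help" ] && { echo "usage: foo [--new-flag]"; exit 0; }
echo "ran with: $*"
EOF
chmod +x foo

# The consumer's probe: works only as long as nobody touches the
# "human readable" help text.
if ./foo --help | grep -q -- '--new-flag'; then
    ./foo --new-flag
fi
```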

So, project team, what things are you trying to identify under
the heading of "Human Readable Output", how do you expect that
output to be used, and what is the proper interface stability
for it?

  -John




PSARC 2008/406 Objective Caml System and LablGTK

2008-09-18 Thread John Plocher
Frank Che wrote:
> This project need to integrate a few static library files (*.a). 

In general, the library policy and the patch/update architecture supported
by the linker require the use of shared objects and not static libs.

Please provide more info as to why you must provide .a files instead of 
following the library best practice/policies.  Without an obvious "duh",
this is probably reason enough for this case to be derailed.

http://www.opensolaris.org/os/community/arc/policies/libraries/

  -John



PSARC 2008/579 virt-convert

2008-09-16 Thread John Plocher
John Levon wrote:
>> To import a VMware VM, you would run virt-convert which sets up some 
>> things and then calls vdiskadm to import the file into xVM.
> 
> virt-convert does the vdiskadm for you. You would use virsh define
> --relative-path=... or something similar to import the actual VM


Just for xVM/Hypervisor or for xVM/VirtualBox as well?

  -John



Integrate ngrep into Solaris [PSARC/2008/562 FastTrack timeout 09/11/2008]

2008-09-10 Thread John Plocher
Garrett D'Amore wrote:
> I doubt anyone cares enough to reopen the case.  The details of this 
> issue (particularly whether the rbac is delivered as a separate root 
> package or as part of a stock ON package) IMO fall below the threshold 
> of ARC review.

Specifically, it is up to Brian and his C-Team to decide what path to
follow here; the ARC was OK with all the alternatives that were presented
and was willing to let the project team make the final decision.

   -John




PSARC/2008/549 - Apache Standard C++ Library

2008-09-06 Thread John Plocher
Stefan Teleman wrote:
> Steve Clamage and myself had reached an agreement on this, a couple of days 
> ago:
> 
> 1. We [ SFW/KDE ] were going to introduce this library in Nevada/OpenSolaris 
> -- 
> it's the easiest integration, and it's the one we care most about, because it 
> can be done *now*.
> 2. They [ DevPro/Tools ] were going to introduce this library with the 
> compiler(s), for Solaris 9/10, at some point in the future. I cannot say when 
> that will be.
> 
> Is this agreement still valid ? 

I hope so.  Even if this case is derailed, derailing does not
invalidate anything.

I would like to see in one place (i.e., NEED SPEC) the expected
transition/migration path (including who does what, where it will
live, and what names it will be known by) for the whole journey
from today (Sun shipping old/other stuff) to tomorrow (what this
case delivers) ending up at the day after (when the compiler team
delivers it).

This is so we can validate the deployed app experience - will
things actually work while we do the transition?

Most of this has been mentioned in this email thread, so I
do not think I'm asking for anything difficult.

   -John




PSARC/2008/549 - Apache Standard C++ Library

2008-09-06 Thread John Plocher
Garrett D'Amore wrote:
> Will the implementation take care to preserve the *existing* functions 
> so that the above inline (which will now have been compiled into various 
> applications) will continue to function, regardless of what other 
> changes may be necessary for the class?

The materials Stefan has submitted state that the Apache C++StdLib project
has made the commitment to not muck up these types of implementation details
in any but one of their major releases.

That is, even though the incompatibly evolving C++ language definitions and
associated rickety scaffolding may allow one to shoot themselves in the foot,
and their code could be evolved in ways that would expose such runtime
binary incompatibilities, they explicitly promise not to do so within a
certain set of release naming boundaries.

What more can the ARC ask?  This is exactly what we do with goode olde libc.
The only difference is that the C++ world allows for (and oddly, seems rather
comfortable with) incompatible future versions.

At that point, the Apache Standard C++ Library would need to rev its HNAME
and .so versioning info, but again, this is exactly what we did with libc.

My only disconnect here was our left and right hands not talking to each other,
and now that Stefan and Steve are exchanging email, I'm not even very worried
about that - though I'm sure someone will let me know if/when I should start
worrying again :-)

   -John





PSARC/2008/549 - Apache Standard C++ Library

2008-09-05 Thread John Plocher
Garrett D'Amore wrote (slightly more verbosely and parenthetically):
> C++ programs linked against this library are incompatible
> with libC that ships with Solaris today.

Where does this claim come from?  Are you confusing libstdcxx
with libC?

The proposal only says:

  The Apache/RogueWave Standard C++ Library is not binary compatible
  with the Sun Standard C++ Library [ libCstd.so.1 ], or with the
  STLport Standard C++ Library.

Nowhere does it mention the C++ Runtime support library, libC.

-John




PSARC/2008/549 - Apache Standard C++ Library

2008-09-05 Thread John Plocher
Garrett D'Amore wrote:
> If KDE has a way to shield applications underneath from such a binary 
> breakage, then its a different story altogether.  

It easily can do that.  The key is to realize that "binary compatibility"
is not a forever thing, or a global thing, but lasts only until a Major
release of a consolidation is released as part of a larger product.

If KDE were to make this C++ library a standard part of its consolidation,
then KDE could promise to never change it incompatibly unless KDE itself
were to produce a Major release of KDE.

In the same way as we would not expect or require absolute binary
compatibility between KDE.old clients and KDE.major.new servers,
we would not require absolute binary compatibility from its C++
library.

> To put this in comparison, imagine if almost all of the standard C 
> functions were simply *macros* rather than functions, and the macros 
> made references to volatile innards of the C library. 

If we did this, our promise would have to be "binary compatibility
guaranteed only until we changed things incompatibly", at which point
we would have to up the major version number of ON (and thus SunOS
and Solaris because of the transitive law of maximal inconvenience...)

For stuff like libc and core ON, absolute compatibility over eons is
a no brainer; elsewhere it may easily have less (or even no) value.

All this proves is that binary compatibility isn't a global constant.

-John



PSARC/2008/549 - Apache Standard C++ Library

2008-09-05 Thread John Plocher
Garrett D'Amore wrote:
> One of the implications of such a binding (Volatile), is that projects 
> which build other C++ shared libraries upon this one cannot have a 
> commitment level higher than Volatile either.

Braap.  Bad Architecture Alert.  The whole reason we provide abstractions
like consolidations and components is precisely so that we can provide
"higher than Volatile" expectations for things that theselves may exhibit
"less than Volatile" stability.

There is no reason this couldn't be made a part of the KDE consolidation,
and maintained by them as Committed interfaces for use by any KDE consumers
who need it.  Volatile means "can change", not "will change", and both the
Apache C++ Lib and the KDE projects certainly seem to meet the basic ARC
expectations of managing the compatible evolution of their component.

If the C++ basis that KDE builds upon were to change incompatibly, I'd
expect KDE to react by producing a major release - again, just like the
ARC would expect.

Nothing here requires KDE to be Volatile.

   -John



PSARC/2008/549 - Apache Standard C++ Library

2008-09-05 Thread John Plocher
John Fischer wrote:
> 1. modify the case to address the concerns
> 2. schedule a meeting with additional materials addressing
>the concerns

2a. schedule an information-only meeting to discuss the case AS IT
IS TODAY and its implications, with the intent of having a followup
meeting where a revised/expanded spec is provided.

IMO, it is critical that SteveC from the compiler team be at whatever
PSARC meeting is held.

> 3. withdraw the case

This last, IMO, would be a very bad choice.

   -John



PSARC/2008/549 - Apache Standard C++ Library

2008-09-04 Thread John Plocher
Garrett D'Amore wrote:
> I don't see Gnu C++ mentioned explicitly,

It mentions C++ ABI - something that Studio has and g++ does not.
From an ARC perspective, that is all that really matters.

Note the dates on the document - 1993-ish.  One reason it is not
on OS.o is that it needs updating, which is not a trivial job.

> Right now, it doesn't seem like any of our binary compatibility 
> guarantees apply to any dynamically linked C++ code,

Rather, dynamically linked C++ libraries that are NOT ABI Compliant.
Studio C++ does generate ABI Compliant code, g++ does not.

> and this case (as 
> proposed) proposes to set new precedent here.

What would that be?

   -John



PSARC/2008/549 - Apache Standard C++ Library

2008-09-04 Thread John Plocher
Garrett D'Amore wrote:
> John Plocher wrote:
>> Nicolas Williams wrote:
>>> It would be
>>> nice if we could avoid revisiting this every time a project comes along
>>> to integrate some C++ library.
>>
>> We already have a stake in this sandbox:
>>
>>Don't use g++ to build/deliver C++ libraries on *Solaris,
>>period.  Use Studio's C++ instead.
> 
> Oh, cool!  Do you happen to know offhand what the case number for that 
> opinion would be?

LSARC 1993/550 C++ ABI
LSARC 1992/026 tools.h++
PSARC 1993/071 Bundling libC.so with Solaris

BestPractices/ToDo/20q.C++Guidelines.html
has a dated but still valid take on this topic.
(Sorry, not out on OS.o, included here for now...)

-

Software -> C++ Guidelines

-


 Background


   ARC cases for C++
   LSARC 1993/550 C++ ABI


 This case has a specification and an opinion, but the opinion is not yet
 finalized (at this writing). The C++ Object Binary Interface (OBI) has been
 split out of this case.


   LSARC 1992/026 tools.h++
   PSARC 1993/071 Bundling libC.so with Solaris


-


   Advice


LSARC Guidelines for Products Using C++ Language




  Revision: 1.0.3 of July 22nd, 1994
    
    
  Drafted by:  Dean Stanton, Evan Adams, Don Woods
  HTML conversion: John Plocher




   1.0 Problem Description


  LSARC found it necessary to provide guidance to Sun software developers
  who are considering using the C++ language. Indeed, many Sun groups are
  already developing in C++, and encountering compatibility problems.


  C++ has become a widely-used implementation and interface specification
  vehicle. And yet the C++ language lacks a stable ABI (Application Binary
  Interface) standard; no two compiler releases (from SunPro or from other
  vendors) necessarily have compatible binary forms for binary layout of
  objects or function calling sequences. Hence, the calling sequences
  generated by one compiler are not guaranteed to be compatible with those
  generated by another brand of compiler or another major release of
  compiler from the same provider. And an interface implemented by one
  compiler may not be usable (or may not function correctly) when invoked
  by compatible source code compiled by another compiler.


  An additional binary-compatibility problem plagues software libraries
  (whether statically linked, dynamically linked, or dlopen-ed): binary
  object layout is affected not only by the specified description of the
  object, but also by the private data included in the class (in the header
  file used at compile time). A library interface specification should be
  independent of its implementation, so that a revised implementation may
  be substituted (say, in a later library software release asynchronous to
  the application), and still function correctly. Most developers address
  this now by wrapping an interface-only class around the implementation
  class. This problem may eventually be solved by the C++ Object Binary
  Interface (OBI) project [Reference 1]. Alan Sloane ca

PSARC/2008/549 - Apache Standard C++ Library

2008-09-04 Thread John Plocher
Nicolas Williams wrote:
> It would be
> nice if we could avoid revisiting this every time a project comes along
> to integrate some C++ library.

We already have a stake in this sandbox:

Don't use g++ to build/deliver C++ libraries on *Solaris,
period.  Use Studio's C++ instead.

We made that choice for all the reasons discussed here, and
(IMO) this is not the time or place to revisit it.

   -John




PSARC/2008/549 - Apache Standard C++ Library

2008-09-03 Thread John Plocher
Garrett D'Amore wrote:
> I thought John officially derailed it.
> 
> At this point, if John doesn't derail it, then I will -- if only so that 
> we can offer some much needed advice.


My afternoon has been pretty hectic, and I am just now
following up on the dozen or so things I should have
gotten done earlier...

After some more thought, derailing the case is premature;
stopping the clock and keeping it from auto-approving before
the compiler team's perspective is injected into the
conversation is what is important.

So, the case is in "waiting need spec" state, waiting for
Steve and Stefan (and whomever else feels the spirit moving
in them) to mind-meld...

I would like to hear back from Stefan and Steve on their
discussion; in particular, if they still have a large
disconnect.  To Bart's comment, after the mind meld, I
would like Steve to come up with a more ARC-actionable
list of concerns if he still feels that this case should
derail into a full review.

   -John





PSARC/2008/549 - Apache Standard C++ Library

2008-09-03 Thread John Plocher
John Plocher wrote on the 26th of September:
> I'm really disappointed that the compiler team hasn't jumped in to this
> discussion.  Have they been contacted and invited?

I just got the following from Steve Clamage, one of the lead engineers
on the compiler team:

> I was only today made aware of this PSARC case, and I would like to
> derail the fast-track if it is not too late. I knew the topic was
> being discussed, and contributed negative comments to those who
> mentioned it to me. In short, I think the proposal is counter-productive,
> makes false statements of fact, and will not (and cannot) have the
> good effects it claims.

Please consider this fast-track case derailed.  The next step is for Stefan
and Steve to get together [somewhere not on the PSARC aliases] and discuss
these issues, and to then come back with their resolutions and/or an updated
proposal.

   -John




