On Monday 11 December 2006 12:53 pm, Sarah Jelinek wrote:
> Certainly, software is software, in the end, but it comes in many forms
> which require different installation techniques. The Caiman installer,
> as with the current Solaris installer, doesn't really do software
> configuration.

IMO, the packages should do minimal configuration so that the user doesn't
need to jump through a bunch of hoops to get things running. For instance,
there could be scripts that are run during the package install that would
make it easier to get a database up and going. Oracle always had a test ID
installed that was user: Scott, pw: Tiger. Many folks at Oracle are not
sure what that is, but it's been that way for a long time. We don't have
postgres configured to be usable, nor do we with mysql, without first
setting up the basics. Packages should be responsible for doing a better
job at this type of work. This is not something the installer can do by
itself; the packages need to change as well.
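As a rough illustration of the kind of thing I mean (a sketch only; the
package paths, account, and service name are all made up, and a real SVR4
postinstall step would more likely be a shell script than Python):

    # Hypothetical postinstall step: give a freshly installed database a
    # usable default setup so the user isn't left to wire it up by hand.
    import os
    import subprocess

    DB_DIR = "/var/mydb/data"      # made-up data directory
    DEFAULT_USER = "demo"          # throwaway login, like Oracle's scott/tiger
    DEFAULT_PASS = "demo"

    def postinstall():
        # Initialize the data directory only on first install.
        if not os.path.isdir(DB_DIR):
            subprocess.check_call(["/usr/mydb/bin/db_init", "--datadir", DB_DIR])
        # Create a default login so the database is usable immediately.
        subprocess.check_call(["/usr/mydb/bin/db_admin", "create-user",
                               DEFAULT_USER, "--password", DEFAULT_PASS])
        # Enable the service so it comes up on boot (SMF on Solaris).
        subprocess.check_call(["/usr/sbin/svcadm", "enable", "mydb"])

    if __name__ == "__main__":
        postinstall()

The point is just that the package carries enough smarts to leave the
software in a usable state, rather than leaving that to the installer or
the user.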
> We are planning the ITU and device verification service for Caiman. We
> currently have ITU available, early on in the install process, but it is
> a manual process for the users to choose and specify the locations of
> the drivers. We don't have device/driver verification, which is a key
> component for the users to understand what is needed to get their
> systems up and running and installable. We plan to use well known
> repositories for ITU's and other software so that users can get what
> they need over the net.

This is good. Solaris is quite far behind here; it doesn't do much today
with regard to automated installation of packages over the net. It does
some minor things, but it doesn't do a full install by pulling the
packages over the net, and I'm sure you'll be changing that.

> We agree that the patch/packaging stuff is critical. But, with limited
> cycles we have chosen to tackle the installation problems first. The
> Caiman installer is being architected and designed in such a way as to
> be package/patch agnostic. We realize that we shouldn't have the
> installer intimately aware of the software format if at all possible.

Yes. All I was trying to point out is that it needs to be planned into the
architecture so that it can expand in the future. I would expect to mix
and match packages from multiple sources, and to have multiple
repositories to pull from.

> You are right, it is all software, but currently bundled in very
> different ways. We can't just make it all pkgs, or patches, or tar balls
> without some thought as to the ramifications. For example, our customers
> want patches because they can 1) control what they install on their
> systems 2) patch a specific binary for a specific bug 3) manage the
> amount of change on their systems at any point in time, and other
> reasons I am sure. I am not saying we shouldn't consider just using some
> sort of packaging mechanism for all software, but many customers like
> the fine-grained control patches give them. Many customers who must
> certify their apps after applying updates want to have less uncertainty
> in the process of applying updates to software.

This is all good, and it is how things are done today, but it doesn't seem
to scale very well. Ultimately it fragments the packages from the install,
from the updates, and from the patches, and I think that could hurt, as is
being witnessed. I don't know that there is a better way, just something
to consider.

> There are a lot of choices for us, and simply put, the time it will take
> to decide the best business as well as technical strategy for this
> problem is too far out for us to hold the progress on Caiman.

Understood.

> > Push vs. Pull
>
> If I get the intent of this discussion, I believe you are discussing
> push vs pull of software for a customer's system. In general I believe
> in 'pull'. In general, in Caiman, we expect 'pull'. Certainly in the
> interactive installer environment. For example, if we find that drivers
> are missing and can be found in a repository somewhere we will ask the
> user for confirmation before downloading the driver. With jumpstart we
> will do this automatically (I believe that is the current plan), so this
> will be push. Those using jumpstart will have to be aware that pushing
> drivers is a possibility.

Right, but I was only thinking about where the program is running, and
how the data is moving. Whether a server pushes a package out to another
server, or a server pulls it onto its own system from somewhere else, are
two different concepts. Let's say you have a farm of 60 servers to
maintain. The customer does their testing, and then wants to get those
changes onto all 60 servers. It would be easier to push that out in a
batch than it would be to pull it down on each server individually.
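To make the push case concrete, here is roughly what I have in mind from
the admin's side (illustrative only; the host names, package path, and the
use of scp/ssh/pkgadd are just one way it could be done):

    # Illustrative only: push a tested package out to a farm of servers in
    # one batch instead of pulling it down on each box by hand.
    import subprocess

    HOSTS = ["web%02d" % n for n in range(1, 61)]   # the 60 servers
    PACKAGE = "/export/staging/SUNWfoo.pkg"         # package that passed testing

    def push(host):
        # Copy the package over, then install it non-interactively.
        subprocess.check_call(["scp", PACKAGE, "%s:/var/tmp/" % host])
        subprocess.check_call(["ssh", host,
                               "pkgadd -n -d /var/tmp/SUNWfoo.pkg all"])

    for host in HOSTS:
        try:
            push(host)
            print("%s: updated" % host)
        except subprocess.CalledProcessError as err:
            print("%s: failed (%s)" % (host, err))

The pull model turns that inside out: each of the 60 servers has to go
fetch the change itself.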
> Internally in a customer's WAN or LAN, we can provide tools that will
> enable them to push software out to multiple machines if they choose to
> do this. We are planning on providing replication and migration
> utilities that will enable just what you are referring to regarding
> sysadmins managing their systems.

Yes, those would provide exactly what I'm talking about here.

> You are right, we don't have anything about security in our current
> docs. Certainly, the software repository service will have to understand
> something about security. We just haven't gotten that far.

I knew that, but wanted to comment since I didn't see anything. First and
foremost it needs to be secure, so that packages, patches, updates, etc.
can all be installed securely. If not in the first release, it should be
considered at some point to have properly signed packages so that it is
known they are genuine. Having a security server in place that can be
queried is something that will be expected in the future.

> I would like more details about what you mean by Security Network. Can
> you describe the requirements and functionality you expect this to
> provide? Do you envision this as a separate service in Caiman?

I was thinking about a way you could get a package/patch/update from any
of multiple servers, but still be able to ensure the package is the right
one and hasn't been tampered with or hacked. No matter where you get the
package from, having a way to query a security server so you know the
package is good would help tremendously. Solaris, as it is today, has a
way to check whether the binaries are proper for the release being run,
so the technology is there on the system, it just isn't implemented very
well.
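Something along these lines is what I have in mind, just as a sketch (the
server URL and digest layout are invented, and a real service would
presumably use signed metadata rather than a bare checksum):

    # Sketch: verify a downloaded package against a digest published by a
    # trusted security server, no matter which repository the bits came from.
    import hashlib
    import urllib.request

    SECURITY_SERVER = "https://security.example.com"   # hypothetical trusted host

    def trusted_digest(pkg_name):
        # Ask the trusted server what the package's checksum should be.
        url = "%s/digests/%s.sha256" % (SECURITY_SERVER, pkg_name)
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("ascii").strip()

    def local_digest(path):
        # Checksum the bits we actually downloaded.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(pkg_name, path):
        return trusted_digest(pkg_name) == local_digest(path)

    if __name__ == "__main__":
        if verify("SUNWfoo", "/var/tmp/SUNWfoo.pkg"):
            print("package checks out")
        else:
            print("package has been altered, do not install")

The same check would work whether the package came from Sun, a mirror, or
a CD somebody handed you.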
> > Dynamic Repository
> > ------------------
> > For me, sources are sources, whether a package comes from xyz server, my
> > local hard disk, a CD/DVD, nfs mount, or *preferably* over the network,
> > all of these packages should be available as a whole.
>
> Not sure what this statement means. Which packages should be available
> as a whole?

A whole of packages, if that makes sense. Let's say we have 2000 packages
to install, and 1000 are local, 500 are on another server, and 500 on yet
another. Knowing which packages are where, so the installer can determine
where to get each one and/or which packages a given repository holds,
could help to automate things. I'm not exactly sure how it would be
implemented to handle multiple servers, but again, to me a package is a
package is a package.

> Our plan is to have repositories on opensolaris for software that users
> may want to install on their systems. Certainly, Sun is a customer of
> opensolaris and this software will be available to anyone who wants to
> download it. The difference with Caiman is that we will enable the users
> to utilize this feature during install/upgrade or even after so that
> they have easy access to the repositories we consider trusted.

This sounds good, thanks for the clarification.

> > Simplification
> > --------------
> > Some of the flow diagrams are complex to look at, 1.1.1 as an example
> > could be simplified and/or arranged to remove some of that complexity,
> > at least to me.
>
> I can see your point about 1.1.1. But, honestly, this was as simple as
> we thought we could show the dependency chain. Our first pass at this
> was all spaghetti :-). But, I will take a look at this and see if there
> is a way to simplify it. If you have suggestions on how this might be
> done, please do let us know.

I have a couple of ideas, but I'm not looking at the doc right now.

> We did group some of the services into the phases you mention later on,
> in section 2.1, in an effort to reduce the complexity.

I'll dig in and look at that later. I realize this is a work in progress,
so please understand I'm not criticizing what you've done so much as
suggesting some ideas.

> Let me see what I can do with these. I agree, they are complex and
> difficult to follow. To a certain extent the detail is for us, so we
> clearly understand the flow and dependencies when working on the
> architecture and design.

I think it is for management also; I don't know how Sun management looks
at the diagrams, but my experience is that managers are not good at
comprehending complexity, or tend to think it's going to take years to
complete. That's not always the case, but it's how it typically plays out.

> In some cases the GUI should be shown on top, but in some cases it
> shouldn't be shown at all (like the post install phase) since it isn't a
> player in the function being performed.

Good point!

> I will work on the diagrams. I am doing more flow analysis anyway as
> part of the design work for the Orchestrator so hopefully I can simplify
> these as I dig down deeper into this.

I would like to say they look pretty good. Simplified just a tad, they're
good to go.

> the /var/sadm/install/contents file is one of the reasons it takes time
> to do installations. And, it is a single point of failure for sure. As
> to the solution, there have been a few bandied about. One I thought had
> some promise was doors as files:
> http://approach.sfbay/wiki/index.php/Doors-As-Files

I haven't looked at this; I will.

> We have just started the discussion about how the services in Caiman
> would be instantiated. That is, should they be separate processes, or a
> multi-threaded process, or a combination of some that are started as
> separate processes and some that are part of a bigger process which is
> multi-threaded to handle the various functionality. Part of what I am
> doing now is trying to determine the orchestrator's design, how it would
> communicate with the other services, which ones could run standalone or
> which are driven by other services than the orchestrator.

OK, I don't want to get the cart before the horse, so to speak. It sounds
like you're thinking about some of the same ideas. I was just trying to
work out how the different services would talk to each other: whether
they are daemons that get started and talk through IPC to complete tasks,
or whether they are called, run until they finish, and then have all
their resources freed.
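Just to show the two models I was thinking of (a toy sketch, not a
proposal for the Orchestrator; the service path and its little command
protocol are invented for the example):

    # Toy illustration of the two instantiation models.
    import subprocess

    SERVICE = "/usr/lib/install/target-discovery"   # invented service binary

    # Model 1: a long-lived daemon the orchestrator talks to over IPC
    # (here simply its stdin/stdout pipes), sending requests as needed.
    def as_daemon():
        svc = subprocess.Popen([SERVICE, "--daemon"], stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE, text=True)
        svc.stdin.write("list-disks\n")
        svc.stdin.flush()
        print(svc.stdout.readline())    # service stays up for more requests
        svc.stdin.write("quit\n")
        svc.stdin.flush()
        svc.wait()

    # Model 2: run to completion; it is started, does one job, exits, and
    # all of its resources are freed when it returns.
    def run_once():
        result = subprocess.run([SERVICE, "--once"], capture_output=True,
                                text=True)
        print(result.stdout)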
> Thanks for all your comments. I appreciate your review of the
> architecture document.

I might think of other stuff when I look at it again, but that was just
off the top of my head. It is a decent document, IMO, so I hope you don't
take any of the comments personally. I don't think you are, I just want
to be careful when making comments on someone's design architecture.

-- 
Alan DuBoff - Solaris x86 Engineering - IHV/OEM Group
Advocate of Insourcing at Sun, hire people that care about our company!
