Hi Ginnie, Thanks for the clarifications. I have some follow-up below.
- Keith

On 02/24/10 05:02 PM, Virginia Wray wrote:
> Hi Keith -
>
> Thanks for the review. My response is inline.
> ginnie
>
> On 02/12/10 16:10, Keith Mitchell wrote:
>>> http://hub.opensolaris.org/bin/view/Project+caiman/Logging
>>
>> Hi Ginnie,
>>
>> Having had to use the Python logging framework extensively in order
>> to debug the text installer, I have some comments and questions about
>> this design doc. Overall it looks like a good start, but could use
>> some clarification.
>>
>> Section 3, 4th bullet says the framework must "enable components of
>> the installer to log data without the overhead of output" - can you
>> clarify this statement? I'm not quite understanding it.
> This is a little obtuse, you're right. What I meant by this was the
> logging framework should hide the complexity of logging to different
> formats or logging different levels of output to different locations
> from the components. Does that clarify?

That makes more sense. The logging service needs to present a
straightforward API to consumers, enabling them to send messages to
one location instead of manually managing print statements, writing to
files, sending over the network, etc.

>> Section 4: "Log Levels" seem to be a specific type of "Event Filter"
>> - it may be worth calling that out or bearing that in mind during
>> design and implementation.
> Help me understand what you are thinking.... My understanding of an
> event filter is that it receives another object's events before they
> reach the intended target. My thought is that the logger determines
> which level of output goes to which location, so it would be the
> event handler.

I think this might relate to my confusion on terminology - see my
response to the final comment.

>> Section 5:
>> "Remote logging is enabled by default." This seems like an odd
>> requirement to me. In particular, it seems like, except for AI, this
>> is the opposite of what's desired.
> I'm going to remove that. That does seem unreasonable. The application
> can determine whether or not to enable remote logging.
>
>> The 4th bullet seems to imply that one can take an existing log and
>> "re-log" it to a different stream. Is that the intent, or would it be
>> more accurate to say that every logger will receive every logging
>> statement (though not all statements will reach each logger's log,
>> based on filters)?
> Just to minimize confusion, the application will only have one
> instance of a logger. I wasn't sure what you meant by "every logger."
> The logger will receive all logging statements sent by the
> application, but different outputs may receive different statements.

Another terminology issue - see below.
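Just to illustrate what I meant by "every logger" (in terms of the
standard Python module, which is where my mental model comes from):
something like the untested sketch below, where a single logger
receives every statement and each attached output applies its own
level before anything is written. The names and the path are made up.

    import logging

    # One logger instance for the whole application ("the logger").
    logger = logging.getLogger("install")
    logger.setLevel(logging.DEBUG)      # let everything reach the logger

    # Output 1: the console only sees warnings and above.
    console = logging.StreamHandler()
    console.setLevel(logging.WARNING)

    # Output 2: a local file gets everything, including debug chatter.
    logfile = logging.FileHandler("/tmp/install_log")
    logfile.setLevel(logging.DEBUG)
    logfile.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

    logger.addHandler(console)
    logger.addHandler(logfile)

    logger.debug("goes to the file only")
    logger.warning("goes to both the file and the console")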
>> 6th bullet: The required inputs here seem vague. What comprises the
>> "logging content"? What comprises "other data"?
> We're talking about what will be required in a log. For instance time
> stamp, date, etc. It is a bit open ended here though, because
> different applications might want to include different output or
> formatting.

If the input isn't required for all applications, is it required?
Mostly semantics I suppose, but anything that's required should be
listed explicitly I think, and other, application-specific/optional
data could be listed separately.

>> 8th bullet: Does this mean that the logging levels can be changed
>> after beginning an installation? Or can the logging level only be
>> changed at startup time? If it's the latter, it seems that the
>> actual requirement is that each application using the logger must
>> provide a parameter (via manifest or CLI) to set the log level at
>> startup.
> Thanks. It is the latter. I'll clarify that.
>
>> 9th bullet: What is the benefit of exposing this level of
>> exclusion/inclusion to the user? My understanding of CUD made it seem
>> like the individual components might be too specific or low-level for
>> the user to worry about, in general.
> This was actually a misunderstanding on my part. I'm removing that
> bullet.
>
>> Section 6:
>>
>> General: Should the differences in the C and Python APIs be called
>> out here?
> Yes. I'll clarify and provide more detail for these interfaces.
>
>> 6.1:
>> I understand the general intent is to mimic or build upon the python
>> standard logging module. In that case, the more appropriate approach
>> would be to have different sub-classes for different types of logging
>> (such as FileLogger, HTTPLogger, etc.) instead of passing in the type
>> of logging to be done.
> As we flesh out the design, I'll most likely be modifying these.
>
>> There's an IP address parameter, but not a "target" parameter (such
>> as a file descriptor).
> Yes. I think that's a good point. I'll look at this and clarify it
> further.
>
>> 6.2, 6.3: Can you explain the benefit/use of these functions? I don't
>> quite understand their purpose.
> Register_Component is used with the logging facility to register the
> component of an application with the logger.
> Find_Component would be used to locate the components of an
> application to modify the logging level of a particular component
> from the command line.
>
> I think we're going to revisit these function calls, so I'll update
> them in the revision.

Ok. I'll take a look at the updated revision when available.

>> 6.4:
>>
>> "Output: Success or failure" - this should only be the case for C
>> code. Python code should raise exceptions.
> thanks.
>> Actually, I'm curious what the benefit of providing a failure case
>> would be? Would there be a need to stop the install process entirely
>> if a logger begins to fail?
> Good point. I'll think about that for the revision.
>
>> Python's logging module also provides an additional logging function:
>> "logging.log(level, message)", where level is an integer. Higher
>> integers correspond with higher logging levels (for example,
>> logging.info is the equivalent of logging.log(20, <message>)). This
>> allows flexibility in that intermediate log levels can be defined - a
>> case I've seen, for example, is to supply an "xdebug" level (lower
>> than debug) that gets invoked in code that is run frequently and
>> would normally clog the log with repetitive messages. Is there any
>> desire, or need, to supply such functionality to our framework?
> Let me think about that. It depends on the needs of the applications
> that will be consumers of the logging framework.

Ok. I think either generic log levels, or at least an "xdebug"-type
level, would be beneficial to have available.
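For the record, the sort of thing I was picturing is sketched below
(the level number and the "XDEBUG" name are arbitrary choices for the
example, not anything the doc defines):

    import logging

    # Hypothetical extra-verbose level, one notch below DEBUG (10).
    XDEBUG = 5
    logging.addLevelName(XDEBUG, "XDEBUG")

    logger = logging.getLogger("install")
    logger.addHandler(logging.StreamHandler())

    # Normal runs: XDEBUG messages are filtered out...
    logger.setLevel(logging.DEBUG)
    logger.log(XDEBUG, "per-iteration chatter")   # dropped

    # ...but they can be switched on when chasing a bug.
    logger.setLevel(XDEBUG)
    logger.log(XDEBUG, "per-iteration chatter")   # now emitted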
>> I mentioned to Sarah when reviewing the CUD document that the concept
>> of a "progress reporter" is nearly a special instance of a logger.
>> Have you considered expanding the interface here from
>> "logging.<level>(<message>)" to "logging.<level>(<message>,
>> <completion %>, ...)"? (Anything other than the message could be
>> defined as optional parameters.) Such an interface would allow for a
>> special logger class that simply monitors incoming statements for the
>> completion %, and updates the GUI/text-install progress bar, for
>> example.
> Yes. I'm looking at that. That's a good suggestion.
>
>> Section 6 (again), general:
>>
>> I didn't see any mention of "Event Filters" in the API. Are there
>> plans to provide filters initially, beyond the "log level" filter?
> What are you envisioning? That would help me respond.

I wasn't envisioning anything in particular at this point - just trying
to get a grasp of the concept of "Event Filter" (see below, on
terminology).

>> Section 7: If the network is temporarily unavailable, will messages
>> get lost, or cached until they can be delivered?
> There will still be log files created locally.

When the network is available again, will the network destination
automatically have access to the messages logged during the downtime,
or will the user have to manually access those files? If the network
destination picks up the missed messages automatically, can you
describe briefly the flow here?

>> Final thoughts:
>>
>> I realize (after I typed up all these comments) that, in my mind,
>> I've blurred together the concept of a "logging framework" (single
>> instance that manages all logging) with a "logging instance" (the
>> concept of a single output file/stream with associated filter(s)
>> based on message level), wherein each message logged to the framework
>> is passed to each logging instance for "shipping and handling"
>> (filtering and sending to the destination). The API isn't fully clear
>> on which functions apply to the framework vs. a logging instance, and
>> so on. Can this be clarified more explicitly?
> Yes, I'll clarify that.... and just to clarify here, the logging
> framework comprises the logging instance as well as all of the
> handling for the application.

So as mentioned above in a few places, I think I'm still confused about
the terminology and concepts in play here. Let me see if I can update
based on your responses, and I'll let you correct my mistakes. If I'm
*way* off, would you mind contacting me privately and scheduling a
quick discussion so I can avoid confusing myself further?

logging instance - A singleton instance that manages all incoming
requests to log data.

event filter - The logging instance passes log data to one or more
event filters. Each event filter examines the data it gets, determines
whether it meets certain criteria (e.g., logged at the appropriate
level), and sends it to a specific destination (file, remote network
location, console output).

logging framework - The logging instance, plus any additional
components required for interacting with the application (setup,
cleanup, etc.).
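To make my (possibly wrong) mental model concrete, here's how I would
map those terms onto the standard Python module - again just a sketch,
and the component names and file path are invented for the example:

    import logging

    # "logging instance": the single Logger object that receives every
    # request to log data.
    log = logging.getLogger("install")
    log.setLevel(logging.DEBUG)

    # "event filter": roughly a handler's level plus any Filter objects
    # attached to it; each handler ships matching records to one destination.
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)               # level-based filtering only

    debug_file = logging.FileHandler("/tmp/install_log")
    debug_file.setLevel(logging.DEBUG)
    # Only pass records from the (hypothetical) "install.transfer" component;
    # fancier criteria would subclass logging.Filter and override filter().
    debug_file.addFilter(logging.Filter("install.transfer"))

    log.addHandler(console)
    log.addHandler(debug_file)

    # Components log through child loggers; records propagate up to "install".
    logging.getLogger("install.transfer").debug("lands in the debug file")
    logging.getLogger("install.ti").info("console only; filtered out of file")

    # "logging framework": all of the above, plus the setup/cleanup the
    # application does around it (e.g. logging.shutdown() at exit).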
>>
>> Thanks,
>> Keith
>
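P.S. On the progress-reporter idea above, this is roughly what I was
picturing - a sketch only; the "progress" field name and the handler
are made up, and a real one would update the GUI/text progress bar
instead of printing:

    import logging

    class ProgressHandler(logging.Handler):
        """Watch records for an optional completion percentage."""
        def emit(self, record):
            percent = getattr(record, "progress", None)
            if percent is not None:
                # A real handler would update the progress bar here.
                print("progress: %d%% - %s" % (percent, record.getMessage()))

    logger = logging.getLogger("install")
    logger.setLevel(logging.INFO)
    logger.addHandler(ProgressHandler())

    # Ordinary messages are untouched; ones carrying a completion %
    # also drive the progress display.
    logger.info("beginning transfer")
    logger.info("transfer halfway done", extra={"progress": 50})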