> http://hub.opensolaris.org/bin/view/Project+caiman/Logging

Hi Ginnie,

Having had to use the Python logging framework extensively in order to 
debug the text installer, I have some comments and questions about this 
design doc. Overall it looks like a good start, but could use some 
clarification.

Section 3, 4th bullet says the framework must "enable components of the 
installer to log data without the overhead of output" - can you clarify 
this statement? I'm not quite understanding it.

Section 4: "Log Levels" seem to be a specific type of "Event Filter" - 
it may be worth calling that out or bearing that in mind during design 
and implementation.

Section 5:
"Remote logging is enabled by default." This seems like an odd 
requirement to me. In particular, it seems like, except for AI, this is 
the opposite of what's desired.

The 4th bullet seems to imply that one can take an existing log and 
"re-log" it to a different stream. Is that the intent, or would it be 
more accurate to say that every logger will receive every logging 
statement (though not all statements will reach each logger's log, based 
on filters)?

6th bullet: The required inputs here seem vague. What comprises the 
"logging content"? What comprises "other data"?

8th bullet: Does this mean that the logging levels can be changed after 
beginning an installation? Or can the logging level only be changed at 
startup time? If it's the latter, it seems that the actual requirement 
is that each application using the logger must provide a parameter (via 
manifest or CLI) to set the log level at startup.

9th bullet: What is the benefit of exposing this level of 
exclusion/inclusion to the user? My understanding of CUD made it seem 
like the individual components might be too specific or low-level for 
the user to worry about, in general.

Section 6:

General: Should the differences in the C and Python APIs be called out here?

6.1:
I understand the general intent is to mimic or build upon the Python 
standard logging module. In that case, the more appropriate approach 
would be to have different subclasses for different types of logging 
(such as the standard module's FileHandler, HTTPHandler, etc.) instead 
of passing in the type of logging to be done.
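To illustrate what I mean, here is a minimal sketch using the standard module (the logger name and file path are just placeholders): the destination is chosen by which handler class you attach, not by a "type of logging" parameter.

```python
import logging
import logging.handlers

# The subclass-based approach from Python's standard logging module:
# each output destination is a Handler subclass attached to a logger.
logger = logging.getLogger("installer")
logger.setLevel(logging.DEBUG)

# File output: FileHandler takes a path (a "target" parameter).
file_handler = logging.FileHandler("/tmp/install_log")  # path is illustrative
file_handler.setLevel(logging.DEBUG)
logger.addHandler(file_handler)

# Remote output would be another handler class, e.g.:
# http_handler = logging.handlers.HTTPHandler("example.com:80", "/log",
#                                             method="POST")
# logger.addHandler(http_handler)

logger.info("text installer started")
```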

There's an IP address parameter, but not a "target" parameter (such as a 
file descriptor).

6.2, 6.3: Can you explain the benefit/use of these functions? I don't 
quite understand their purpose.

6.4:

"Output: Success or failure" - this should only be the case for the C 
code; the Python code should raise exceptions instead.
Also, I'm curious what the benefit of providing a failure case would 
be. Would there be a need to stop the install process entirely if a 
logger begins to fail?

Python's logging module also provides an additional logging function: 
"logging.log(level, message)", where level is an integer. Higher 
integers correspond to higher severity (for example, logging.info is 
equivalent to logging.log(20, <message>)). This allows flexibility in 
that intermediate log levels can be defined - one case I've seen, for 
example, is an "xdebug" level (lower than debug) that gets invoked in 
code that runs frequently and would normally clog the log with 
repetitive messages. Is there any desire, or need, to supply such 
functionality in our framework?
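For reference, the xdebug idea sketches out like this in the standard module (the level name, numeric value, and logger name are my own examples, not from the design doc):

```python
import logging

# A hypothetical intermediate level: XDEBUG sits below DEBUG (10),
# so it is filtered out unless explicitly enabled.
XDEBUG = 5
logging.addLevelName(XDEBUG, "XDEBUG")

logger = logging.getLogger("installer.disk")
logger.setLevel(logging.DEBUG)                 # XDEBUG messages are dropped...
logger.log(XDEBUG, "polling partition table")  # ...like this one
logger.setLevel(XDEBUG)                        # ...until the level is lowered
logger.log(XDEBUG, "polling partition table")  # now this one is emitted
```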

I mentioned to Sarah when reviewing the CUD document that the concept of 
a "progress reporter" is nearly a special instance of a logger. Have you 
considered expanding the interface here from 
"logging.<level>(<message>)" to "logging.<level>(<message>, <completion 
%>, ...)"? (Anything other than the message could be defined as optional 
parameters). Such an interface would allow for a special logger class 
that simply monitors incoming statements for the completion %, and 
updates the GUI/text-install progress bar, for example.
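As a sketch of that special logger class (all names here are hypothetical; the stdlib spells the extra data via the "extra" keyword rather than a positional completion argument):

```python
import logging

class ProgressHandler(logging.Handler):
    """Watches records for an optional completion attribute and would
    update a progress bar; here it just remembers the last value."""
    def __init__(self):
        super().__init__()
        self.percent = 0

    def emit(self, record):
        completion = getattr(record, "completion", None)
        if completion is not None:
            self.percent = completion  # a real handler would redraw the bar

logger = logging.getLogger("installer.progress")
progress = ProgressHandler()
logger.addHandler(progress)

# Any component can then report progress alongside its message:
logger.warning("transferring packages", extra={"completion": 40})
```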

Section 6 (again), general:

I didn't see any mention of "Event Filters" in the API. Are there plans 
to provide filters initially, beyond the "log level" filter?
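Note that the standard module already supports this: any object with a filter(record) method can be attached to a logger or handler. A sketch of a component-exclusion filter (the component names are invented for illustration):

```python
import logging

class ExcludeComponent(logging.Filter):
    """Drop records originating from one named installer component."""
    def __init__(self, component):
        super().__init__()
        self.component = component

    def filter(self, record):
        # Returning False drops the record before it reaches the output.
        return not record.name.startswith(self.component)

handler = logging.StreamHandler()
handler.addFilter(ExcludeComponent("installer.disk"))
```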

Section 7: If the network is temporarily unavailable, will messages get 
lost, or cached until they can be delivered?
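One possible answer from the stdlib is MemoryHandler, which buffers records and forwards them to a target handler later; the capacity, flush level, and file target below are illustrative stand-ins for a real network handler:

```python
import logging
import logging.handlers

target = logging.FileHandler("/tmp/net_log")  # stand-in for a network handler
buffered = logging.handlers.MemoryHandler(
    capacity=1000,             # flush when 1000 records accumulate...
    flushLevel=logging.ERROR,  # ...or as soon as an ERROR arrives
    target=target,
)
logger = logging.getLogger("installer.remote")
logger.setLevel(logging.INFO)
logger.addHandler(buffered)

logger.info("cached until flushed")  # held in the in-memory buffer
buffered.flush()                     # delivered once the target is reachable
```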

Final thoughts:

I realize (after I typed up all these comments) that, in my mind, I've 
blurred together two concepts: a "logging framework" (a single instance 
that manages all logging) and a "logging instance" (a single output 
file/stream with associated filter(s) based on message level). In that 
mental model, each message logged to the framework is passed to each 
logging instance for "shipping and handling" (filtering and sending to 
the destination). The API isn't fully clear on which functions apply to 
the framework vs. a logging instance, and so on. Can this be clarified 
more explicitly?

Thanks,
Keith
