Dave Miner wrote:
> Karen Tung wrote:
>> Dave Miner wrote:
>>> Karen Tung wrote:
>>>> This code review fixed 2 things:
>>>>
>>>> 1) After the removal of the previous slimcd_post_processing.tar and the
>>>> reorganization of the code, the mkrepo script can no longer read the
>>>> generic_live.xml file because it can't find it in "./".  The full path
>>>> to the generic_live.xml file needs to be passed into mkrepo.
>>>>
>>>> 2) When some of the gnome postinstall scripts fail, all the output is
>>>> redirected to /dev/null, which makes debugging the failures difficult.
>>>> Change it to redirect output from the postinstall scripts to a file.
>>>> If a script fails, the content of that file will be displayed.
>>>>
>>> Generally I'd suggest the temporary output redirection go to 
>>> something like /tmp/postrun.output.$$ so you're not dependent on 
>>> write access to the current directory.  Also, sending the error 
>>> output you're cat'ing at 19 in exec_postrun to stderr would allow 
>>> for whatever's calling it to more easily discern it as error output.
>>>
>>> Dave
>> Hi Dave,
>>
>> Thanks for the code review.  In general, I agree that the output should
>> go to /tmp, and I can still do that here.  But we really don't need to
>> worry about write access in this case, since the directory in which the
>> scripts are run is created by the Distro Constructor, so we are
>> guaranteed to have write access.
>>
>
> You're assuming here that nothing changes around you to make that 
> untrue.  That's the sort of thing that makes code fragile, so I'd 
> rather you used /tmp.
>
That would be the /tmp of the proto area, not the /tmp of the build
machine.  Since the Distro Constructor created the whole proto area, I
figured there's really no reason why it wouldn't have write access.
But I can change it to /tmp.

>> The output I am "cat"ing is not error output.  Those are the outputs
>> from running the post_install scripts.  Regardless of whether those
>> scripts succeed or fail, I found that they produce tons of output.
>> When a script succeeds, I don't really see a point in displaying its
>> output.  When a script fails, I thought displaying the output would
>> help people debug the failure.
>>
>
> The fact that the script failed indicates it's error output as far as 
> I'm concerned.
>
> Dave
OK, so you mean that I should just redirect stdout to /dev/null and leave
stderr alone, instead of piping it all to a file and then displaying the
file?  Like this?

....
while [ $num -le $num_scripts ] ; do

        exec_name=`echo *.$num`
        echo "Executing $exec_name"
        # Discard stdout; stderr from the script still reaches the caller.
        /bin/sh $exec_name > /dev/null
        if [ "$?" != "0" ] ; then
                echo "$exec_name failed"
        fi
        num=`/bin/expr $num + 1`
done
.....

The only concern I have about doing it this way is that some of the 
stdout might
contain useful info about why the script failed.
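
For comparison, here is a minimal sketch of the temp-file approach you
suggested: capture both stdout and stderr in a per-run file such as
/tmp/postrun.output.$$, and on failure cat that file to stderr so the
caller can tell it apart from normal output.  The rm at the end is just
my assumption for cleanup, not part of the current script.

....
while [ $num -le $num_scripts ] ; do

        exec_name=`echo *.$num`
        echo "Executing $exec_name"
        # Collect stdout and stderr in a per-run temp file under /tmp.
        outfile=/tmp/postrun.output.$$
        /bin/sh $exec_name > $outfile 2>&1
        if [ "$?" != "0" ] ; then
                # On failure, report and replay the captured output on stderr.
                echo "$exec_name failed" >&2
                cat $outfile >&2
        fi
        rm -f $outfile
        num=`/bin/expr $num + 1`
done
.....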

--Karen


