Linux-Development-System Digest #680, Volume #8 Mon, 30 Apr 01 10:13:13 EDT
Contents:
Re: scripting questions (Kasper Dupont)
Re: Use of RPATH when creating shared libraries (under Linux) (Michael Kerrisk)
md5sum'ing block devices (Loris Caren)
Re: About the syscall wait4 (Michael Kerrisk)
Re: How to write to file in Linux Kernel (Kasper Dupont)
Re: mount (Kasper Dupont)
Re: Pipe Problem (Kasper Dupont)
Re: what is the means of andl in at&t Asm?? (Kasper Dupont)
Re: md5sum'ing block devices ("Nick Lockyer")
Re: do_irq and do_softirq (Kasper Dupont)
Re: How to write to a file in Linux Kernel (Kasper Dupont)
Re: troubleshooting a kernel mod ("Nick Lockyer")
Re: md5sum'ing block devices (Kasper Dupont)
----------------------------------------------------------------------------
From: Kasper Dupont <[EMAIL PROTECTED]>
Subject: Re: scripting questions
Date: Mon, 30 Apr 2001 10:57:03 +0000
PPP wrote:
>
> I am trying to get proficient in scripting. I have a few questions.
>
> $SETSID $PPPD pty "$PPPOE_CMD" \
> $PPP_STD_OPTIONS \
> $DEMAND \
> $PPPD_SYNC &
> echo "$!" > $PPPD_PIDFILE
> fi
> wait
>
> Now when a program returns a value the calling script usually uses $? to
> represent the returned value. What does the term $! represent (why was $?
> not used)?
>
> What does the wait command do? Would not the script wait for $PPPD to return
> anyway? If this is the case then why is wait used?
The & character puts the command in the background,
so the script continues immediately. $! expands to
the process ID of the most recent background command,
which is why it (and not $?, the exit status of the
last foreground command) is written to the pidfile.
The wait command then blocks until the background
command terminates; without it the script would go
on without waiting for $PPPD.
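A minimal sketch of the same construct, with sleep standing in
for $PPPD and a made-up pidfile path standing in for
$PPPD_PIDFILE:

```shell
#!/bin/sh
# Hypothetical pidfile path, a stand-in for $PPPD_PIDFILE.
PIDFILE=/tmp/pppd-demo.pid

sleep 1 &                 # "&" puts the command in the background

# $! expands to the PID of the most recent background command;
# $? would instead give the exit status of the last *foreground*
# command, which is why $! is used for the pidfile.
echo "$!" > "$PIDFILE"

wait                      # block until all background jobs exit
echo "job $(cat "$PIDFILE") finished with status $?"
```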
--
Kasper Dupont
------------------------------
From: [EMAIL PROTECTED] (Michael Kerrisk)
Crossposted-To: comp.unix.programmer
Subject: Re: Use of RPATH when creating shared libraries (under Linux)
Date: Mon, 30 Apr 2001 11:51:09 GMT
Thanks, I was beginning to suspect what you've described (the Linux
linker not honouring embedded RPATHs) - good to have confirmation
though.
Cheers
Michael
On Fri, 27 Apr 2001 11:58:08 +0200, Georg Brein KIS
<[EMAIL PROTECTED]> wrote:
>Hi,
>
>> $ cc -fPIC -g -c -Wall modx1.c modx2.c
>> $ gcc -shared -o d2/libx2.so modx2.o
>> $ gcc -shared -o d1/libx1.so modx1.o -Wl,-rpath,$PWD/d2 -Ld2 -lx2
>>
>> Using ldd and objdump, I can see that the correct dependency and RPATH
>> info
>> is stored in libx1.so:
>>
>> $ ldd d1/libx1.so
>> libx2.so => /home/mtk/sotest/d2/libx2.so (0x40003000)
>> libc.so.6 => /lib/libc.so.6 (0x40009000)
>> /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x80000000)
>>
>> $ objdump --all-headers d1/libx1.so | grep RPATH
>> RPATH /home/mtk/sotest/d2
>>
>> When I try compiling my main program, I get an error,
>> and no executable:
>>
>> $ cc -Wall -o main main.c -Wl,-rpath,$PWD/d1 -L./d1 -lx1
>> /usr/i486-suse-linux/bin/ld: warning: libx2.so, needed by
>> /d1/libx1.so, not found (try using --rpath)
>> ./d1/libx1.so: undefined reference to `x2'
>> collect2: ld returned 1 exit status
>> $ ldd main
>> ldd: ./main: No such file or directory
>
>in the final linker step creating the executable you have to
>use the -rpath-link option to specify all referenced shared
>library directories (including directories containing indirectly
>referenced shared libraries), if they aren't already specified
>in an -rpath or -L option (or aren't situated in one of the
>default library directories, which under Linux include at
>least /lib and /usr/lib).
>
>The Linux linker doesn't honour RPATH-specifications embedded
>in shared objects (contrary to the run time linker), because
>at compile time the referenced libraries might not be at the
>final location (or worse: an incompatible old version of the
>library might still reside there); other systems might show
>a different behaviour (e. g. the Solaris cc/ld combination
>seems to honour embedded RPATHs). Nevertheless the linker
>insists on being able to resolve all symbol references
>(even of symbols referenced in a dependent shared library)
>during a linker run producing a non-shared (i. e. executable)
>file.
>
>This usually isn't an issue, because all referenced shared
>libraries are situated either in a default directory or
>in a single project-specific directory which is specified
>in the -rpath option. In your case there are two
>project-specific library directories (d1 and d2), but only
>d1 is covered by the -rpath option of the final link which
>prevents the linker from finding libx2.so.
>
>> If I include the -noinhibit-exec linker option, I get a
>> correctly functioning executable:
>>
>> $ cc -Wall -o main main.c -Wl,-noinhibit-exec -Wl,-rpath,$PWD/d1
>> -L./d1 -lx1
>> /usr/i486-suse-linux/bin/ld: warning: libx2.so, needed by
>> /d1/libx1.so, not found (try using --rpath)
>> ./d1/libx1.so: undefined reference to `x2'
>> $ ldd main
>> libx1.so => /home/mtk/sotest/d1/libx1.so (0x40015000)
>> libc.so.6 => /lib/libc.so.6 (0x4001b000)
>> libx2.so => /home/mtk/sotest/d2/libx2.so (0x400fd000)
>> /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)
>> $ main
>> Called x1
>> Called x2
>
>If you simply ignore any unresolved references during the final
>linker step (using the -noinhibit-exec option) the resulting
>executable is perfectly acceptable to the run time linker which
>of course honours the embedded RPATH information and is
>therefore able to locate libx2.so.
>
>The correct linker command to prevent any warnings is
>
>$ cc -Wall -o main main.c \
>> -Wl,-rpath,$PWD/d1,-rpath-link,$PWD/d1:$PWD/d2 \
>> -L./d1 -lx1
>
>Alternatively,
>
>$ cc -Wall -o main main.c \
>> -Wl,-rpath,$PWD/d1,-rpath-link,$PWD/d2 \
>> -L./d1 -lx1
>
>works as well, because $PWD/d1 is already specified
>by the -rpath option.
>
>$ cc -Wall -o main main.c \
>> -Wl,-rpath,$PWD/d1:$PWD/d2 \
>> -L./d1 -lx1
>
>works too, but is inferior, because it records a
>direct dependency on libx2.so in main which isn't
>really there. Furthermore, if you plan to replace
>libx1.so by a future version which is no longer
>dependent on libx2.so, you'll have to relink main
>to get rid of the obsolete dependence on libx2.so
>(the run time linker insists on all referenced
>shared objects being present, even if the program
>doesn't reference any symbols in them, directly
>or indirectly).
>
>Hth,
>G. Brein ([EMAIL PROTECTED])
------------------------------
From: Loris Caren <[EMAIL PROTECTED]>
Subject: md5sum'ing block devices
Date: Mon, 30 Apr 2001 12:44:52 +0100
Reply-To: [EMAIL PROTECTED]
Can anybody explain why I am having trouble doing md5sum /dev/cdrom
on most CDROM drives, whereas if I mount the same CD and then do the
md5sum on all files found, it works? I get sector read errors if
I do md5sum /dev/cdrom, but don't if I mount /dev/cdrom /mnt;
find /mnt -type f -exec md5sum {} \; OK, this isn't reading quite
as much of the disk, and certainly in a different order, but it proves
that every byte can be read. NB: I've got one SCSI CDROM unit which
always works fine, giving a sensible signature for the cd image and
no errors.
Surely, all mount does is put a layer of file-system 'understanding'
over the raw stream of bytes available by reading /dev/cdrom?
Is there error correction inherent in CD media being used by the block
device driver?
Is there (any more) error correction inherent in (say) an ISO9660
file system being used by the VFS layer?
The other peculiarity that I notice is that doing md5sum (or dd) of
a floppy or hard disk examines the full media, irrespective of what data is
on it.
However, the /dev/cdrom seems different in ignoring any unused capacity
on the media - doing dd if=/dev/cdrom of=/dev/null is much quicker if the
CD installed is only 50M used capacity, rather than if a full 650M CD
is tried. Can anybody offer an explanation for this apparent
inconsistency?
------------------------------
From: [EMAIL PROTECTED] (Michael Kerrisk)
Subject: Re: About the syscall wait4
Date: Mon, 30 Apr 2001 11:53:15 GMT
On Fri, 27 Apr 2001 12:34:48 +0200, Moslem Belkhiria
<[EMAIL PROTECTED]> wrote:
>Hello,
>
>
>Can someone explain me the significance of the EINTR error when a
>wait4() system call returns -1 ?
The system call was interrupted by the delivery of a handled signal.
You will probably find more info on this in the
comp.unix.programmer FAQ, or in any of a number of good books such as
Stevens' Advanced Programming in the UNIX Environment.
Cheers
Michael
------------------------------
From: Kasper Dupont <[EMAIL PROTECTED]>
Subject: Re: How to write to file in Linux Kernel
Date: Mon, 30 Apr 2001 12:09:33 +0000
wdz wrote:
>
> I am working something in Linux Kernel. I just
> want to write to a file when I was in kernel.
> What should I do?
> Thanks in advance!
>
> Michael
Look through the last two weeks' postings on
the subject.
--
Kasper Dupont
------------------------------
From: Kasper Dupont <[EMAIL PROTECTED]>
Crossposted-To: comp.unix.programmer,linux.redhat.devel,linux.redhat.development
Subject: Re: mount
Date: Mon, 30 Apr 2001 12:13:42 +0000
Darren wrote:
>
> Peter T. Breuer <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > In comp.os.linux.development.system Darren <[EMAIL PROTECTED]> wrote:
> > > Is there a programmatical way to mount/unmount a volume preferably in
> the
> > > same way the mount gnu command does?
> >
> > man 2 mount
> >
> thanks Peter, that was just what I was looking for
You just have to be aware of one potential
problem. The mount program also updates the
file /etc/mtab; this will not happen if you
call the mount syscall directly. You have to
update this file yourself if mount, df and
possibly other programs should be aware of
your mount.
--
Kasper Dupont
------------------------------
From: Kasper Dupont <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps
Subject: Re: Pipe Problem
Date: Mon, 30 Apr 2001 12:27:19 +0000
Frank wrote:
>
[...]
>
> Thanks, I figured that it had something to do with using one pipe. And,
> thanks, I didn't think about the queuing effect that I had with the pipes.
> I guess a way to do it is to make an array of pipes, though, wouldn't that
> be an unoptimized method since I might have so many pipes created at once?
> Or, would the performance outweigh the memory I would be allocating for
> those pipes?
>
> Frank
If you want to put more than 100 commands in a pipeline,
you might have to consider the resource usage; otherwise
it probably is not going to be a problem.
An array of pipes is probably the easiest way to get
something working, but it is not the most efficient
approach.
If you first create an array of pipes and then fork the
processes, each process has to close all the pipes it
does not need. If you have many open pipes the fork()
call could become a problem. In a more efficient
algorithm the parent would only have two pipes open at
the same time: after each fork the parent would close
one pipe and then create a new one to be used by the
next child.
Notice that if you forget to close the write side of
some pipe, the read side will not detect end of file
but will instead block forever waiting for more data.
--
Kasper Dupont
------------------------------
From: Kasper Dupont <[EMAIL PROTECTED]>
Subject: Re: what is the means of andl in at&t Asm??
Date: Mon, 30 Apr 2001 12:41:40 +0000
Robert Redelmeier wrote:
>
> hushui wrote:
> >
> > What is the meaning of GET_CURRENT in a system call?
> > In entry.S there are many definitions about system calls.
> >
> > I want to know about GET_CURRENT which is used as GET_CURRENT(%ebx)
> > the definition of it is the following
> > #define GET_CURRENT(reg) \
> > movl %esp, reg; \
> > andl $-8192, reg;
> >
> > put the value of esp into reg, then andl $-8192, reg;
> > What does this mean???
>
> This #define creates an assembler macro using AT&T syntax:
>
> The first line moves the stack pointer (%esp) into the
> target register, %ebx in this case.
>
> The second line masks (AND) that register with -8192,
> better understood as 0xFFFFE000, to clear the lower
> 13 bits.
>
> Presumably, this macro is used to return the base
> (lowest) address of the current stack. It isn't very
> general code because it assumes that the stack size
> is an invariant 8 KB.
>
> -- Robert
It doesn't have to be very general, it is only
intended to be used inside the Linux kernel on
i386 machines.
For each process on the system there is allocated
one 8KB aligned chunk of memory for the task
structure and stack.
+-------------+------+-------+
| Task struct | Free | Stack |
+-------------+------+-------+
^                            ^
\_ Current                   \_ esp
The macro computes a pointer to the current task
struct from the esp value.
Notice that this is the kernel mode stack; there
is a variable size stack for user mode.
It is the intention that the task struct and the
stack each should fit into one 4KB page, but one
of them can grow beyond this as long as they don't
sum to more than 8KB.
I don't know how much of this memory is actually
used; perhaps you could shrink it to 4KB and still
have a working system.
--
Kasper Dupont
------------------------------
From: "Nick Lockyer" <[EMAIL PROTECTED]>
Subject: Re: md5sum'ing block devices
Date: Mon, 30 Apr 2001 13:33:23 +0100
Most likely because doing md5sum /dev/cdrom will attempt to read the entire
650M image, and the chances are that the entire CD was not written.
You are reading the physical device via /dev/cdrom, which is below the
filesystem layer.
Loris Caren <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Can anybody explain why I am having trouble doing md5sum /dev/cdrom
> on most CDROM drives whereas if I mount the same CD and then do the
> md5sum on all files found, it works?; I get sector read errors if
> I do md5sum /dev/cdrom but don't if I mount /dev/cdrom /mnt;
> find /mnt -type f -exec md5sum {} \; OK, this isn't reading quite
> as much of the disk and certainly in a different order, but it proves that
> every byte can be read. NB: I've got one SCSI CDROM unit which always
works
> fine, giving a sensible signature for the cd image and no errors.
>
> Surely, all mount does is put a layer of file-system 'understanding'
> over the raw stream of bytes available by reading /dev/cdrom?
> Is there error correction inherent in CD media being used by the block
> device driver?
> Is there (any more) error correction inherent in (say) an ISO9660
> file system being used by the VFS layer?
>
> The other peculiarity that I notice is that doing md5sum (or dd) of
> a floppy or hard disk examines the full media, irrespective of what data
is
> on it.
> However, the /dev/cdrom seems different in ignoring any unused capacity
> on the media - doing dd if=/dev/cdrom of=/dev/null is much quicker if the
> CD installed is only 50M used capacity, rather than if a full 650M CD
> is tried. Can anybody offer an explanation for this apparent
> inconsistency?
------------------------------
From: Kasper Dupont <[EMAIL PROTECTED]>
Subject: Re: do_irq and do_softirq
Date: Mon, 30 Apr 2001 12:51:15 +0000
[EMAIL PROTECTED] wrote:
>
> Hello,
>
> in the do_irq() function (\arch\i386\kernel\irq.c) do_softirq() is
> called.
> In do_softirq() (\kernel\softirq.c) interrupts get enabled.
> Isn't this possible: do_softirq() can get interrupted bij an IRQ, so
> do_irq() is called which finally calls do_softirq(), which gets
> interrupted again... and so on...
> If not (probably) then what stops this from happening?
>
> Bart
I don't know if this is the answer to your particular
question, but I hope it will help you.
There are four different ways to block IRQs; all of
them have to allow an IRQ before it can be delivered:
1. The CPU has an interrupt enable bit in the flags
register.
2. The IRQ controller waits for an acknowledge via
port 0x20 before sending the next interrupt.
3. The IRQ controller has a mask of allowed interrupts,
so individual interrupts can be turned on/off.
4. Some device controllers can be started and stopped
through an I/O port.
--
Kasper Dupont
------------------------------
From: Kasper Dupont <[EMAIL PROTECTED]>
Subject: Re: How to write to a file in Linux Kernel
Date: Mon, 30 Apr 2001 12:53:19 +0000
ouyang wrote:
>
> I am using Linux Kernel 2.2.16.
> I want to write to a file when I am in kernel.
> What should I do?
> Your help will be greatly appreciated.
>
> vincent
Look through the last two weeks' postings on
the subject.
--
Kasper Dupont
------------------------------
From: "Nick Lockyer" <[EMAIL PROTECTED]>
Subject: Re: troubleshooting a kernel mod
Date: Mon, 30 Apr 2001 13:42:26 +0100
The O'Reilly book Linux Device Drivers has a comprehensive section on
debugging modules. I believe that this book is available somewhere online.
Wayne <[EMAIL PROTECTED]> wrote in message
news:D8gF6.2064$[EMAIL PROTECTED]...
> Is there anything out there to help with troubleshooting a kernel mod
other
> than a bunch of printk's ?? perhaps something you can trace with?
>
>
------------------------------
From: Kasper Dupont <[EMAIL PROTECTED]>
Subject: Re: md5sum'ing block devices
Date: Mon, 30 Apr 2001 13:11:14 +0000
Nick Lockyer wrote:
>
> Most likely because doing md5sum /dev/cdrom will attempt to read the entire
> 650M image, and the chances are that the entire CD was not written.
>
> You are reading the phyical device with /dev/cdrom which is below the file
> system layer.
>
> Loris Caren <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > Can anybody explain why I am having trouble doing md5sum /dev/cdrom
> > on most CDROM drives whereas if I mount the same CD and then do the
> > md5sum on all files found, it works?; I get sector read errors if
> > I do md5sum /dev/cdrom but don't if I mount /dev/cdrom /mnt;
> > find /mnt -type f -exec md5sum {} \; OK, this isn't reading quite
> > as much of the disk and certainly in a different order, but it proves that
> > every byte can be read. NB: I've got one SCSI CDROM unit which always
> works
> > fine, giving a sensible signature for the cd image and no errors.
> >
> > Surely, all mount does is put a layer of file-system 'understanding'
> > over the raw stream of bytes available by reading /dev/cdrom?
> > Is there error correction inherent in CD media being used by the block
> > device driver?
> > Is there (any more) error correction inherent in (say) an ISO9660
> > file system being used by the VFS layer?
> >
> > The other peculiarity that I notice is that doing md5sum (or dd) of
> > a floppy or hard disk examines the full media, irrespective of what data
> is
> > on it.
> > However, the /dev/cdrom seems different in ignoring any unused capacity
> > on the media - doing dd if=/dev/cdrom of=/dev/null is much quicker if the
> > CD installed is only 50M used capacity, rather than if a full 650M CD
> > is tried. Can anybody offer an explanation for this apparent
> > inconsistency?
When using a CD there are more logical layers than
just the block device and the filesystem.
/dev/cdrom is below the filesystem layer, but above
the layer implementing tracks. Some of the
specifications used span multiple different
layers, making the entire design a mess.
Here are a few hints that will make more things
work as expected:
Don't mix CD/DA and ROM tracks on the same CD.
Don't write more than one track on a CD-ROM; some
offsets used in the ISO filesystem are measured
from the start of the CD, not the start of the
track. On the first track the two are the same,
so it works as expected.
Don't use multisession CDs.
Record disks with DAO (Disk At Once) instead of
TAO (Track At Once). Using TAO can make the end of
tracks unreadable.
From the original description of the problem I
would guess that the disk was recorded with TAO.
Notice that reading every byte of every file on the
CD does not necessarily read every byte of the disk.
The filesystem does not have to use the entire block
device, and when recording a CD it is common practice
to pad with a number of zero bytes at the end of the
disk. Cdrecord has an option to pad with 15 sectors
(30KB). This padding prevents problems reading files
from a TAO disk, because the unreadable part of the
disk is, from my experience, at most 8 sectors.
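One workaround, sketched below: checksum only the sectors the
ISO9660 filesystem actually claims, so the run never touches the
unwritten tail of the disc. This assumes the util-linux isosize
tool is available and that the drive appears as /dev/cdrom; the
helper itself works on any device or image file.

```shell
#!/bin/sh
# md5 of the first $2 2048-byte sectors of device/image $1.
image_md5() {
    dd if="$1" bs=2048 count="$2" 2>/dev/null | md5sum | awk '{print $1}'
}

# Intended usage (assumes /dev/cdrom and the isosize utility):
#   image_md5 /dev/cdrom "$(isosize -d 2048 /dev/cdrom)"
# isosize -d 2048 reports the ISO9660 image size in 2048-byte
# sectors, i.e. how far md5sum should read before stopping.
```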
--
Kasper Dupont
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to the
comp.os.linux.development.system newsgroup.
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Development-System Digest
******************************