Re: [OT] Need a little C programming help

2006-09-12 Thread Andreas Fester
Those errors can occur if you include files within a C block.
For example, the following works well:

#include <stdlib.h>
int main() {}

but this fails with exactly the errors you have seen:

int main() {
#include <stdlib.h>
}

This can happen sometimes by accident, for example when you
are conditionally compiling braces within #ifdef ... #endif
and forget one of the matching end braces, like

#ifdef SOME_CONDITION
{
#endif

...

/* here we should have the same condition with the closing brace */

#include <stdlib.h> /* this will fail */

The compiler sees the inline functions declared in the header file
inside another function, treats them as nested functions (a GCC extension),
and in your case throws an error because the function does not meet
the specific criteria (obviously a local nested function cannot
have extern binding).

If you don't find the mistake easily, try checking the preprocessed
output: cc -E -o swrc.I swrc.c preprocesses the C file into swrc.I,
which you can then check for non-matching braces.
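As a sketch of that brace check (the helper and the demo file are made up for illustration, not from the original mail):

```shell
# Count '{' vs '}' in a file; a mismatch suggests a brace was left
# open before an #include. Running this on the preprocessed swrc.I
# (from cc -E) also catches braces hidden behind macros.
count_braces() {
    printf '%s %s\n' "$(grep -o '{' "$1" | wc -l)" "$(grep -o '}' "$1" | wc -l)"
}

# Made-up demo source with the forgotten closing brace:
cat > demo.c <<'EOF'
#ifdef SOME_CONDITION
{
#endif
/* the matching close brace was forgotten here */
EOF

count_braces demo.c   # prints "1 0": one brace left open
```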

Regards,

Andreas

Mike Reinehr wrote:
[...]
 This evening, I had to make a very minor change to the program, but when I 
 attempted to compile I received the following error output:
 
 [EMAIL PROTECTED]:~/tmp$ cc swrc.c
 In file included from /usr/include/sys/types.h:219,
  from /usr/include/stdlib.h:433,
  from swrc.c:5:
 /usr/include/sys/sysmacros.h: In function `main':
 /usr/include/sys/sysmacros.h:43: error: nested function `gnu_dev_major' 
 declared `extern'
[...]


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: which package to play DVD's ???

2006-09-12 Thread Jaime Ochoa Malagón

Kaffeine is another choice; it works for me...

On 9/8/06, Andrew Sharp [EMAIL PROTECTED] wrote:

On Wed, Sep 06, 2006 at 03:20:38PM +0200, Albert Dengg wrote:
 On Wed, Sep 06, 2006 at 07:59:19AM -0500, helices wrote:

  What do you think?
 well, I personally use xine-ui or mplayer from
 deb http://www.debian-multimedia.org sid main

Well, slightly OT because this isn't about DVDs, but currently mplayer
and xine both have the same bug when playing mpeg4's -- they show only
the upper left corner of the content in the video window.  vlc is not
having this problem, but vlc has other annoyances, one of which is that
it is somewhat cruder.  This issue with mp4's is not amd64 specific,
it is the same on my 32 bit boxen.  This is for etch.  Older versions
of both those programs work fine.  I don't know if they are both using
the same broken library/codec or what, but vlc uses a slightly different
version of one of the libraries.

Cheers,

a







--
To deceive oneself out of love is the most terrible deception;
it is an eternal loss for which there is no compensation
either in time or in eternity.

Kierkegaard

Jaime Ochoa Malagón
Integrated Technology
Tel: (55) 52 54 26 10



Re: Packagehandling

2006-09-12 Thread Goswin von Brederlow
[EMAIL PROTECTED] (Hans-J. Ullrich) writes:

 Dear maintainers and users, 

 there is a little thing I think should be discussed. 

 Whenever essential packages (especially the kernel) are released with a new 
 version, there is no possibility to fall back to the old one if things crash.

 For example, some time ago there was a new kernel, but no new header files. The 
 package was not released. So there was no possibility to build 
 kernel modules. Falling back to the old version was not possible either, as the 
 kernel was no longer in the repository.

Kernel versions always get a new package name, as do revisions with ABI
changes. For those the old one remains on your system. Only revisions
that do not change the ABI replace the old kernel.

As for there being no header files: that must have been a bug. But you
wouldn't have to rebuild any modules. If the name remained the same,
then the ABI did too, and the old modules still work.

 Now there is the problem again: I have got the new kernel-version 
 2.6.17-2-amd64, which breaks my wireless. (module bcm43xx, see bugs).

 I cannot fall back, as the old kernel (version 2.6.17-1-amd64-k8) is no longer 
 in the repository. 

Why did you remove the old kernel on update? Didn't you watch what
aptitude was doing? That is your fault.

 So, please leave the old versions in the repository for some time, to give 
 people the possibility to fall back when there are problems. In my case it is only 
 wireless, but a worse case could be e.g. the network card or something else.

The old package remains in the archive for 3 days if you go and look
into the pool manually. Then there is testing and also
snapshot.debian.net. Is that not enough?

 I see no problem in having two kernels installed. 

And most people do.

 Best regards !

 Hans

MfG
Goswin





Re: performance of AAC-RAID (ICP9087MA)

2006-09-12 Thread Raimund Jacob
Erik Mouw wrote:

Hello! And thanks for your suggestions.

 On Fri, Sep 08, 2006 at 05:08:52PM +0200, Raimund Jacob wrote:
 Checking out a largish CVS module is no fun. The data is retrieved via
 cvs pserver from the file server and written back via NFS into my home
 directory. This process is sometimes pretty quick and sometimes blocks
 in between as if the RAID controller has to think about the requests. I
 know this phenomenon only from a megaraid controller, which we
 eventually canned for a pure Linux software RAID (2-disk mirror). Also,
 compiling in the nfs-mounted home directory is too slow - even on a
 1000Mbit link.

 Try with a different IO scheduler. You probably have the anticipatory
 scheduler, you want to give the cfq scheduler a try.
 
   echo cfq > /sys/block/[device]/queue/scheduler
 
 For NFS, you also want to increase the number of daemons. Put the line
 
   RPCNFSDCOUNT=32
 
 in /etc/default/nfs-kernel-server .
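A small sketch of reading that sysfs file back (the parsing helper and the device name sdX are illustrative additions, not part of the original advice):

```shell
# The active scheduler is the bracketed entry in
# /sys/block/<device>/queue/scheduler. This helper pulls it out of
# such a line; the sample string stands in for the real file.
active_sched() {
    echo "$1" | sed 's/.*\[\([^]]*\)\].*/\1/'
}

line="noop anticipatory deadline [cfq]"   # sample sysfs content
active_sched "$line"                      # prints "cfq"

# Real usage (as root; sdX is an assumed device name):
#   cat /sys/block/sdX/queue/scheduler
#   echo cfq > /sys/block/sdX/queue/scheduler
```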

Thanks for these hints. In the meantime I was also reading up on the
NFS-HOWTO's performance section. Playing around with
rsize/wsize did not turn up much - it seems they don't really matter in my case.

My largish CVS-module checks out (cvs up -dP actually) in about 1s when
I do it locally on the server machine. It also takes about 1s when I
check it out on a remote machine but on a local disk. On the same remote
machine via NFS it takes about 30s. So NFS is actually the problem here,
not the ICP.

Furthermore I observed this: I ran 'vmstat 1'. Checking out locally
shows a 'bo' of about 1MB during the second it takes. During the
checkout via NFS there is a sustained 2 to 3 MB 'bo' on the server. So
my assumption is that lots of fs metadata gets updated during those 30s
(the files don't actually change) and, due to the sync nature of the mount,
everything is committed to disk pretty hard (ext3) - and that is what
I'm waiting for.

Here is what I will try next (when people leave the office):

- Mount the exported fs as data=journal - the NFS-HOWTO says this might
improve things. I hope this works with remount since reboot is not an
option.

- Try an async NFS export - there is a UPS on the server anyway.

- Try the cfq scheduler and an even further increased RPCNFSDCOUNT (I
have 12 already on a UP machine). From my observations I don't expect
much here, but it's worth a try.
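For the async experiment, the relevant change is a single option in /etc/exports (the path and network below are made up, not the poster's actual setup):

```
# /etc/exports -- hypothetical line; 'async' replaces the default 'sync'
/home  192.168.1.0/24(rw,async,no_subtree_check)
```

After editing, `exportfs -ra` re-exports without restarting the NFS server.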

Does anyone think one of those is a bad idea? :)

Raimund

-- 
Die Lösung für effizientes Kundenbeziehungsmanagement.
Jetzt informieren: http://www.universal-messenger.de

Pinuts media+science GmbH http://www.pinuts.de
Dipl.-Inform. Raimund Jacob   [EMAIL PROTECTED]
Krausenstr. 9-10  voice : +49 30 59 00 90 322
10117 Berlin  fax   : +49 30 59 00 90 390
Germany





Re: performance of AAC-RAID (ICP9087MA)

2006-09-12 Thread Erik Mouw
On Tue, Sep 12, 2006 at 11:50:46AM +0200, Raimund Jacob wrote:
 Erik Mouw wrote:
 
 Hello! And thanks for your suggestions.
 
  On Fri, Sep 08, 2006 at 05:08:52PM +0200, Raimund Jacob wrote:
  Checking out a largish CVS module is no fun. The data is retrieved via
  cvs pserver from the file server and written back via NFS into my home
  directory. This process is sometimes pretty quick and sometimes blocks
  in between as if the RAID controller has to think about the requests. I
  know this phenomenon only from a megaraid controller, which we
  eventuelly canned for a pure linux software raid (2 disks mirror). Also,
  compiling in the nfs-mounted home directory is too slow - even on a
  1000Mbit link.
 
  Try with a different IO scheduler. You probably have the anticipatory
  scheduler, you want to give the cfq scheduler a try.
  
    echo cfq > /sys/block/[device]/queue/scheduler
  
  For NFS, you also want to increase the number of daemons. Put the line
  
RPCNFSDCOUNT=32
  
  in /etc/default/nfs-kernel-server .
 
 Thanks for these hints. In the meantime I was also reading up on the
 NFS-HOWTO's performance section. Playing around with
 rsize/wsize did not turn up much - it seems they don't really matter in my case.

In my case it did matter: setting them to 4k (i.e. the CPU page size)
increased throughput.

 My largish CVS-module checks out (cvs up -dP actually) in about 1s when
 I do it locally on the server machine. It also takes about 1s when I
 check it out on a remote machine but on a local disk. On the same remote
 machine via NFS it takes about 30s. So NFS is actually the problem here,
 not the ICP.

One of the main problems with remote CVS is that it uses /tmp on the
server. Make sure that is a fast and large disk as well, or tell CVS to
use another (fast) directory as scratch space.

 Furthermore I observed this: I ran 'vmstat 1'. Checking out locally
 shows a 'bo' of about 1MB during the second it takes. During the
 checkout via NFS there is a sustained 2 to 3 MB 'bo' on the server. So
 my assumption is that lots of fs metadata gets updated during those 30s
 (the files don't actually change) and, due to the sync nature of the mount,
 everything is committed to disk pretty hard (ext3) - and that is what
 I'm waiting for.

Mounting filesystems with -o noatime,nodiratime makes quite a
difference.

If you're using ext3 with lots of files in a single directory, make
sure you're using htree directory indexing. To see if it is enabled:

  dumpe2fs /dev/whatever

Look for the "features" line; if it has "dir_index", it is enabled. If
not, enable it with (this can be done on a mounted filesystem):

  tune2fs -O dir_index /dev/whatever

Now all new directories will be created with a directory index. If you
want to enable it on all directories, unmount the filesystem and run
e2fsck on it:

  e2fsck -f -y -D /dev/whatever

Increasing the journal size can also make a difference, or try putting
the journal on a separate device (quite invasive, make sure you have a
backup). See tune2fs(8).

 Here is what I will try next (when people leave the office):
 
 - Mount the exported fs as data=journal - the NFS-HOWTO says this might
 improve things. I hope this works with remount since reboot is not an
 option.

I don't think it makes a difference; I'd rather say it makes things
worse because it forces all *data* (and not only metadata) through the
journal.

 - Try an async nfs export - There is an UPS on the server anyway.

async indeed makes it faster.

 - Try the cfq scheduler and an even further increased RPCNFSDCOUNT (I
 have 12 already on a UP machine). From my observations I don't expect
 much here, but it's worth a try.

It did make a difference over here, that's why I increased it to 32.

 Does anyone think one of those is a bad idea? :)

I only think data=journal is a bad idea.


Erik

-- 
+-- Erik Mouw -- www.harddisk-recovery.com -- +31 70 370 12 90 --
| Lab address: Delftechpark 26, 2628 XH, Delft, The Netherlands





Re: [OT] Need a little C programming help

2006-09-12 Thread Erik Mouw
On Mon, Sep 11, 2006 at 05:44:37PM -0500, Mike Reinehr wrote:
 My apology for taking up the group's time with an off-topic request for help. I 
 don't think that this has anything at all to do with 64-bit processing. What 
 I know about C programming wouldn't take me five minutes to tell, so I'm 
 easily stumped by compiler error messages.
 
 I have a very small C program that is running on our local AMD64 server, which 
 is running an up-to-date Debian Sarge. I've compiled this program many times 
 over the past year by simply typing `cc swrc.c` and then weeding out my many 
 C errors.
 
 This evening, I had to make a very minor change to the program, but when I 
 attempted to compile I received the following error output:
 
 [EMAIL PROTECTED]:~/tmp$ cc swrc.c

Try gcc -Wall instead; that gives you a lot more hints about what's
wrong.

 In file included from /usr/include/sys/types.h:219,
  from /usr/include/stdlib.h:433,
  from swrc.c:5:
 /usr/include/sys/sysmacros.h: In function `main':
 /usr/include/sys/sysmacros.h:43: error: nested function `gnu_dev_major' 
 declared `extern'
 /usr/include/sys/sysmacros.h:49: error: nested function `gnu_dev_minor' 
 declared `extern'
 /usr/include/sys/sysmacros.h:55: error: nested function `gnu_dev_makedev' 
 declared `extern'
 [EMAIL PROTECTED]:~/tmp$

Nested functions? Looks like you're including a header file in a
function instead of outside of it. If you know what you're doing it
shouldn't be a problem, but as a general rule don't even try that.

 I would really appreciate someone telling me what I'm doing wrong, or at 
 least 
 giving me a hint!

Always compile with -Wall. Adding -Wshadow sometimes also reveals
interesting bugs.


Erik

-- 
+-- Erik Mouw -- www.harddisk-recovery.com -- +31 70 370 12 90 --
| Lab address: Delftechpark 26, 2628 XH, Delft, The Netherlands





Re: Packagehandling

2006-09-12 Thread Hans-J. Ullrich
Am Dienstag, 12. September 2006 09:52 schrieb Goswin von Brederlow:
 [EMAIL PROTECTED] (Hans-J. Ullrich) writes:
  Dear maintainers and users,
 
  there is a little thing I think should be discussed.
 
  Whenever essential packages (especially the kernel) are released with a
  new version, there is no possibility to fall back to the old one if things
  crash.
 
  For example, some time ago there was a new kernel, but no new
  header files. The package was not released. So there was no possibility
  to build kernel modules. Falling back to the old version was not possible
  either, as the kernel was no longer in the repository.

 Kernel versions always get a new package name as do revisions with abi
 changes. For those the old one remains on your system. Only revisions
 that do not change the ABI replace the old kernel.

 As for there being no header files that must have been a bug. But you
 wouldn't have to rebuild any modules. If the name remained the same
 then the abi did too and the old modules still work.


Dear Goswin, 

yes, you are right, the old header files were not removed by 
aptitude. In fact, I was looking for a way to reinstall an old version of 
the kernel and the header files. I did not know about snapshot.debian.net 
until a few days ago, but now I do, and it will help me.  

  Now there is the problem again: I have got the new kernel-version
  2.6.17-2-amd64, which breaks my wireless. (module bcm43xx, see bugs).
 
  I cannot fall back, as the old kernel (version 2.6.17-1-amd64-k8) is no
  more in the repository.

  Why did you remove the old kernel on update? Didn't you watch what
  aptitude was doing? That is your fault.


Yes, it was. After I removed everything old, I discovered the error. Meanwhile 
I found out it was not the kernel module but the package udev (which was 
corrected one day later by the maintainers. Thank you for this!)

  So, please leave the old versions in the repository for some time, to give
  people the possibility to fall back when there are problems. In my case it is
  only wireless, but a worse case could be e.g. the network card or something
  else.

  The old package remains in the archive for 3 days if you go and look
  into the pool manually. Then there is testing and also
  snapshot.debian.net. Is that not enough?


That is enough! Although I think 3 days is a little bit short; maybe 7 days 
might be better (don't blame me for my opinion. :-)   )


  I see no problem in having two kernels installed.

 And most people do.

Yes, in the future I will too.


  Best regards !
 
  Hans

 MfG
 Goswin

Best regards, and thank you for answering this mail !

Hans





Re: [OT] Need a little C programming help

2006-09-12 Thread Mike Reinehr
Andreas  Erik,

Thanks very much for your help. The include files were, in fact, within the 
main block. I'd swear that they have been there for years, but I'll worry 
about that some other time. It's quite possible that I made a change six 
months or so ago and now just don't remember. Anyway, relocating them as you 
suggested completely solved the problem.

Thanks, also, for the compiling tips. I'll start experimenting with them.

Sincerely,

cmr

On Tuesday 12 September 2006 07:36, Erik Mouw wrote:
 On Mon, Sep 11, 2006 at 05:44:37PM -0500, Mike Reinehr wrote:
  My apology for taking up the group's time with an off-topic request for
  help. I don't think that this has anything at all to do with 64-bit
  processing. What I know about C programming wouldn't take me five minutes
  to tell, so I'm easily stumped by compiler error messages.
 
  I have a very small C program that is running on our local AMD64 server,
  which is running an up-to-date Debian Sarge. I've compiled this program
  many times over the past year by simply typing `cc swrc.c` and then
  weeding out my many C errors.
 
  This evening, I had to make a very minor change to the program, but when I
  attempted to compile I received the following error output:
 
  [EMAIL PROTECTED]:~/tmp$ cc swrc.c

 Try gcc -Wall instead, that gives you a lot more hints about what's
 wrong.

  In file included from /usr/include/sys/types.h:219,
   from /usr/include/stdlib.h:433,
   from swrc.c:5:
  /usr/include/sys/sysmacros.h: In function `main':
  /usr/include/sys/sysmacros.h:43: error: nested function `gnu_dev_major'
  declared `extern'
  /usr/include/sys/sysmacros.h:49: error: nested function `gnu_dev_minor'
  declared `extern'
  /usr/include/sys/sysmacros.h:55: error: nested function `gnu_dev_makedev'
  declared `extern'
  [EMAIL PROTECTED]:~/tmp$

 Nested functions? Looks like you're including a header file in a
 function instead of outside of it. If you know what you're doing it
 shouldn't be a problem, but as a general rule don't even try that.

  I would really appreciate someone telling me what I'm doing wrong, or at
  least giving me a hint!

 Always compile with -Wall. Adding -Wshadow sometimes also reveals
 interesting bugs.


 Erik

 --
 +-- Erik Mouw -- www.harddisk-recovery.com -- +31 70 370 12 90 --

 | Lab address: Delftechpark 26, 2628 XH, Delft, The Netherlands

-- 
Debian 'Sarge': Registered Linux User #241964

More laws, less justice. -- Marcus Tullius Cicero, ca. 42 BC






Installing Intel Fortran Compiler

2006-09-12 Thread João Marcelo
Hello Everybody,

I'm trying to install Intel Fortran Compiler 9.1.036 on my Debian AMD64 
Opteron box. Unfortunately, I can't use GNU Fortran to compile the application 
I want (http://www.cpmd.org/), because it needs the 'Cray Pointer' extension 
to Fortran 77 for dynamic memory management. According to 
http://www.theochem.ruhr-uni-bochum.de/~axel.kohlmeyer/cpmd-linux.html, 
it doesn't handle the above extension. So I have to use Intel's compiler, 
which is free of charge, but not open source :-( .

During the installation process, the following message appears:

The installation program was not able to detect the IA32 version of the 
following libraries installed:
libstdc++
libgcc
glibc
Without these libraries, the compiler will not function properly.
These libraries, if not installed, can be installed from
the OS discs after finishing the compiler installation.
Please refer to Release Notes for more information.

Can I have both versions of glibc (64 and 32) installed? Any package names?

Thanks for your answers.

[]'s

Re: Installing Intel Fortran Compiler

2006-09-12 Thread Lennart Sorensen
On Tue, Sep 12, 2006 at 05:00:20PM +, João Marcelo wrote:
 I'm trying to install Intel Fortran Compiler 9.1.036 on my Debian AMD64 
 Opteron box. Unfortunately, I can't use GNU Fortran to compile the application 
 I want (http://www.cpmd.org/), because it needs the 'Cray Pointer' extension 
 to Fortran 77 for dynamic memory management. According to 
 http://www.theochem.ruhr-uni-bochum.de/~axel.kohlmeyer/cpmd-linux.html,
 it doesn't handle the above extension. So I have to use Intel's compiler, 
 which is free of charge, but not open source :-( . 
 
 During the installation process, the following message appears:
 
 The installation program was not able to detect the IA32 version of the 
 following libraries installed   :
 libstdc++
 libgcc
 glibc
 
 Without these libraries, the compiler will not function properly.
 These libraries, if not installed, can be installed from
 the OS discs after finishing the compiler installation.
 Please refer to Release Notes for more information.
 
 Can I have both versions of glibc (64 and 32) installed? Any package names?
 
 Thanks for your answers.

I would have thought the ia32-libs package would take care of those.

--
Len Sorensen





Re: Installing Intel Fortran Compiler

2006-09-12 Thread Paul Brook
 I'm trying to install Intel Fortran Compiler 9.1.036 on my Debian AMD64
 Opteron box. Unfortunately, I can't use GNU Fortran to compile the
 application I want (http://www.cpmd.org/), because it needs the 'Cray
 Pointer' extension to Fortran 77 for dynamic memory management. According
 to http://www.theochem.ruhr-uni-bochum.de/~axel.kohlmeyer/cpmd-linux.html,
 it doesn't handle the above extension. 

That page is wrong. As mentioned in the main CPMD FAQ, gfortran can compile 
CPMD.

Paul





Re: updated howto

2006-09-12 Thread Francesco Pietra
Hi Roberto:
Thanks a lot. I was just re-installing amd64 etch RAID1, having changed disks 
(better quality, smaller size, different partition scheme, little to add to 
the base system).

I still found useful as raid howto:
http://www200.pair.com/mecham/raid/raid1-page3.htm (and page2)

However, what about the statement "In a RAID system it is a good idea to avoid 
kernel version upgrades (security upgrades should be performed of course)" in 
that howto (concerned with Sarge)? Is it a valid suggestion today for etch?

regards
francesco pietra

On Monday 11 September 2006 23:05, Roberto Pariset wrote:
 Roberto Pariset ha scritto:
  Hello,
  an updated version of the debian amd64 howto is available in my local
  alioth space at [1]; hopefully someone will benefit from this.
 
  Special thanks and gratitude to Ana for her kindness, ideas and support
  :)
 
  All the best,
  Roberto
 
 
  PS: If you want to reply, please don't forget to add me in CC because I
  am not subscribed to this list.
 
 
  [1] http://haydn.debian.org/~intero-guest/

 I finally granted permission to publish changes. An up-to-date version of
 the HOWTO will therefore be in the usual place [2], while the alternative
 version [1] will not be used any longer.
 Once again thanks to those who helped with comments, suggestions, ideas and
 such. Special thanks to Mr Leigh, for reviewing the schroot section, and
 Miss Guerrero, for supporting all along.

 All the best,
 Roberto


 [2]
 http://alioth.debian.org/docman/view.php/30192/21/debian-amd64-howto.html





Re: K8 Mainboards Linux compatibility list

2006-09-12 Thread Max A.

Please help yourself at
http://wiki.debian.org/DebianAMD64/Mainboards

Max

On 9/11/06, Bruno Kleinert [EMAIL PROTECTED] wrote:

hi,

i can report a gigabyte ga-k8ns pro mainboard as working perfectly.

mainboard:  gigabyte ga-k8ns pro
chipset:nforce3
ata:nforce
ata raid:   ite gigaraid
sata:   nforce, sata_sil
scsi:   - (none on board)
network:skge, sk98lin
sound:  intel8x0


cheers, bruno fuddl

--
Among elephants it's not considered cool nor in any good taste
to drain other elephants











Re: USB rescue/boot disk

2006-09-12 Thread Helge Hafting

Joost Kraaijeveld wrote:

Hi ,

I want a bootable USB stick that will boot any machine that allows me to
boot from USB: a Debian Live USB (and not CD). I have found a howto on
the internet (http://feraga.com/) but that one does not seem to work for
me.

Is it actually possible to create a USB rescue/boot disk that contains
a Debian Etch AMD64 or i386 based installation? Is there an image
available somewhere (as the Debian Live Project does not have such an
image (yet?))?
  

Just about any live/rescue CD/diskette with usb support should
do the trick, I think. Knoppix, for example?
Of course some USB sticks are smaller than a
full CD, in which case you go for one of the smaller rescue/live CDs.
DSL is 50MB, for example.

Helge Hafting





boinc-client

2006-09-12 Thread Sythos
Which projects support amd64 / x86_64-pc-linux?

I found only chess

-- 
Sythos - http://www.sythos.net

  ()  ASCII Ribbon Campaign - against html/rtf/vCard in mail
  /\- against M$ attachments





RAID help

2006-09-12 Thread Roberto Pariset

Francesco Pietra ha scritto:

I still found useful as raid howto:
http://www200.pair.com/mecham/raid/raid1-page3.htm (and page2)

However, what about the statement "In a RAID system it is a good idea to avoid 
kernel version upgrades (security upgrades should be performed of course)" in 
that howto (concerned with Sarge)? Is it a valid suggestion today for etch?




I am sorry, but I know nothing about RAID. Hopefully someone who does will 
help you out. Good luck!


All the best,
Roberto





Re: boinc-client

2006-09-12 Thread Manolo Díaz
Sythos wrote:
 Which project support amd64 / x86_64-pc-linux ?
 
 I found only chess
 

SETI. There is a Debian package in testing and unstable: boinc-app-seti.

Best Regards,
Manolo.





Re: USB rescue/boot disk

2006-09-12 Thread T
On Mon, 11 Sep 2006 17:29:04 +0200, Joost Kraaijeveld wrote:

 I want a bootable USB stick that will boot any machine that allows me to
 boot from USB: a Debian Live USB (and not CD). ...
 
 Is it actually possible to create a USB rescue/boot disk that contains
 a Debian Etch AMD64 or i386 based installation? Is there an image
 available somewhere?

When talking about rescue live CDs, IMHO nothing comes close to grml
(grml.org).

What's more exciting is that it also comes with a 55 MB alternative ISO,
which is ideal for a live USB. It is a pure Debian i386-based live
system that will boot any machine that allows booting from USB.

The root fs is read-only, so it should be USB friendly.

http://grml.org/faq/#grmlsmall

tong







Re: performance of AAC-RAID (ICP9087MA)

2006-09-12 Thread Andrew Sharp
On Tue, Sep 12, 2006 at 02:17:48PM +0200, Erik Mouw wrote:
 On Tue, Sep 12, 2006 at 11:50:46AM +0200, Raimund Jacob wrote:
  Erik Mouw wrote:
  
  Hello! And thanks for your suggestions.
  
   On Fri, Sep 08, 2006 at 05:08:52PM +0200, Raimund Jacob wrote:
   Checking out a largish CVS module is no fun. The data is retrieved via
   cvs pserver from the file server and written back via NFS into my home
   directory. This process is sometimes pretty quick and sometimes blocks
   in between as if the RAID controller has to think about the requests. I
   know this phenomenon only from a megaraid controller, which we
   eventuelly canned for a pure linux software raid (2 disks mirror). Also,
   compiling in the nfs-mounted home directory is too slow - even on a
   1000Mbit link.
  
   Try with a different IO scheduler. You probably have the anticipatory
   scheduler, you want to give the cfq scheduler a try.
   
  echo cfq > /sys/block/[device]/queue/scheduler
   
   For NFS, you also want to increase the number of daemons. Put the line
   
 RPCNFSDCOUNT=32
   
   in /etc/default/nfs-kernel-server .
  
  Thanks for these hints. In the meantime I was also reading up on the
  NFS-HOWTO's performance section. Playing around with
  rsize/wsize did not turn up much - it seems they don't really matter in my case.
 
 In my case it did matter: setting them to 4k (ie: CPU pagesize)
 increased throughput.
 
  My largish CVS-module checks out (cvs up -dP actually) in about 1s when
  I do it locally on the server machine. It also takes about 1s when I
  check it out on a remote machine but on a local disk. On the same remote
  machine via NFS it takes about 30s. So NFS is actually the problem here,
  not the ICP.
 
 One of the main problems with remote CVS is that it uses /tmp on the
 server. Make sure that is a fast and large disk as well, or tell CVS to
 use another (fast) directory as scratch space.
 
  Furthermore I observed this: I ran 'vmstat 1'. Checking out locally
  shows a 'bo' of about 1MB during the second it takes. During the
  checkout via NFS there is a sustained 2 to 3 MB 'bo' on the server. So
  my assumption is that lots of fs metadata gets updated during those 30s
  (the files don't actually change) and, due to the sync nature of the mount,
  everything is committed to disk pretty hard (ext3) - and that is what
  I'm waiting for.
 
 Mounting filesystems with -o noatime,nodiratime makes quite a
 difference.
 
 If you're using ext3 with lots of files in a single directory, make
 sure you're using htree directory indexing. To see if it is enabled:
 
   dumpe2fs /dev/whatever
 
 Look for the features line, if it has dir_index, it is enabled. If
 not, enable it with (can be done on a mounted filesystem):
 
   tune2fs -O dir_index /dev/whatever
 
 Now all new directories will be created with a directory index. If you
 want to enable it on all directories, unmount the filesystem and run
 e2fsck on it:
 
   e2fsck -f -y -D /dev/whatever
 
 Increasing the journal size can also make a difference, or try putting
 the journal on a separate device (quite invasive, make sure you have a
 backup). See tune2fs(8).

These are all good suggestions for speedups, especially this last one,
but I would think that none of this should really be necessary unless your
load is remarkably high, not just one user doing a cvs checkout. I would
strace the cvs checkout with timestamps and see where it is waiting.
It seems to me that this has more to do with some configuration snafu
than any of this stuff.

Why are you trying to configure it this way anyway?  Just use the
standard client/server configuration.  You'll probably be glad you did.
And it seems to work a lot faster that way anyway ~:^)

  Here is what I will try next (when people leave the office):
  
  - Mount the exported fs as data=journal - the NFS-HOWTO says this might
  improve things. I hope this works with remount since reboot is not an
  option.

I personally would NOT do this.  There is a good reason why none of the
top performing journaling file systems journal data by default.

 I don't think it makes a difference, I'd rather say it makes things
 worse cause it forces all *data* (and not only metadata) through the
 journal.

Eggxacly.

a





Re: chroot and ia32libs combined

2006-09-12 Thread Jaime Ochoa Malagón

On 9/7/06, Goswin von Brederlow [EMAIL PROTECTED] wrote:

Seb [EMAIL PROTECTED] writes:

 Another question that nobody seems to mention in the various how-tos is
 what happens if you have say both the same 64-bit app in the main system
 and 32-bit app in the chroot?  Because /home is mounted in both systems,
 each version will be messing with each other's config stuff, won't it?
 Say things like ~/.mozilla would change and probably get messed up by
 changes made by each version of firefox.  How is that handled?


I used to run both just to test things; certainly my first option is 32 bits...
Both versions could use the same configuration because it is only text...




 Cheers,

 --
 Seb

It isn't.

Even worse is when the 32bit and 64bit ~/.application files are
incompatible. One should consider that a bug, but it has happened.


zinf is an example.




My suggestion is: don't do it. If you need the 32bit version for
something, then use it for everything. The benefit of 64bit is generally
negligible anyway. No point in having, say, both a 64bit and a 32bit mozilla.

MfG
Goswin







--
To deceive oneself out of love is the most terrible deception;
it is an eternal loss for which there is no compensation
either in time or in eternity.

Kierkegaard

Jaime Ochoa Malagón
Integrated Technology
Tel: (55) 52 54 26 10