Re: Filezilla: GnuTLS error when using FTPES

2012-01-17 Thread Felip Moll
I remember that not long ago there was an incompatibility between ProFTPD
and FileZilla, and I recall some other problems between these two programs.

For example:
http://forum.filezilla-project.org/viewtopic.php?f=2&t=23101

I suggest you search Google for your specific problem to confirm whether the
cause is really your computer, because it may not be. Does it work with gFTP
or another FTP client?
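
As a quick check that is independent of any FTP client, you could also test
whether the TLS handshake itself succeeds (a minimal sketch; the host name is
a placeholder for your server):

# openssl opens the control connection, sends AUTH TLS and tries to
# negotiate TLS; a certificate dump means the handshake itself works.
openssl s_client -connect ftp.example.org:21 -starttls ftp

If that negotiates fine but FileZilla still fails with GnuTLS error -50, the
problem is more likely in the client build than in the server or the network.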

best regards
2012/1/17 Ray Van Dolson ra...@bludgeon.org

 On Tue, Jan 17, 2012 at 04:27:52PM +0100, palmerlwatson wrote:
  When I'm trying to log in to a server via FTPS with Filezilla I get
  these FTP messages from the server:
 
  Response:   220-This is a private system - No anonymous login
  Response:   220 You will be disconnected after 60 minutes of inactivity.
  Command:AUTH TLS
  Response:   234 AUTH TLS OK.
  Status: Initializing TLS...
  Error:  GnuTLS error -50: The request is invalid.
  Error:  Failed to initialize TLS.
  Error:  Could not connect to server
 
  It worked great before on Fedora 14/FileZilla. But now I'm using
  Scientific Linux with FileZilla (I reinstalled my PC from Fedora to
  Scientific Linux), and it gives this error. What am I missing?
 
  I installed Scientific Linux as a Normal Desktop from the 64bit DVD:
 
  [user@pc ~]$ lsb_release -a
  LSB Version:
 
 :core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
  Distributor ID: Scientific
  Description:    Scientific Linux release 6.1 (Carbon)
  Release:        6.1
  Codename:       Carbon
  [g@a ~]$ rpm -qa | egrep -i "filezilla|gnutls"
  gnutls-2.8.5-4.el6.x86_64
  [user@pc ~]$
 
 
  I downloaded Filezilla from here
 (FileZilla_3.5.3_x86_64-linux-gnu.tar.bz2):
 
  http://filezilla-project.org/download.php?type=client
 
  because I didn't find it in the repositories.
 
  Does anybody know why I get this response? I mean, what is the
  solution to make it work? (Again: the connection worked from Fedora 14 on
  the same day.)
 
  Thank you!

 Maybe you have some sort of smart firewall in the middle which doesn't
 recognize the encrypted traffic as part of an FTP session?

 (Or perhaps such a firewall exists on the remote side).

 Ray



Re: Move a SL6 server from md software raid 5 to hardware raid 5

2011-12-20 Thread Felip Moll
Thank you for your answers!

Regarding the backups, I have an external backup system with Bacula + tapes
and another over NFS, so that should not be a problem. The issue is that I
want to do this whole process in a short period of time to minimize the
downtime.

Rest assured that your e-mails are useful to me. Thank you.

2011/12/20 Nico Kadel-Garcia nka...@gmail.com

 On Tue, Dec 20, 2011 at 5:52 AM, Jason Bronner jason.bron...@gmail.com
 wrote:
  Felip.
 
  always always always: back up the array. create the new array. move the
 old
  array to the new array. destroy the old array.

 Amen. Also, in making the backup, consider using star if you use
 SELinux. rsync and normal tar do not preserve SELinux attributes.
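
 A minimal sketch of such a backup (the -xattr flag and the exustar header
 format are how I remember star's options; check star(1) before relying on
 them):

   # Create an archive that keeps extended attributes, including the
   # SELinux security context:
   star -c -xattr -H=exustar -f=/backup/rootfs.star /etc /home /var

   # Restore it the same way:
   star -x -xattr -f=/backup/rootfs.star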

 It is a good time to consider your backup policy. RAID is *not*
 backup, and the white paper from Google at

 http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/disk_failures.pdf

 is worth a read on that point.



Re: Move a SL6 server from md software raid 5 to hardware raid 5

2011-12-19 Thread Felip Moll
Doing it this way seems to be a high-risk operation.

Furthermore, I don't want to do this because then I would have two RAIDs: a
software RAID (md) on top of a hardware one. My idea is to copy the operating
system directories manually and then adjust the configuration; I think that
is a safer process.

Thanks for the answer jdow ;)


2011/12/20 jdow j...@earthlink.net

 First take a complete backup of the md raid.

 Then, by the laws of Innate Perversity of Inanimate Objects, you'll be able
 to move the disks and have them just work. Your data is protected. (If you
 had no backup, IPIO would, of course, lead to the transition failing
 expensively.)

 Even if IPIO does not work you restore from the complete backup to the same
 disks they were on after the hardware RAID assembles itself. (Despite the
 numerous times IPIO seems to work, I still figure it's a silly
 superstition.
 It does lead to a correct degree of paranoia, though.)

 {^_^}


 On 2011/12/19 09:18, Felip Moll wrote:

 Well, I will rephrase my question so as not to scare off potential answerers:

 How do I move an SL6.0 system with md RAID (software RAID) to another server
 without keeping the software RAID?

 Thanks!



 2011/12/16 Felip Moll lip...@gmail.com


Hello all!

 Recently I installed and configured a Scientific Linux system to run as a
 high-performance computing cluster with 15 slave nodes and one master. I did
 this while an older system with Red Hat 5.0 was still running, in order to
 avoid interrupting users' computations. All went well: I migrated node by
 node and now I have a flawlessly working cluster with SL6!

 Well, the fact is that while migrating I used node1 to install SL6 while
 node0 was hosting the old master operating system. Node1 has less RAM and no
 RAID capabilities, so I configured a software RAID 5 during installation,
 using Linux md (which is the default for a normal installation when you
 select RAID). Node0 has a hardware RAID 5 controller.

 Now I want to move the new master, node1, onto node0. I thought about this
 and I would have to shut down node1 and node0, then with a LiveCD partition
 the hard disk of node0 and copy the contents of node1's disk onto it, then
 install GRUB.

 All right, but what do you think I should take into consideration regarding
 RAID and md? I will have to modify /etc/fstab and also delete
 /etc/mdadm.conf so md doesn't come up. Anything more?

Thank you very much!





Move a SL6 server from md software raid 5 to hardware raid 5

2011-12-16 Thread Felip Moll
Hello all!

Recently I installed and configured a Scientific Linux system to run as a
high-performance computing cluster with 15 slave nodes and one master. I did
this while an older system with Red Hat 5.0 was still running, in order to
avoid interrupting users' computations. All went well: I migrated node by
node and now I have a flawlessly working cluster with SL6!

Well, the fact is that while migrating I used node1 to install SL6 while
node0 was hosting the old master operating system. Node1 has less RAM and no
RAID capabilities, so I configured a software RAID 5 during installation,
using Linux md (which is the default for a normal installation when you
select RAID). Node0 has a hardware RAID 5 controller.

Now I want to move the new master, node1, onto node0. I thought about this
and I would have to shut down node1 and node0, then with a LiveCD partition
the hard disk of node0 and copy the contents of node1's disk onto it, then
install GRUB.

All right, but what do you think I should take into consideration regarding
RAID and md? I will have to modify /etc/fstab and also delete /etc/mdadm.conf
so md doesn't come up. Anything more?

Thank you very much!
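
A rough sketch of the copy and the post-copy cleanup, assuming the usual EL6
layout (device names, mount points and paths are illustrative only):

# From the LiveCD, with node0's new filesystems mounted under /mnt/target
# and node1's old root mounted read-only under /mnt/source:
rsync -aAXH /mnt/source/ /mnt/target/

# Point /etc/fstab at the new (non-md) devices (e.g. /dev/sda1 instead of
# /dev/md0), or better, at filesystem UUIDs or labels.
vi /mnt/target/etc/fstab

# Remove the md configuration so nothing tries to assemble the old array.
rm -f /mnt/target/etc/mdadm.conf

# Rebuild the initramfs and reinstall GRUB from inside the copied system so
# early boot stops expecting md devices. If the LiveCD kernel differs from
# the installed one, pass the installed kernel version to dracut explicitly.
mount --bind /dev  /mnt/target/dev
mount --bind /proc /mnt/target/proc
mount --bind /sys  /mnt/target/sys
chroot /mnt/target /bin/bash -c 'dracut -f && grub-install /dev/sda'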


Re: Memory leak in Emacs 23.1 - SL.6.1

2011-10-25 Thread Felip Moll
Thanks to all!

I didn't search the bug database very well. Sorry.

Jean-Paul, I took a look at your repos but I didn't find any emacs package.
I can only see:

   - emacs-doxymacs (http://ftp.lip6.fr/pub/linux/distributions/slsoc/soc/SRPMS/repoview/emacs-doxymacs.html) - Doxygen add-on for Emacs/XEmacs
   - emacs-w3 (http://ftp.lip6.fr/pub/linux/distributions/slsoc/soc/SRPMS/repoview/emacs-w3.html) - W3 package for Emacs
   - emacs-sdcc (http://ftp.lip6.fr/pub/linux/distributions/slsoc/soc/x86_64/repoview/emacs-sdcc.html) - Emacs extensions for SDCC


But no emacs or emacs-common packages.

Best regards.
Felip

2011/10/25 Stephan Wiesand stephan.wies...@desy.de

 Hello Felip,

 On Oct 24, 2011, at 21:48, Felip Moll wrote:

  Recently I installed an SL6.1 cluster with 16 nodes, the slurm resource
 manager, etc.

  I use Emacs to edit my files, as do some of the researchers at my
 research center.

  One day I noticed that some daemons hung. I discovered that the kernel
 was killing processes because the system had run out of memory. I couldn't
 reproduce the error afterwards, and it seemed to occur at very random
 times.

  Since that day, I have limited the user stack to 15 GB with limits.conf (my
 server has 16 GB and normally uses no more than 1 GB).

  Today, while doing some tasks, I could see what is causing the
 problem. It's Emacs! There seems to be a user who runs emacs and then logs
 out of his session without exiting it. Access to the server is over ssh.

  I will try to install the latest 23.3 version from the .tar.gz package, but
 I would rather use the yum package if possible, in order to keep the
 installation as clean as possible.

  Should I report this somewhere?

 it seems someone else already has:
 https://bugzilla.redhat.com/show_bug.cgi?id=732157

 There's a proposed patch attached to that BZ.

 Regards,
Stephan


 --
 Stephan Wiesand
 DESY -DV-
 Platanenallee 6
 15738 Zeuthen, Germany



Memory leak in Emacs 23.1 - SL.6.1

2011-10-24 Thread Felip Moll
Hello all!,

Recently I installed an SL6.1 cluster with 16 nodes, the slurm resource
manager, etc.

I use Emacs to edit my files, as do some of the researchers at my
research center.

One day I noticed that some daemons hung. I discovered that the kernel
was killing processes because the system had run out of memory. I couldn't
reproduce the error afterwards, and it seemed to occur at very random
times.

Since that day, I have limited the user stack to 15 GB with limits.conf (my
server has 16 GB and normally uses no more than 1 GB).
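
For reference, a minimal sketch of what such a limit looks like in
/etc/security/limits.conf (values are illustrative; sizes are in KB; "stack"
only caps the stack, while "as" caps the whole address space, which is what
actually stops a runaway process):

# 15 GB expressed in KB
*    hard    stack    15728640
*    hard    as       15728640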

Today, while doing some tasks, I could see what is causing the problem.
It's Emacs! There seems to be a user who runs emacs and then logs out of
his session without exiting it. Access to the server is over ssh.

I will try to install the latest 23.3 version from the .tar.gz package, but I
would rather use the yum package if possible, in order to keep the
installation as clean as possible.


Should I report this somewhere?

Thank you.
Felip.

Here is the info that I gathered:


[root@acuario ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         16080      15904        175          0         88       1081
-/+ buffers/cache:      14734       1345
Swap:        16381        353      16028

[root@acuario ~]# uname -a
Linux acuario 2.6.32-131.17.1.el6.x86_64 #1 SMP Wed Oct 5 17:19:54 CDT 2011
x86_64 x86_64 x86_64 GNU/Linux

[root@acuario ~]# who
root     pts/1        2011-10-24 21:25 (192.168.xx.xx)

[root@acuario ~]# lastlog
...
rrossi   pts/3    halley.rmeecimne   Mon Oct 24 09:30:54 +0200 2011
...

[root@acuario rrossi]# ps aux | grep rrossi
rrossi   19902 13.1 85.9 14679980 14151208 ?  R    Oct21 605:18 emacs -nw
acuari_configure.sh
root     21157  0.0  0.0 103228   856 pts/1   S+   21:33   0:00 grep rrossi

[root@acuario rrossi]# du -chs ./kratos/cmake_build/acuari_configure.sh
4.0K    ./kratos/cmake_build/acuari_configure.sh   (it is a normal 55-line
flat text file; I checked it with my root emacs instance and all was OK)
4.0K    total

[root@acuario ~]# emacs --version
GNU Emacs 23.1.1
Copyright (C) 2009 Free Software Foundation, Inc.
GNU Emacs comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of Emacs
under the terms of the GNU General Public License.
For more information about these matters, see the file named COPYING.

[root@acuario ~]# lsof | grep rrossi  (the same as grep emacs)
COMMAND     PID   USER   FD   TYPE DEVICE  SIZE/OFF     NODE NAME
emacs     19902 rrossi  cwd    DIR   8,33      4096  83886081 /home/kratos_common
emacs     19902 rrossi  rtd    DIR    9,0      4096         2 /
emacs     19902 rrossi  txt    REG    9,0  11143104   5779966 /usr/bin/emacs-23.1
emacs     19902 rrossi  mem    REG    9,0    228984   5768561 /usr/lib64/librsvg-2.so.2.26.0
emacs     19902 rrossi  mem    REG    9,0    155696   2752626 /lib64/ld-2.12.so
emacs     19902 rrossi  mem    REG    9,0   1904312   2752790 /lib64/libc-2.12.so
emacs     19902 rrossi  mem    REG    9,0    141576   2752807 /lib64/libpthread-2.12.so
emacs     19902 rrossi  mem    REG    9,0    598816   2752609 /lib64/libm-2.12.so
emacs     19902 rrossi  mem    REG    9,0     47064   2752595 /lib64/librt-2.12.so
emacs     19902 rrossi  mem    REG    9,0    941440   2752780 /lib64/libglib-2.0.so.0.2200.5
emacs     19902 rrossi  mem    REG    9,0     88240   2752794 /lib64/libz.so.1.2.3
emacs     19902 rrossi  mem    REG    9,0    283584   2752597 /lib64/libgobject-2.0.so.0.2200.5
emacs     19902 rrossi  mem    REG    9,0    109808   2752599 /lib64/libresolv-2.12.so
emacs     19902 rrossi  mem    REG    9,0    268200   2752880 /lib64/libdbus-1.so.3.4.0
emacs     19902 rrossi  mem    REG    9,0    411200   5773480 /usr/lib64/libtiff.so.3.9.4
emacs     19902 rrossi  mem    REG    9,0    112856   5770325 /usr/lib64/libxcb.so.1.1.0
emacs     19902 rrossi  mem    REG    9,0     13168   5772993 /usr/lib64/libXau.so.6.0.0
emacs     19902 rrossi  mem    REG    9,0    166840   2752803 /lib64/libexpat.so.1.5.2
emacs     19902 rrossi  mem    REG    9,0    644752   5770324 /usr/lib64/libfreetype.so.6.3.22
emacs     19902 rrossi  mem    REG    9,0    159728   5771050 /usr/lib64/libpng12.so.0.46.0
emacs     19902 rrossi  mem    REG    9,0    223040   5769852 /usr/lib64/libfontconfig.so.1.4.4
emacs     19902 rrossi  mem    REG    9,0    132464   5773317 /usr/lib64/libatk-1.0.so.0.2809.1
emacs     19902 rrossi  mem    REG    9,0    400528   5769962 /usr/lib64/libpixman-1.so.0.18.4
emacs     19902 rrossi  mem    REG    9,0     78848   5779139 /usr/lib64/libotf.so.0.0.0
emacs

Re: PXE boot is an infinite reinstall

2011-10-18 Thread Felip Moll
If you have Dell servers, there is an option you can change through the
iDRAC interface called Boot Once. You check Boot Once with PXE and then
reboot the machine. It will boot from PXE only once, so on the next reboot
it will go through the HD.

Regards
2011/10/18 Yannick Perret yper...@in2p3.fr

 Steven Timm wrote:

  The trick that Rocks uses is to have a boot order of (hard disk, pxe)
 and then when you want to reinstall, change two bytes in the
 boot sector to make the hard disk unbootable and it will fall through
 to a PXE boot only at that time.

 What worker node installs at Fermilab do is to have a DHCP server that
 only answers the PXE request when you want to reinstall, and no other
 time, so the PXE request just times out and then you boot off the hard
 drive.

  Here (at CC-IN2P3) we do mostly the same: boot sequence HDD;PXE.

 Destroying the partition table works. We also use IPMI. Using IPMI commands
 (if your nodes have an IPMI-compatible card) you can use chassis bootdev pxe,
 which tells the node to boot from PXE only the next time.
 So reinstalling a node (with a configured IPMI) consists of chassis
 bootdev pxe + chassis power [cycle|on].
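
 A minimal sketch with ipmitool (host name and credentials are placeholders):

   # Tell the BMC to PXE-boot on the next power-on only, then power-cycle:
   ipmitool -I lanplus -H node0-bmc.example.org -U admin -P secret chassis bootdev pxe
   ipmitool -I lanplus -H node0-bmc.example.org -U admin -P secret chassis power cycle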

 Regards,
 --

 Y.

 Steve

 On Mon, 17 Oct 2011, ~Stack~ wrote:

  Hello All,

 I ran into another issue with my PXE build out. I searched the net and
 found many people with the same issue, but there was either no response
 or their solution would not work for my needs (requiring access to
 software I don't have). What I am after with this is a completely
 unmanaged automated install of a client on boot.

 I am using dnsmasq as my DNS, DHCP, and TFTP server.

 I have a server and a client. The client boots off the network card with
 PXE. It asks for and receives an IP from the DHCP server and proceeds
 with pulling the TFTP information. The TFTP server passes it a
 pxelinux.0 file along with the default configuration. The
 configuration has a kickstart file and the client continues with a
 flawless install of SL6.1. After the install, the client reboots...and
 the whole process starts over and over and over again. I know why it
 does this (the default boot option is to install), but I can't figure
 out how to control it.

 What I would like is a process where I boot the clients from an off
 state, have them do a fresh install, and then reboot into the new
 install. Nothing is stored on these nodes and a fresh install goes
 rather quickly so I don't mind this option.

 At first I tried scripting an option that just toggled the tftp default
 menu but it wasn't working very smoothly as not all my hosts boot at
 equal speeds.
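
 One common way around that timing problem (a sketch; paths, the kickstart
 URL and the MAC address are illustrative) is to give each host its own
 pxelinux.cfg file and have the kickstart flip it to local boot at the end of
 the install, so the global default never has to change:

   # /tftpboot/pxelinux.cfg/default  (boots the installer)
   DEFAULT install
   LABEL install
     KERNEL vmlinuz
     APPEND initrd=initrd.img ks=http://server/ks.cfg

   # Written from the kickstart %post for the host that just installed;
   # pxelinux looks for 01-<mac> before falling back to "default".
   # /tftpboot/pxelinux.cfg/01-aa-bb-cc-dd-ee-ff
   DEFAULT local
   LABEL local
     LOCALBOOT 0

 The %post step needs some way to write into the TFTP directory, for example
 an NFS export or a small CGI script on the install server.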

 I attempted chainloading in the tftp but just made a mess and I didn't
 get any different results. Most likely due to me not understanding it
 properly. I am open to pointers.

 I thought I could do it inside of DNSMasq, but I couldn't find a good
 example and my attempts didn't work.

 I looked online and found projects like systemimager.org but I am
 already doing most of what they provide. I attempted to reverse their
 perl scripts but that is a bigger project than I initially thought. What
 I did like about this project was the ability to tell it to allow a
 single host or a group of hosts to reinstall or to boot off the hard
 disk.

 I have gotten some great pointers from this list so far and I am really
 hoping someone might have another for me. Any ideas?

 Thanks!

 ~Stack~





Ypserv mknetid BUG

2011-10-05 Thread Felip Moll
Dear SL developers,

I have recently installed the package ypserv.x86_64, version 2.19-18.el6,
from the @sl/6.0 repo. The same version is in the SL 6.1 repo.

When executing the command /usr/lib64/yp/mknetid, a segmentation fault
occurs.

Here is some info:
[root@acuari ~]# /usr/lib64/yp/mknetid
Segmentation fault

[root@acuari ~]# strace /usr/lib64/yp/mknetid
execve(/usr/lib64/yp/mknetid, [/usr/lib64/yp/mknetid], [/* 30 vars */])
= 0
brk(0)  = 0x2564000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
0x7f1d03292000
access(/etc/ld.so.preload, R_OK)  = -1 ENOENT (No such file or
directory)
open(/etc/ld.so.cache, O_RDONLY)  = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=71138, ...}) = 0
mmap(NULL, 71138, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f1d0328
close(3)= 0
open(/lib64/libnsl.so.1, O_RDONLY)= 3
read(3,
\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0\0\1\0\0\0\360?\340\3607\0\0\0...,
832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=116136, ...}) = 0
mmap(0x37f0e0, 2198192, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE,
3, 0) = 0x37f0e0
mprotect(0x37f0e16000, 2093056, PROT_NONE) = 0
mmap(0x37f1015000, 8192, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x15000) = 0x37f1015000
mmap(0x37f1017000, 6832, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x37f1017000
close(3)= 0
open(/lib64/libc.so.6, O_RDONLY)  = 3
read(3,
\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0\0\1\0\0\0\260\355\241\3437\0\0\0...,
832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1904312, ...}) = 0
mmap(0x37e3a0, 3729576, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE,
3, 0) = 0x37e3a0
mprotect(0x37e3b86000, 2093056, PROT_NONE) = 0
mmap(0x37e3d85000, 20480, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x185000) = 0x37e3d85000
mmap(0x37e3d8a000, 18600, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x37e3d8a000
close(3)= 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
0x7f1d0327f000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
0x7f1d0327e000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
0x7f1d0327d000
arch_prctl(ARCH_SET_FS, 0x7f1d0327e700) = 0
mprotect(0x37f1015000, 4096, PROT_READ) = 0
mprotect(0x37e3d85000, 16384, PROT_READ) = 0
mprotect(0x37e341f000, 4096, PROT_READ) = 0
munmap(0x7f1d0328, 71138)   = 0
uname({sys=Linux, node=acuari, ...}) = 0
brk(0)  = 0x2564000
brk(0x2585000)  = 0x2585000
open(/etc/passwd, O_RDONLY)   = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=3739, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
0x7f1d03291000
read(3, root:x:0:0:root:/root:/bin/bash\n..., 4096) = 3739
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++
Segmentation fault

dmesg output:
mknetid[22013]: segfault at 0 ip 0037e3a371e2 sp 7fff19e13c80 error
4 in libc-2.12.so[37e3a0+186000]


It's an ugly problem, and it seems like a simple out-of-bounds read...

Is it possible to solve the problem?


Thank you,

great work with SL 6.1


Re: Ypserv mknetid BUG

2011-10-05 Thread Felip Moll
Many thanks, Jean-Paul.

Following your indications I checked the passwd file. All of the entries
had six ':' characters, but at the end of the file there was a blank line!

I deleted the blank line and the problem disappeared.
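
A quick way to spot such malformed lines (a sketch; a valid passwd entry has
seven colon-separated fields):

awk -F: 'NF != 7 {printf "line %d: %s\n", NR, $0}' /etc/passwd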

It's good to know this, but the ypserv developers should handle these cases:
instead of generating a SIGSEGV, they should warn the user with an error.

I will check newer versions of ypserv and report the bug to the ypserv
developers if it's still present.

Problem SOLVED.

Thank you.
Felip Moll


2011/10/5 Jean-Paul Chaput jean-paul.cha...@lip6.fr


 Hello Mr Moll,


 mknetid dumps core when it reads /etc/passwd.

 I've noticed that the passwd file parser is very sensitive to
 malformed lines, especially those with the wrong number of fields
 (some ':' are missing; there must be exactly six of them).

 If you work in compat mode (/etc/nsswitch.conf), use
 (in /etc/passwd):

 +::

 to include the yp entries, and *not*:

 +

 But it may also occur on any normal line...


 Regards,


 On Wed, 2011-10-05 at 10:50 +0200, Felip Moll wrote:
  Dear SL developers,
 
  I have recently installed the package ypserv.x86_64, version
  2.19-18.el6, from the @sl/6.0 repo. The same version is in the SL 6.1
  repo.
 
  When executing the command /usr/lib64/yp/mknetid, a segmentation
  fault occurs.

  Here is some info:
  [root@acuari ~]# /usr/lib64/yp/mknetid
  Segmentation fault
 
  [root@acuari ~]# strace /usr/lib64/yp/mknetid
  execve(/usr/lib64/yp/mknetid, [/usr/lib64/yp/mknetid], [/* 30 vars
  */]) = 0
  brk(0)  = 0x2564000
  mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
  0) = 0x7f1d03292000
  access(/etc/ld.so.preload, R_OK)  = -1 ENOENT (No such file or
  directory)
  open(/etc/ld.so.cache, O_RDONLY)  = 3
  fstat(3, {st_mode=S_IFREG|0644, st_size=71138, ...}) = 0
  mmap(NULL, 71138, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f1d0328
  close(3)= 0
  open(/lib64/libnsl.so.1, O_RDONLY)= 3
  read(3, \177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0\0\1\0\0\0\360?\340\3607
  \0\0\0..., 832) = 832
  fstat(3, {st_mode=S_IFREG|0755, st_size=116136, ...}) = 0
  mmap(0x37f0e0, 2198192, PROT_READ|PROT_EXEC, MAP_PRIVATE|
  MAP_DENYWRITE, 3, 0) = 0x37f0e0
  mprotect(0x37f0e16000, 2093056, PROT_NONE) = 0
  mmap(0x37f1015000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|
  MAP_DENYWRITE, 3, 0x15000) = 0x37f1015000
  mmap(0x37f1017000, 6832, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|
  MAP_ANONYMOUS, -1, 0) = 0x37f1017000
  close(3)= 0
  open(/lib64/libc.so.6, O_RDONLY)  = 3
  read(3, \177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0\0\1\0\0\0\260\355\241
  \3437\0\0\0..., 832) = 832
  fstat(3, {st_mode=S_IFREG|0755, st_size=1904312, ...}) = 0
  mmap(0x37e3a0, 3729576, PROT_READ|PROT_EXEC, MAP_PRIVATE|
  MAP_DENYWRITE, 3, 0) = 0x37e3a0
  mprotect(0x37e3b86000, 2093056, PROT_NONE) = 0
  mmap(0x37e3d85000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|
  MAP_DENYWRITE, 3, 0x185000) = 0x37e3d85000
  mmap(0x37e3d8a000, 18600, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|
  MAP_ANONYMOUS, -1, 0) = 0x37e3d8a000
  close(3)= 0
  mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
  0) = 0x7f1d0327f000
  mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
  0) = 0x7f1d0327e000
  mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
  0) = 0x7f1d0327d000
  arch_prctl(ARCH_SET_FS, 0x7f1d0327e700) = 0
  mprotect(0x37f1015000, 4096, PROT_READ) = 0
  mprotect(0x37e3d85000, 16384, PROT_READ) = 0
  mprotect(0x37e341f000, 4096, PROT_READ) = 0
  munmap(0x7f1d0328, 71138)   = 0
  uname({sys=Linux, node=acuari, ...}) = 0
  brk(0)  = 0x2564000
  brk(0x2585000)  = 0x2585000
  open(/etc/passwd, O_RDONLY)   = 3
  fstat(3, {st_mode=S_IFREG|0644, st_size=3739, ...}) = 0
  mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
  0) = 0x7f1d03291000
  read(3, root:x:0:0:root:/root:/bin/bash\n..., 4096) = 3739
  --- SIGSEGV (Segmentation fault) @ 0 (0) ---
  +++ killed by SIGSEGV +++
  Segmentation fault
 
  dmesg output:
  mknetid[22013]: segfault at 0 ip 0037e3a371e2 sp 7fff19e13c80
  error 4 in libc-2.12.so[37e3a0+186000]
 
 
  It's an ugly problem, and it seems like a simple out-of-bounds read...
 
  Is it possible to solve the problem?
 
 
  Thank you,
 
  great work with SL 6.1

 --
  .-. J e a n - P a u l   C h a p u t  /  Administrateur Systeme
  /v\ jean-paul.cha...@lip6.fr
/(___)\   work: (33) 01.44.27.53.99
 ^^ ^^cell:  06.66.25.35.55   home: 01.47.46.01.31

U P M C   Universite Pierre & Marie Curie
L I P 6   Laboratoire d'Informatique de Paris VI
S o C System On Chip