Re: Fw: [LINUX-390] dasd_mod probe behavior change on SLES11?
Re: what is the recommendation when we do partitions during installation
>>> On 11/26/2009 at 5:35 PM, Rodger Donaldson wrote:
-snip-
> One of the nice things about the Z, of course, is that I don't care how
> hard it is to repair physical servers; if my root filesystem is broken I
> can simply link the disks to a working system and proceed from there.

Except repairing a broken root LV isn't all that easy, even on System z, when you don't have access to the contents of /etc/lvm/.

Mark Post

--
For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: what is the recommendation when we do partitions during installation
On Fri, November 27, 2009 07:33, Mark Post wrote:
> >>> On 11/26/2009 at 1:30 PM, Scott Rohling wrote:
> -snip-
> > Anyway - I still say humbug. There's nothing about a non-LVM / that
> > will protect you from people...
>
> Let me know if you still think the same way after supporting ~800 physical
> servers for several years.

One of the nice things about the Z, of course, is that I don't care how hard it is to repair physical servers; if my root filesystem is broken I can simply link the disks to a working system and proceed from there.
Re: what is the recommendation when we do partitions during installation
This is a good start, but in my experience / has to be on LVM as well; the only partition you have to leave on plain DASD is /boot. So I recommend something like:

/dev/dasda1          /boot  100M
/dev/systemvg/rootlv /

This lets you increase the root partition in case one of the other directories that you didn't partition separately grows large. ;) I also recommend giving explicit names to the Volume Groups and Logical Volumes, like this:

/dev/systemvg/rootlv /
/dev/systemvg/varlv  /var
/dev/systemvg/optlv  /opt
...
/dev/datavg/db2lv    /db2

I think you get the idea. Regards...

Miguel Angel Barajas Hernandez
Premium Support Engineer, CLA, CLP, CCTS, CNSP, LTS, CLE
mabara...@novell.com
t +52 55 52842700 f +52 55 52842799 m +52 55 39884315
Novell de México
SUSE* Linux Enterprise 11 Your Linux is ready http://www.novell.com/linux

>>> And Get Involved 11/26/09 3:11 PM >>>
Thanks, Mark. I know you will answer my question, even on your Thanksgiving day! I need more of your education on the file system; as you know, I am a newbie in Linux.

>Based on a number of years of experience with midrange systems, adjusted slightly for the mainframe, I prefer this style of setup:
># df -h
>Filesystem            Size  Used Avail Use% Mounted on
>/dev/dasda1           388M  119M  250M  33% /
>/dev/vg1/home          97M  4.2M   88M   5% /home
>/dev/vg1/opt           74M   21M   50M  30% /opt
>/dev/vg1/srv          1.2G  1.1G  100M  92% /srv
>/dev/vg1/tmp          291M   17M  260M   6% /tmp
>/dev/vg1/usr          1.2G  915M  183M  84% /usr
>/dev/vg1/var          245M   69M  164M  30% /var

I know all the other folders are under the / folder, so this setup means that except for /home /opt /srv /tmp /usr /var, the other Linux folders reside on dasda1, and the size of dasda1 is fixed. Does that mean the rest of the folders will not grow dramatically in the future? And if /root and /boot are the key folders for recovering the system when something goes wrong, can we just put both of them (plus perhaps /etc) on dasda1 and leave / on the LVM?
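For reference, creating the explicitly named Volume Groups and Logical Volumes described above might look like the following root session (device numbers, names, and sizes here are purely illustrative, not taken from any post in this thread):

```
# pvcreate /dev/dasdb1 /dev/dasdc1
# vgcreate systemvg /dev/dasdb1
# vgcreate datavg /dev/dasdc1
# lvcreate -L 1G   -n rootlv systemvg
# lvcreate -L 500M -n varlv  systemvg
# lvcreate -L 500M -n optlv  systemvg
# lvcreate -L 4G   -n db2lv  datavg
# mkfs.ext3 /dev/systemvg/rootlv
```

Descriptive VG/LV names like these pay off later: `/dev/systemvg/rootlv` in an error message or a rescue session tells you immediately what you are looking at, where `/dev/vg0/lv3` does not.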
Sunny :)

From: Mark Post
To: LINUX-390@VM.MARIST.EDU
Date: 11/26/2009 10:39 AM
Subject: Re: what is the recommendation when we do partitions during installation
Sent by: Linux on 390 Port

>>> On 11/26/2009 at 11:58 AM, And Get Involved wrote:
> We use sles10 on z/VM. And also use LVM.
>
> Where should we put /boot and /?
> How large for the physical volume? Should we put / into a physical partition?

Based on a number of years of experience with midrange systems, adjusted slightly for the mainframe, I prefer this style of setup:

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/dasda1           388M  119M  250M  33% /
/dev/vg1/home          97M  4.2M   88M   5% /home
/dev/vg1/opt           74M   21M   50M  30% /opt
/dev/vg1/srv          1.2G  1.1G  100M  92% /srv
/dev/vg1/tmp          291M   17M  260M   6% /tmp
/dev/vg1/usr          1.2G  915M  183M  84% /usr
/dev/vg1/var          245M   69M  164M  30% /var

Some day, this is going to be the default proposal for the SLES installer. I'm just not sure when it will get high enough on the priority list to get developer time for a release.

For mainframes, there is little to no advantage to having /boot on a separate partition. The same is true of almost all modern midrange systems, but it tends to persist there from habit/tradition.

I do _not_ put / into an LV. I've had enough problems trying to recover the system when something went wrong to keep punishing myself by doing that again. Note that you _will_ have a problem some day; it is just a matter of time. By having all the other file systems broken out of /, I never have to worry about resizing it. Except for the contents of /root, it just doesn't grow, and I have complete control of what goes in /root. Unless things work out "just so", I usually wind up with a decent amount of unused space in the VG. This is a good thing to keep in reserve so that you can expand one or another of the LVs.

So, what I do is take my first 3390-x volume and put two partitions on it. The first is for /, and I make that about 500MB or so. You can decide how big you want it for your systems. The second is for LVM, as a PV. All other DASD volumes get only one partition, and those are all LVM PVs. Note that I am not talking about application/data storage space here. This is only for the operating system. The non-OS space comes from additional DASD (or SCSI), and that goes into a separate VG from the OS one.

> Why?

See above.

> I remember one redbook said to put /boot on /dasda with a size of 512 MB,
> then put the rest into LVM. Can't find that book anymore. Is that right?

512MB for /boot is way too large for any practical purpose. If you're going to put / into an LV, I would only make /boot around 50-100MB. But, as I said, I wouldn't have / in an LV.

Mark Post
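For concreteness, the /etc/fstab implied by the layout above would look roughly like this (a sketch only, assuming ext3 and the device names from the df output; filesystem type and options are guesses, adjust to taste):

```
/dev/dasda1    /      ext3  defaults  1 1
/dev/vg1/home  /home  ext3  defaults  1 2
/dev/vg1/opt   /opt   ext3  defaults  1 2
/dev/vg1/srv   /srv   ext3  defaults  1 2
/dev/vg1/tmp   /tmp   ext3  defaults  1 2
/dev/vg1/usr   /usr   ext3  defaults  1 2
/dev/vg1/var   /var   ext3  defaults  1 2
```

Note that only the first line points at a plain partition; everything else lives in the vg1 volume group and can be grown with lvextend plus a filesystem resize.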
Re: what is the recommendation when we do partitions during installation
Thanks, Mark. I know you will answer my question, even on your Thanksgiving day! I need more of your education on the file system; as you know, I am a newbie in Linux.

>Based on a number of years of experience with midrange systems, adjusted slightly for the mainframe, I prefer this style of setup:
># df -h
>Filesystem            Size  Used Avail Use% Mounted on
>/dev/dasda1           388M  119M  250M  33% /
>/dev/vg1/home          97M  4.2M   88M   5% /home
>/dev/vg1/opt           74M   21M   50M  30% /opt
>/dev/vg1/srv          1.2G  1.1G  100M  92% /srv
>/dev/vg1/tmp          291M   17M  260M   6% /tmp
>/dev/vg1/usr          1.2G  915M  183M  84% /usr
>/dev/vg1/var          245M   69M  164M  30% /var

I know all the other folders are under the / folder, so this setup means that except for /home /opt /srv /tmp /usr /var, the other Linux folders reside on dasda1, and the size of dasda1 is fixed. Does that mean the rest of the folders will not grow dramatically in the future? And if /root and /boot are the key folders for recovering the system when something goes wrong, can we just put both of them (plus perhaps /etc) on dasda1 and leave / on the LVM?

Sunny :)

From: Mark Post
To: LINUX-390@VM.MARIST.EDU
Date: 11/26/2009 10:39 AM
Subject: Re: what is the recommendation when we do partitions during installation
Sent by: Linux on 390 Port

>>> On 11/26/2009 at 11:58 AM, And Get Involved wrote:
> We use sles10 on z/VM. And also use LVM.
>
> Where should we put /boot and /?
> How large for the physical volume? Should we put / into a physical partition?

Based on a number of years of experience with midrange systems, adjusted slightly for the mainframe, I prefer this style of setup:

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/dasda1           388M  119M  250M  33% /
/dev/vg1/home          97M  4.2M   88M   5% /home
/dev/vg1/opt           74M   21M   50M  30% /opt
/dev/vg1/srv          1.2G  1.1G  100M  92% /srv
/dev/vg1/tmp          291M   17M  260M   6% /tmp
/dev/vg1/usr          1.2G  915M  183M  84% /usr
/dev/vg1/var          245M   69M  164M  30% /var

Some day, this is going to be the default proposal for the SLES installer. I'm just not sure when it will get high enough on the priority list to get developer time for a release.

For mainframes, there is little to no advantage to having /boot on a separate partition. The same is true of almost all modern midrange systems, but it tends to persist there from habit/tradition.

I do _not_ put / into an LV. I've had enough problems trying to recover the system when something went wrong to keep punishing myself by doing that again. Note that you _will_ have a problem some day; it is just a matter of time. By having all the other file systems broken out of /, I never have to worry about resizing it. Except for the contents of /root, it just doesn't grow, and I have complete control of what goes in /root. Unless things work out "just so", I usually wind up with a decent amount of unused space in the VG. This is a good thing to keep in reserve so that you can expand one or another of the LVs.

So, what I do is take my first 3390-x volume and put two partitions on it. The first is for /, and I make that about 500MB or so. You can decide how big you want it for your systems. The second is for LVM, as a PV. All other DASD volumes get only one partition, and those are all LVM PVs. Note that I am not talking about application/data storage space here. This is only for the operating system. The non-OS space comes from additional DASD (or SCSI), and that goes into a separate VG from the OS one.

> Why?

See above.

> I remember one redbook said to put /boot on /dasda with a size of 512 MB,
> then put the rest into LVM. Can't find that book anymore. Is that right?

512MB for /boot is way too large for any practical purpose. If you're going to put / into an LV, I would only make /boot around 50-100MB. But, as I said, I wouldn't have / in an LV.

Mark Post

Scanned by WCB Webgate1 AntiSpam/AntiVirus email gateway.

This message is intended only for the addressee. It may contain privileged or confidential information. Any unauthorized disclosure is strictly prohibited. If you have received this message in error, please notify us immediately so that we may correct our internal records. Please then delete the original email. Thank you. (Sent by Webgate2)
Re: what is the recommendation when we do partitions during installation
I have and I do.. although they were/are virtual servers - not physical. And we should actually be talking about shared/RO root for this many servers instead of hundreds of separate, breakable ones, if we're trying to limit the people factor..

So let's talk about this over some turkey and gravy ;-) Tell war stories and guzzle wine... Happy Thanksgiving all!

Scott

On Thu, Nov 26, 2009 at 11:33 AM, Mark Post wrote:
> >>> On 11/26/2009 at 1:30 PM, Scott Rohling wrote:
> -snip-
> > Anyway - I still say humbug. There's nothing about a non-LVM / that will
> > protect you from people...
>
> Let me know if you still think the same way after supporting ~800 physical
> servers for several years.
>
> Mark Post
Re: what is the recommendation when we do partitions during installation
>>> On 11/26/2009 at 1:30 PM, Scott Rohling wrote:
-snip-
> Anyway - I still say humbug. There's nothing about a non-LVM / that will
> protect you from people...

Let me know if you still think the same way after supporting ~800 physical servers for several years.

Mark Post
Re: what is the recommendation when we do partitions during installation
Do all these people you speak of just affect / ? I don't understand the argument.. Yeah - mistakes get made. The really important data is probably not under the / LVM at all... it's in those other filesystems that I guess people don't affect? ;-)

The only argument I'm really hearing is that recovery is harder.. and well, maybe. I've had clobbered non-LVM / disks before... I brought up a recovery system to fix what I could. That's what I did for an LVM / as well.. and since we have hundreds of these buggers, they are all set up exactly the same way from an OS point of view, so I didn't need the config info to know where things are or which disks might be the issue. (That's why conventions like 100-1FF for Linux OS volumes are nice -- you always know which disks make up an LVM.)

Anyway - I still say humbug. There's nothing about a non-LVM / that will protect you from people...

Scott

On Thu, Nov 26, 2009 at 11:06 AM, Mark Post wrote:
> >>> On 11/26/2009 at 12:53 PM, Scott Rohling wrote:
> > I know we've had this discussion before.. but.. I fail to understand why
> > everyone seems to find LVM reliable for everything BUT /. I'm promised it
> > will certainly fail - it's just a matter of time. Why?? Why does the
> > reliability of LVM suddenly break down when you talk about a particular
> > filesystem? I find it illogical.
>
> It's not illogical at all. People are people and they make mistakes. When
> all your configuration information is locked away in an inaccessible LV, it
> makes recovery very much harder than it would be otherwise. It's not that
> LVM itself is particularly unreliable (although like any software it has
> its bugs), it's the people involved. And I'm not just talking about the
> system administrator. There's also the storage admin, the fabric admin, the
> storage CE, the person who accidentally tweaked the wrong fiber connector
> in the switch, you name it. When you've supported nearly a thousand
> physical servers, these lessons get burned into your memory.
>
> > The few times I've experienced issues with / being an LVM are the very same
> > issues I have with any other filesystem under an LVM .. missing disks,
> > changed uuids, etc.
>
> Exactly. But when you went to fix the problem, was /etc/ available?
> Probably.
>
> > I'm not especially advocating using LVM for / - although I find it has some
> > advantages.
>
> Given the file system layout I use, I see no advantages at all, only
> disadvantages.
>
> > I'm just asking why its reliability is so much in question.
>
> It's not in question, particularly.
>
> > What is there about / that makes LVM 'sure to fail'? I say humbug to
> > that..
>
> See above. It's the people involved. (And sometimes just Murphy/cosmic
> radiation/whatever.)
>
> Mark Post
Re: what is the recommendation when we do partitions during installation
>>> On 11/26/2009 at 12:53 PM, Scott Rohling wrote:
> I know we've had this discussion before.. but.. I fail to understand why
> everyone seems to find LVM reliable for everything BUT /. I'm promised it
> will certainly fail - it's just a matter of time. Why?? Why does the
> reliability of LVM suddenly break down when you talk about a particular
> filesystem? I find it illogical.

It's not illogical at all. People are people and they make mistakes. When all your configuration information is locked away in an inaccessible LV, it makes recovery very much harder than it would be otherwise. It's not that LVM itself is particularly unreliable (although like any software it has its bugs), it's the people involved. And I'm not just talking about the system administrator. There's also the storage admin, the fabric admin, the storage CE, the person who accidentally tweaked the wrong fiber connector in the switch, you name it. When you've supported nearly a thousand physical servers, these lessons get burned into your memory.

> The few times I've experienced issues with / being an LVM are the very same
> issues I have with any other filesystem under an LVM .. missing disks,
> changed uuids, etc.

Exactly. But when you went to fix the problem, was /etc/ available? Probably.

> I'm not especially advocating using LVM for / - although I find it has some
> advantages.

Given the file system layout I use, I see no advantages at all, only disadvantages.

> I'm just asking why its reliability is so much in question.

It's not in question, particularly.

> What is there about / that makes LVM 'sure to fail'? I say humbug to
> that..

See above. It's the people involved. (And sometimes just Murphy/cosmic radiation/whatever.)

Mark Post
Re: what is the recommendation when we do partitions during installation
I know we've had this discussion before.. but.. I fail to understand why everyone seems to find LVM reliable for everything BUT /. I'm promised it will certainly fail - it's just a matter of time. Why?? Why does the reliability of LVM suddenly break down when you talk about a particular filesystem? I find it illogical.

The few times I've experienced issues with / being an LVM are the very same issues I have with any other filesystem under an LVM .. missing disks, changed uuids, etc.

I'm not especially advocating using LVM for / - although I find it has some advantages. I'm just asking why its reliability is so much in question. What is there about / that makes LVM 'sure to fail'? I say humbug to that..

(oh yeah - it's Thanksgiving - time to get the turkey in the oven)

Scott

p.s. I'll say this -- if you do put / under an LVM - have a bootable Linux disk around that you use for recovery, one that doesn't use LVM at all (to avoid VG name conflicts). Or at least be prepared to boot the install kernel.. That's the only difference I see in using / under an LVM .. recovery may not be as simple.. but the concepts for correcting LVM issues are the same.

On Thu, Nov 26, 2009 at 10:38 AM, Mark Post wrote:
> >>> On 11/26/2009 at 11:58 AM, And Get Involved wrote:
> > We use sles10 on z/VM. And also use LVM.
> >
> > Where should we put /boot and /?
> > How large for the physical volume? Should we put / into a physical partition?
>
> Based on a number of years of experience with midrange systems, adjusted
> slightly for the mainframe, I prefer this style of setup:
>
> # df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/dasda1           388M  119M  250M  33% /
> /dev/vg1/home          97M  4.2M   88M   5% /home
> /dev/vg1/opt           74M   21M   50M  30% /opt
> /dev/vg1/srv          1.2G  1.1G  100M  92% /srv
> /dev/vg1/tmp          291M   17M  260M   6% /tmp
> /dev/vg1/usr          1.2G  915M  183M  84% /usr
> /dev/vg1/var          245M   69M  164M  30% /var
>
> Some day, this is going to be the default proposal for the SLES installer.
> I'm just not sure when it will get high enough on the priority list to get
> developer time for a release.
>
> For mainframes, there is little to no advantage to having /boot on a
> separate partition. The same is true of almost all modern midrange systems,
> but it tends to persist there from habit/tradition.
>
> I do _not_ put / into an LV. I've had enough problems trying to recover
> the system when something went wrong to keep punishing myself by doing that
> again. Note that you _will_ have a problem some day; it is just a matter of
> time. By having all the other file systems broken out of /, I never have to
> worry about resizing it. Except for the contents of /root, it just doesn't
> grow, and I have complete control of what goes in /root. Unless things work
> out "just so", I usually wind up with a decent amount of unused space in the
> VG. This is a good thing to keep in reserve so that you can expand one or
> another of the LVs.
>
> So, what I do is take my first 3390-x volume and put two partitions on it.
> The first is for /, and I make that about 500MB or so. You can decide how
> big you want it for your systems. The second is for LVM, as a PV. All other
> DASD volumes get only one partition, and those are all LVM PVs.
> Note that I am not talking about application/data storage space here. This
> is only for the operating system. The non-OS space comes from additional
> DASD (or SCSI), and that goes into a separate VG from the OS one.
>
> > Why?
>
> See above.
>
> > I remember one redbook said to put /boot on /dasda with a size of 512 MB,
> > then put the rest into LVM. Can't find that book anymore. Is that right?
>
> 512MB for /boot is way too large for any practical purpose. If you're
> going to put / into an LV, I would only make /boot around 50-100MB. But, as
> I said, I wouldn't have / in an LV.
>
> Mark Post
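As a sidebar, the rescue path discussed in this thread, from a working system that has linked the broken guest's disks, usually boils down to a short root session like the following (the device address range and the VG/LV names here are invented for illustration; substitute your own):

```
# chccwdev -e 0.0.0100-0.0.01ff      # bring the linked DASD online
# vgscan                             # discover VGs on the newly visible PVs
# vgchange -a y brokenvg             # activate the broken guest's VG
# mount /dev/brokenvg/rootlv /mnt    # repair files under /mnt
```

This is exactly where identical VG names across guests bite: if the rescue system's own root VG is also called brokenvg, activation conflicts, which is why a rescue image that doesn't use LVM at all is handy.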
Re: what is the recommendation when we do partitions during installation
>>> On 11/26/2009 at 11:58 AM, And Get Involved wrote:
> We use sles10 on z/VM. And also use LVM.
>
> Where should we put /boot and /?
> How large for the physical volume? Should we put / into a physical partition?

Based on a number of years of experience with midrange systems, adjusted slightly for the mainframe, I prefer this style of setup:

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/dasda1           388M  119M  250M  33% /
/dev/vg1/home          97M  4.2M   88M   5% /home
/dev/vg1/opt           74M   21M   50M  30% /opt
/dev/vg1/srv          1.2G  1.1G  100M  92% /srv
/dev/vg1/tmp          291M   17M  260M   6% /tmp
/dev/vg1/usr          1.2G  915M  183M  84% /usr
/dev/vg1/var          245M   69M  164M  30% /var

Some day, this is going to be the default proposal for the SLES installer. I'm just not sure when it will get high enough on the priority list to get developer time for a release.

For mainframes, there is little to no advantage to having /boot on a separate partition. The same is true of almost all modern midrange systems, but it tends to persist there from habit/tradition.

I do _not_ put / into an LV. I've had enough problems trying to recover the system when something went wrong to keep punishing myself by doing that again. Note that you _will_ have a problem some day; it is just a matter of time. By having all the other file systems broken out of /, I never have to worry about resizing it. Except for the contents of /root, it just doesn't grow, and I have complete control of what goes in /root. Unless things work out "just so", I usually wind up with a decent amount of unused space in the VG. This is a good thing to keep in reserve so that you can expand one or another of the LVs.

So, what I do is take my first 3390-x volume and put two partitions on it. The first is for /, and I make that about 500MB or so. You can decide how big you want it for your systems. The second is for LVM, as a PV. All other DASD volumes get only one partition, and those are all LVM PVs. Note that I am not talking about application/data storage space here. This is only for the operating system. The non-OS space comes from additional DASD (or SCSI), and that goes into a separate VG from the OS one.

> Why?

See above.

> I remember one redbook said to put /boot on /dasda with a size of 512 MB,
> then put the rest into LVM. Can't find that book anymore. Is that right?

512MB for /boot is way too large for any practical purpose. If you're going to put / into an LV, I would only make /boot around 50-100MB. But, as I said, I wouldn't have / in an LV.

Mark Post
what is the recommendation when we do partitions during installation
We use sles10 on z/VM. And also use LVM.

Where should we put /boot and /? How large for the physical volume? Should we put / into a physical partition? Why?

I remember one redbook said to put /boot on /dasda with a size of 512 MB, then put the rest into LVM. Can't find that book anymore. Is that right?

Thanks!

Sunny
sles11 install question
Hi, when installing SLES via NFS I used to point the parmfile directly at the .iso image of the SLES DVD, without loop-mounting it. But on SLES11 I am getting an error:

Loading file:/mounts/mp_/SLES-11-DVD-s390x-GM-DVD1.iso - failed
*** Could not find the SUSE Linux Enterprise Server 11 Repository. Activating manual setup program.

And then the manual setup starts. I checked the documentation; this possibility is not mentioned there, they use the iso loop-mounted on NFS. Any idea whether this feature was dropped, or am I doing something wrong?

Thank you

=== Marian Gasparovic ===
"The mere thought hadn't even begun to speculate about the merest possibility of crossing my mind."
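In case it helps as a workaround: the documented method is to loop-mount the ISO on the NFS server and export the mount point rather than the ISO file itself, roughly like this (paths and export options are just an example):

```
# mkdir -p /srv/install/sles11
# mount -o loop,ro SLES-11-DVD-s390x-GM-DVD1.iso /srv/install/sles11
# echo '/srv/install/sles11 *(ro,no_root_squash)' >> /etc/exports
# exportfs -ra
```

The parmfile's Install= URL would then point at the exported directory instead of the .iso file.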
Re: sockd and Too many open files (errno = 24)
Hi Jonathan, here is some info about the file limits:

lsox03:/var/log # cat /proc/sys/fs/file-nr
30730 104857
lsox03:/var/log # cat /proc/sys/fs/file-max
104857
lsox03:/var/log # lsof | grep sockd | wc -l
4705

Number of session requests today (from 00:00 until now):

lsox03:/var/log # cat /var/log/sockd.log | grep 'pass(1)' | wc -l
60201

sockd active processes:

lsox03:/var/log # ps -ef | grep sockd | wc -l
167

Cordiali saluti / Best regards
Marco Bosisio

From: "Quay, Jonathan (IHG)"
To: LINUX-390@VM.MARIST.EDU
Date: 26/11/2009 15.36
Subject: Re: sockd and Too many open files (errno = 24)
Sent by: Linux on 390 Port

Is the system out of sockets/open file descriptors?

From: Linux on 390 Port on behalf of Marco Bosisio
Sent: Thu 11/26/2009 8:50 AM
To: LINUX-390@VM.MARIST.EDU
Subject: sockd and Too many open files (errno = 24)

Hello, the sockd daemon (of the dante-1.1.19-1 server, http://www.inet.no/dante/ ) running on Linux SLES9 64-bit SP4 has the following problem:

Nov 26 09:52:48 (1259225568) sockd[2067]: addchild(): Too many open files (errno = 24)
Nov 26 09:52:54 (1259225574) sockd[2067]: can't accept new clients, no free negotiate slots: Too many open files (errno = 24)

This service is heavily used, and usage has increased this month: for example, searching the log for "pass(1)", the key appears 155799 times in a day.

So I changed the ulimits for the sockd user:

lsox03:~ # grep sockd /etc/security/limits.conf
## changed for "Too many open files (errno = 24)"
sockd hard nofile 3
sockd soft nofile 3

and it has been applied by the system:

lsox03:~ # su - sockd -c 'ulimit -a | grep open'
open files (-n) 3

The system usage seems normal:

lsox03:~ # free
             total       used       free     shared    buffers     cached
Mem:       1020040     487352     532688          0      90236     190984
-/+ buffers/cache:     206132     813908
Swap:       600376          0     600376

The problem persists; I have to restart the sockd service (or reboot the system) about once a day. Any suggestion is welcome... Thanks in advance

Cordiali saluti / Best regards
Marco Bosisio

IBM Italia S.p.A.
Sede Legale: Circonvallazione Idroscalo - 20090 Segrate (MI)
Cap. Soc. euro 384.506.359,00
C. F. e Reg. Imprese MI 01442240030 - Partita IVA 10914660153
Società soggetta all'attività di direzione e coordinamento di International Business Machines Corporation
(Salvo che sia diversamente indicato sopra / Unless stated otherwise above)
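A quick, generic way to put numbers like these side by side on any Linux box is to compare the per-process limit with the system-wide handle counters (a sketch, not specific to sockd; run as the user whose limits you care about):

```shell
# per-process open-file limits for the current shell/user
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "open-file limits: soft=$soft hard=$hard"
# system-wide counters from the kernel: allocated handles, free handles, maximum
cat /proc/sys/fs/file-nr
```

If the first field of file-nr is nowhere near the maximum (as in Marco's 30730 of 104857), the system-wide table is fine and the errno 24 is coming from the per-process nofile limit instead.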
Re: sockd and Too many open files (errno = 24)
Is the system out of sockets/open file descriptors? From: Linux on 390 Port on behalf of Marco Bosisio Sent: Thu 11/26/2009 8:50 AM To: LINUX-390@VM.MARIST.EDU Subject: sockd and Too many open files (errno = 24) Hello, the sockd deamon (of dante-1.1.19-1 server http://www.inet.no/dante/ ) running on Linux SLES9 64b SP4 have the following problem : . Nov 26 09:52:48 (1259225568) sockd[2067]: addchild(): Too many open files (errno = 24) Nov 26 09:52:54 (1259225574) sockd[2067]: can't accept new clients, no free negotiate slots:Too many open files (errno = 24) This service is very used and it is increased this month : for example searching in log "pass(1)" this key it appear155799 for a day So I changed ulimits for sockd user in : lsox03:~ # grep sockd /etc/security/limits.conf ## changed for Too many open files (errno = 24)" sockdhardnofile 3 sockdsoftnofile 3 and it has been applied from system : lsox03:~ # su - sockd -c 'ulimit -a | grep open ' open files(-n) 3 The system usage it seems normal : lsox03:~ # free total used free sharedbuffers cached Mem: 1020040 487352 532688 0 90236 190984 -/+ buffers/cache: 206132 813908 Swap: 600376 0 600376 The problem persist I have to restart almost one time for day the sockd service (or reboot sys) Any suggestion is welcome ... Thanks in advance Cordiali saluti / Best regards Marco Bosisio IBM Italia S.p.A. Sede Legale: Circonvallazione Idroscalo - 20090 Segrate (MI) Cap. Soc. euro 384.506.359,00 C. F. e Reg. 
Imprese MI 01442240030 - Partita IVA 10914660153 Società soggetta all?attività di direzione e coordinamento di International Business Machines Corporation (Salvo che sia diversamente indicato sopra / Unless stated otherwise above) -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
sockd and Too many open files (errno = 24)
Hello, the sockd daemon (of the dante-1.1.19-1 server, http://www.inet.no/dante/ ) running on Linux SLES9 64-bit SP4 has the following problem:

Nov 26 09:52:48 (1259225568) sockd[2067]: addchild(): Too many open files (errno = 24)
Nov 26 09:52:54 (1259225574) sockd[2067]: can't accept new clients, no free negotiate slots: Too many open files (errno = 24)

This service is heavily used, and usage has increased this month: for example, searching the log for "pass(1)", the key appears 155799 times in a day.

So I changed the ulimits for the sockd user:

lsox03:~ # grep sockd /etc/security/limits.conf
## changed for "Too many open files (errno = 24)"
sockd hard nofile 3
sockd soft nofile 3

and it has been applied by the system:

lsox03:~ # su - sockd -c 'ulimit -a | grep open'
open files (-n) 3

The system usage seems normal:

lsox03:~ # free
             total       used       free     shared    buffers     cached
Mem:       1020040     487352     532688          0      90236     190984
-/+ buffers/cache:     206132     813908
Swap:       600376          0     600376

The problem persists; I have to restart the sockd service (or reboot the system) about once a day. Any suggestion is welcome... Thanks in advance

Cordiali saluti / Best regards
Marco Bosisio

IBM Italia S.p.A.
Sede Legale: Circonvallazione Idroscalo - 20090 Segrate (MI)
Cap. Soc. euro 384.506.359,00
C. F. e Reg. Imprese MI 01442240030 - Partita IVA 10914660153
Società soggetta all'attività di direzione e coordinamento di International Business Machines Corporation
(Salvo che sia diversamente indicato sopra / Unless stated otherwise above)
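The per-day request count quoted above comes from a plain grep over the sockd log; the same pipeline can be demonstrated on a tiny sample (the file name and log lines below are invented for illustration):

```shell
# build a three-line sample log in the style of sockd's output
cat > /tmp/sockd-sample.log <<'EOF'
Nov 26 09:52:40 (1259225560) sockd[2067]: pass(1): tcp/connect
Nov 26 09:52:41 (1259225561) sockd[2067]: block(1): tcp/connect
Nov 26 09:52:42 (1259225562) sockd[2067]: pass(1): tcp/connect
EOF
# count accepted sessions, as in the thread's "grep 'pass(1)' | wc -l"
grep -c 'pass(1)' /tmp/sockd-sample.log   # prints 2
```

Note that `grep -c` replaces the `grep ... | wc -l` pipeline in one step; the parentheses in the pattern are literal in a basic regular expression, so no escaping is needed.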