Re: reject non-English email body messages

On Thu, May 27, 2004 at 01:45:46AM +0200, Arnt Karlsen wrote:
> ..how far do we wanna push it; reject all html, attachments, weird
> wintendo encoding, non-english character sets? ..will allowing ascii,
> iso8859-1 and utf-8 and rejecting everything else be tight enough?

I'd think the following might be worthy of consideration for enforcement:

- Do not send spam.
- Send all of your e-mails in English. Only use other languages on
  mailing lists where that is explicitly allowed (e.g. French on
  debian-user-french).
- Make sure that you are using the proper list. In particular, don't
  send user-related questions to developer-related mailing lists.
- Wrap your lines at 80 characters or less for ordinary discussion.
  Lines longer than 80 characters are acceptable for computer-generated
  output (e.g., "ls -l").
- Do not send automated out-of-office or vacation messages.
- Do not send subscription or unsubscription requests to the list
  address itself; use the respective -request address instead.
- Never send your messages in HTML; use plain text instead.
- Avoid sending large attachments.
- When replying to messages on the mailing list, do not send a carbon
  copy (CC) to the original poster unless they explicitly request to be
  copied.
- If you send messages to lists to which you are not subscribed, always
  note that fact in the body of your message.
- Do not use foul language; besides, some people receive the lists via
  packet radio, where swearing is illegal.
- Try not to flame; it is not polite.

Taken from the mailing list Code of Conduct posted at:
http://www.debian.org/MailingLists/

HTH

j
--
==========================================
+ It's simply not    | John Keimel
+ RFC1149 compliant! | [EMAIL PROTECTED]
+                    | http://www.keimel.com
==========================================
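Enforcing the charset allowlist Arnt suggests (ascii, iso8859-1, utf-8) could be sketched with Python's stdlib email parser. This is only an illustrative sketch: the allowlist and the accept/reject policy are assumptions, not anything the Debian listmasters actually run.

```python
# Sketch: accept a message only if every text part declares a charset
# on an allowlist. Allowlist and policy are illustrative assumptions.
from email import message_from_string

ALLOWED = {"us-ascii", "ascii", "iso-8859-1", "utf-8"}

def acceptable_charset(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    for part in msg.walk():
        if part.get_content_maintype() != "text":
            continue
        # get_content_charset() returns None when no charset is declared;
        # RFC 2045 says that defaults to us-ascii.
        charset = (part.get_content_charset() or "us-ascii").lower()
        if charset not in ALLOWED:
            return False
    return True
```

A message declaring `charset=koi8-r` would be rejected, while a plain ASCII message with no Content-Type header at all passes.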
Re: high performance, highly available web clusters
On Wed, May 19, 2004 at 11:48:31PM -0600, David Wilk wrote:
> ... The cluster is comprised of a load-balancer, several web servers
> connected to a redundant pair of NFS servers and a redundant pair of
> MySQL servers. The current bottleneck is, of course, the NFS servers.
> However, the entire thing needs an increase in capacity by several
> times. ...
>
> The expensive option would be to add a high-performance SAN, which
> would do the trick for all of the servers that required
> high-performance shared storage. This would solve the NFS performance
> problems. However, for a lot less money, one could simply do away with
> the file server entirely. Since this is static content, one could keep
> these files locally on the webservers and push the content out from a
> central server via rsync. I figure a pair of redundant internal
> 'staging servers' could be used for content updates. Once tested, the
> update could be pushed to the production servers with a script using
> rsync and ssh. Each server would, of course, require fast and
> redundant disk subsystems.
>
> I think the lowest-cost option is to increase the number of image
> servers, beef up the NFS servers and MySQL servers, and add to the
> number of web servers in the cluster. This doesn't really solve the
> design problem, though.

Personally, I can't see the sense in replacing a set of NFS servers with
individual disks. While you might save money going with local disks in
the short run, your maintenance costs (more the time cost than the
dollar cost) would increase accordingly. Just dealing with lots of extra
moving parts puts a shiver down my spine.

I'm not sure how your 'static content' fits in with your mentioning
multiple MySQL servers; that seems dynamic to me, or at least allows for
much dynamic content.

If you ARE serving up a lot of static content, I might recommend a setup
similar to a project I worked on for a $FAMOUSAUTHOR, where we put
multiple web servers behind a pair of L4 switches. The pair of switches
(pair for redundancy) load balanced for us, and we ran thttpd on the
servers. There were a few links to offsite content, where content
hosting providers (I cannot remember the first, but they later went with
Akamai) served up the larger files people came to download. Over the
millions of hits we got, it survived quite nicely. We ran out of
bandwidth (50 Mb/s) before the servers even blinked.

If it IS static, you might also consider loading your content into a RAM
disk, which would probably provide the fastest access time. I might
consider such a thing these days with the dirt-cheap pricing of RAM.

I think some kind of common disk (NFS or whatever, on RAID) is your best
solution.

HTH

j
--
==========================================
+ It's simply not    | John Keimel
+ RFC1149 compliant! | [EMAIL PROTECTED]
+                    | http://www.keimel.com
==========================================
--
To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of
unsubscribe. Trouble? Contact [EMAIL PROTECTED]
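The rsync-over-ssh push described in the thread is easy to script. A minimal sketch follows; the hostnames and paths are invented for illustration, and by default it only builds the commands rather than running them:

```python
# Sketch: push a tested staging tree to each production webserver with
# rsync over ssh. Hostnames and paths are made-up examples.
import subprocess

WEBSERVERS = ["web1.example.com", "web2.example.com", "web3.example.com"]
STAGING = "/srv/staging/"   # trailing slash: rsync copies the contents
WEBROOT = "/var/www/"

def push_commands(dry_run=True):
    """Build one rsync command per server; run them when dry_run is False."""
    cmds = []
    for host in WEBSERVERS:
        cmd = ["rsync", "-az", "--delete", "-e", "ssh",
               STAGING, "%s:%s" % (host, WEBROOT)]
        cmds.append(cmd)
        if not dry_run:
            subprocess.check_call(cmd)
    return cmds
```

`--delete` keeps the production trees identical to staging, which is the whole point of the push model: the staging pair is the single source of truth.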
Re: You can start saving now
Look!! A dead horse!!! *whack whack whack*

j
--
==========================================
+ It's simply not    | John Keimel
+ RFC1149 compliant! | [EMAIL PROTECTED]
+                    | http://www.keimel.com
==========================================
Re: Mondo and Debian
On Sun, Feb 22, 2004 at 04:14:29AM -0500, Christopher Davis wrote:
> I've been switching from Red Hat to Debian over the last 6 months and
> have become very partial to Mondo Rescue (mondorescue.org) for
> backups. This and Debian do not seem to like each other too much. What
> types of software do you use to run backups on Debian servers to
> create bootable ISO images? Or... even better, anyone know how to
> tweak Debian and Mondo?

I've used Amanda, both on disk and tape systems. Works fine for me.

j
--
==========================================
+ It's simply not    | John Keimel
+ RFC1149 compliant! | [EMAIL PROTECTED]
+                    | http://www.keimel.com
==========================================
Re: qmail or postfix? (was: RE: What is the best mailling list manager for qmail and Domain Tech. Control ?)
On Thu, Feb 19, 2004 at 11:22:54PM +0100, Adrian 'Dagurashibanipal' von Bidder wrote:
> I take this to mean that there are no binaries to download from
> postfix.org itself - all binaries are made by integrators/vendors.
> This does not mean that making binaries is not allowed.

Binaries are, indeed, released through vendors. See
http://www.postfix.org/packages.html for a listing of various links to
packages of Postfix. The postfix.org website doesn't have the packages,
but links to them all.

According to the mirrors, things are done according to the IBM Public
License:
http://getmyip.com/mirror/pub/LICENSE

Read the IBM Public License and take it from there. Hope this might help
clear up any licensing/packaging issues with Postfix. Sorry, I cannot
comment as to the status of qmail, since I have chosen to use Postfix
instead.

j
--
==========================================
+ It's simply not    | John Keimel
+ RFC1149 compliant! | [EMAIL PROTECTED]
+                    | http://www.keimel.com
==========================================
Re: What is the best mailling list manager for qmail and Domain Tech. Control ?
On Mon, Feb 16, 2004 at 07:17:57AM +0100, Thomas GOIRAND wrote:
> I wish to add mailing list management to my software for all virtual
> domains. DTC uses qmail, so it has to be compatible with it. DTC will
> generate all config files for the given mailing list manager.

Ecartis (formerly known as Listar) works pretty well for me, but its
documentation is _still_ woefully inferior. There is a mailing list
archive one can search, though, which tends to make up for some of the
documentation's shortcomings. It's available in debian/stable.

I use it with Postfix, but I'm pretty sure it works with qmail. I'd be
interested to see your software offer people different choices of MTA -
that might be a nice option. Perhaps someone from the Postfix world will
pick up on it and create such a thing.

HTH

j
Re: Remote server management
I've used BayTech hardware for several years now with good success: IP
accessible (serial) console and remote power control. They've saved many
a trip to a remote location. There's a series of different units with
different abilities; baytechdcd.com is their website.

Again, I'm not sure about BIOS control, but that'd be a concern of the
motherboard itself as well as of any remote access method.

Hope this helps.

j

On Fri, Feb 06, 2004 at 05:26:09PM -0600, Micah Anderson wrote:
> Since we often have limited physical access to our machines, and our
> collective members are spread around the country, our holy grail is
> remote hardware administration. This could mean a lot of things.
> Mostly, we just need to:
>
> 1. power cycle computers remotely
> 2. access the BIOS and boot menu remotely
>
> This allows us to reboot if the machine crashes, boot from a different
> drive if the boot drive is toast, and allows people to pretty much
> install a complex system remotely (especially if we leave a rescue CD
> in the drive). Ever tried installing an LVM or software RAID or
> firewall remotely? It can be dicey!
>
> Access over IP is acceptable. In other words, we do not need a
> solution which is completely 'out of band' like a modem or radio link.
> Below are some notes on the research we have done. Any stories,
> experiences, or advice with this kind of stuff would be greatly
> appreciated.
>
> * Motherboards *
>
> Many motherboards support serial console (or 'console redirection').
> This allows you to use the 'serial console buddy system' or a terminal
> server to access the machine's main console and BIOS. With Linux, you
> can access the console after the boot process has started, but that
> doesn't get you very far, so hardware support in the motherboard is
> also needed. In the past, we have had frustration with the quirks of
> serial console support (like it killing the real console). Boards
> which typically have serial console (serial redirection) support:
>
> Tyan http://tyan.com
> Supermicro http://supermicro.com
> Others ...
>
> * KVM over IP *
>
> These boxes convert the keyboard, video, and mouse to digital and
> route them over an IP network. Wild stuff. Traditionally very
> expensive, but newer products are making this affordable. American
> Megatrends has a new one supposedly available Q1 2004 which is super
> tiny, can support unlimited machines (when connected to a KVM), with
> an anticipated list price of $600: http://www.ami.com/kvm/. I think
> some let you ctrl-alt-del over them and some do not(?).
>
> * Serial Console Buddy System *
>
> The idea is to have machines in pairs or more, connected to a
> partner's serial port. If one goes down, connect to it from the one
> which is (hopefully) still alive. You can use two serial cables for
> this, or one if you are tricky. It is sometimes difficult to find null
> modem cables with the correct pinout for serial consoles to work.
>
> * PCI Cards *
>
> Cards which add remote support to a motherboard without it:
>
> PC Weasel: pumps video and keyboard through a serial port. Needs an
> async terminal server, a buddy, or a modem(?) to be truly remote.
> Includes remote reboot too. $250 for ISA, $350 for PCI.
>
> MegaRAC G2 Lite (American Megatrends): serial over LAN, power control,
> remote BIOS. OS independent, no drivers. BIOS independent. Client: web
> based UI (SSL), platform independent. Mostly intended for monitoring
> hardware through I2C or IPMI. Unsure how robust the serial over LAN
> is. $300, not available yet, but soon.
>
> * Terminal Servers / Serial Concentrators *
>
> Not sure if there is a difference (or a similarity!). A hub for serial
> lines, so if you had a bunch of machines with serial consoles they
> could all be controlled in one place. Pricey! Some can route through
> IP(?), or to another machine, or a modem.
>
> * Real Servers *
>
> Real servers, unlike the commodity stuff we use, have had serial
> console support since the beginning of time: Alphas, NetServers, etc.
> People on lists sometimes say they buy this stuff without a video card
> at all and just use the serial console (through a terminal server). In
> addition to serial console, you can buy used on eBay for under $40
> stuff like the HP P1218A Netserver Remote Control Interface, which
> lets you reboot the system, flash the BIOS, and reconfigure hardware
> remotely.
>
> * Remote Reboot *
>
> Typically it has been pretty expensive to have a power strip which can
> be controlled remotely. Here are some affordable options:
>
> http://www.webreboot.net/ sells a little box for $250 that can connect
> to 8 machines through the reset connector on the motherboard; reboot
> from a web browser.
>
> http://www.wti.com/power.htm sells power strips which can be rebooted
> from a web browser ($600 for 5 plugs) or a control unit + satellite
> units setup ($350 for the control unit + $200 per satellite).
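The web-controllable power strips above are typically driven by nothing more than an authenticated HTTP request. The sketch below only composes such a request URL; the hostname, path, and parameter names are invented for illustration and do not correspond to any vendor's real interface:

```python
# Sketch: compose the control URL for a hypothetical web-managed power
# strip. Host, path, and query parameters are illustrative only --
# real products (WTI, BayTech, etc.) each have their own interface.
from urllib.parse import urlencode

def power_cycle_url(strip_host, outlet, action="cycle"):
    """Build the (hypothetical) control URL for one outlet."""
    if action not in ("on", "off", "cycle"):
        raise ValueError("unknown action: %s" % action)
    query = urlencode({"outlet": outlet, "action": action})
    return "https://%s/control?%s" % (strip_host, query)
```

In a monitoring setup, a script could hit such a URL automatically when a host stops answering pings, which is the main attraction of power strips over PCI cards: they work no matter how wedged the target machine is.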
Re: Hardware for massive DVD writing
On Thu, Feb 05, 2004 at 11:01:22PM +0100, Joaquin Ferrero wrote:
> Hi. A customer will need to burn 50 GB daily to DVDs (satellite
> imagery products). All discs have different contents. We need a
> jukebox with space to store blank discs and burned discs... many
> discs... for automatic writing. I looked at
> http://www.daxarchiving.com/ but I need more options...

More options:

- Tapes.
- Multiple DVD drives in one machine.
- Multiple SCSI DVD drives in one machine.
- Write to a FireWire IDE drive (this is truly becoming an easy
  portable solution - FireWire or USB 2.0 drives are as cheap as some
  tape media!).

Not knowing why daxarchiving.com is not suitable kind of leaves us at a
disadvantage as to what requirements you have that are not met by that
solution.

HTH

j
--
===
Build me an army worthy of... waterville?
http://www.kingsofchaos.com/recruit.php?uniqid=4phk9i48
===
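The 50 GB/day figure translates into concrete numbers worth checking before buying hardware. A back-of-the-envelope sketch, assuming nominal 4.7 GB single-layer discs and roughly 15 minutes per burn (both figures are assumptions, not from the thread):

```python
# Back-of-the-envelope: how many DVDs and how much burner time does
# 50 GB/day take? 4.7 GB/disc and 15 min/burn are assumed figures.
import math

def dvd_plan(daily_gb, disc_gb=4.7, burn_minutes=15.0):
    """Return (discs per day, total burn minutes per day)."""
    discs = math.ceil(daily_gb / disc_gb)
    return discs, discs * burn_minutes
```

At these assumptions, 50 GB comes to 11 discs and under three hours of burning per day, so a single fast drive has the raw capacity; the real problem is the unattended disc handling, which is why the jukebox (or multiple drives) matters.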
Re: IMail GUI equivalent for Linux?
On Tue, Dec 02, 2003 at 02:57:11PM +0100, Steen Suder, privat wrote:
> In the process of setting up a little web mail-hotel, I've stumbled
> over a user suggestion that calls for a web interface for handling
> email addresses, forwards, spam filtering and so on that is similar to
> the UI of the IMail Server
> (http://www.ipswitch.com/Products/IMail_Server/). The MTA will
> probably be either Exim or Postfix, as I've no interest in qmail.
> Local delivery is handled by whatever is necessary and reliable
> (Courier, Cyrus, whatever). Users' email access is POP3 and,
> secondarily, IMAP (for a few VIP customers).

I was recently looking at http://webcp.can-host.com/ for some
inspiration. It looks like there isn't a complete open-source web/email
control panel yet.

Oddly enough, I'm working on the same thing - actually a second
generation of the same thing. We've got one in place at $COMPANY1 right
now, but the interface sucks and the billing isn't integrated with the
provisioning. So version 2.0 will do all that together AND make coffee
for us! :) Version 2.0 will be put online at $COMPANY2 and possibly be
packaged up and released open source. We have not yet decided.

If you come across anything else that looks useful to you, please feel
free to let the list know, so we can also check it out. If we release
our version publicly, I'll be sure to let people know via this list (as
it'll be packaged in .deb format only - at least by us - someone else
can make RPMs :)

HTH

j
--
==========================================
+ It's simply not    | John Keimel
+ RFC1149 compliant! | [EMAIL PROTECTED]
+                    | http://www.keimel.com
==========================================
Re: command logging
On Wed, Oct 29, 2003 at 05:49:49PM +0200, ? ? wrote:
> On 2003-10-29 at 07:11, John Keimel wrote:
> What if the user compiles zsh (or something similar) and uses it? Or
> finds a way that doesn't use bash to execute his commands? I've
> thought of doing something like this in the ssh server, but ended up
> implementing it in the ssh client, because of the requirements...

Yes, they could, but some of the things I'm looking for are tarballs of
other shells. The vast majority of the users are non-sophisticates when
it comes to the shell, and it's not common knowledge that I log every
command. There's a warning on login that we reserve the right to log...
to cover ourselves (i.e. it covers the 'notify person of monitoring'
requirement legally).

It's not a foolproof system, but it's better than nothing. We also had a
pcsh version as well.

j
--
==========================================
+ It's simply not    | John Keimel
+ RFC1149 compliant! | [EMAIL PROTECTED]
+                    | http://www.keimel.com
==========================================
Re: command logging
On Tue, Oct 28, 2003 at 10:56:53PM -0500, Dan MacNeil wrote:
> For a box that will have limited shell access, I'm looking for
> something that will log all commands. The sudo log is nice, but not
> everything is run through sudo. There won't be many privacy issues, as
> most users won't have shell. The goal is to review a daily report for
> anything unexpected, stuff like:
>
>     tar -xzf rootkit.tar.gz

For several servers I maintain, we took the bash code and hacked it to
log all commands, with usernames, to a log file. Yes, it's nosy. It's
actually called 'nosy bash' by us. It's not been sent to the bash
maintainers yet, but I could see if my coder can make a diff of it.

It's come in quite handy at times. Quite handy.

  "I didn't do that!"
  "Well, yes, you did. At 1:43:00 you typed 'rm -rf /'."
  "No I didn't."
  "Yes, see, it's in the logs."
  "Oh.. ummm..."
  *disable account* "Buh bye."

I regularly grep the log for keywords, or sometimes tail it if I'm
suspicious of someone. But for the most part, I don't ogle it
constantly. Who has time for that?

I'm also running the grsec patches as well. Grsec didn't do the logging
like I wanted, so I'm keeping the nosy bash.

j
--
==========================================
+ It's simply not    | John Keimel
+ RFC1149 compliant! | [EMAIL PROTECTED]
+                    | http://www.keimel.com
==========================================
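The keyword grep described above is easy to automate into the daily report Dan asked for. A minimal sketch follows; since 'nosy bash' was never published, the "user: command" log format and the keyword list here are assumptions for illustration:

```python
# Sketch: flag suspicious lines in a per-command shell log. The
# "user: command" log format and the keyword list are assumptions --
# the 'nosy bash' patch described in the thread was never released.
import re

SUSPICIOUS = [r"rootkit", r"rm\s+-rf\s+/", r"wget\s+", r"\.tar\.gz"]
PATTERN = re.compile("|".join(SUSPICIOUS))

def flag_commands(log_lines):
    """Return (user, command) pairs whose command matches a pattern."""
    hits = []
    for line in log_lines:
        user, _, command = line.partition(": ")
        if PATTERN.search(command):
            hits.append((user, command))
    return hits
```

Run from cron over yesterday's log and mailed to the admin, this gives exactly the "daily report for anything unexpected" the original post wanted.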
Re: Cat 3 cabling
On Fri, Oct 24, 2003 at 03:27:32AM +0800, Jason Lim wrote:
> Any way to turn Cat 5 into Cat 3, and vice versa?

5 into 3? Easy. Treat it like Cat 3. ;)

- Bend it under a 1" radius.
- Pull it with more than 25 lb of force (25? Not sure).
- Run it more than 100 meters.
- Leave it in your trunk while it's 90 degrees outside.

In other words, exceed the Cat 5 spec and you have something equivalent
to Cat 3 left. (Yes, there is some leeway; YMMV.)

Turn Cat 3 into Cat 5? No, you can't. It's all in the twist. ;) You
can't unsheathe it, retwist it, and resheathe it. Nope.

Be nice to your Cat 5 ;)

j
--
==========================================
+ It's simply not    | John Keimel
+ RFC1149 compliant! | [EMAIL PROTECTED]
+                    | http://www.keimel.com
==========================================
Re: Apache clustering w/ load balancing and failover
On Wed, Sep 17, 2003 at 02:00:35PM +0100, Shri Shrikumar wrote:
> Looking at the documentation for LVS, it mentions that it needs two
> nodes, a primary node and a backup node, which then feed into n real
> servers.

We're using a single LVS server to balance things out to 4 webservers, 2
POP mail and 2 SMTP mail servers. Actually, it's 3 webservers right now,
as a hardware failure required us to steal a webserver for 'other uses'
;) All of the servers behind the LVS are netbooting from an NFS machine.

This sucks because we have a single point of failure (the LVS box), but
the intent is to get a second eLViS (hehe) running with heartbeat
between the two. It's on the network map ;)

So you can run it with a single LVS, though I wouldn't prefer to. Since
it's simply redirecting stuff, it doesn't need to be that powerful.

j
--
==========================================
+ It's simply not    | John Keimel
+ RFC1149 compliant! | [EMAIL PROTECTED]
+                    | http://www.keimel.com
==========================================
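What the LVS director does for each incoming connection is essentially a scheduling decision over the pool of real servers. A toy sketch of its simplest scheduler, plain round-robin, follows; real LVS does this in-kernel (configured with ipvsadm), and the hostnames here are invented:

```python
# Toy sketch of round-robin scheduling as an LVS director might do it.
# Real LVS runs in-kernel (configured via ipvsadm); hostnames invented.
import itertools

class RoundRobinDirector:
    def __init__(self, real_servers):
        self._cycle = itertools.cycle(real_servers)

    def pick(self):
        """Choose the real server for the next incoming connection."""
        return next(self._cycle)
```

This also shows why the director box needs so little power: per connection it does a trivial table lookup and packet rewrite, not any of the actual HTTP work.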
Re: Visitor based networking
On Mon, Jun 09, 2003 at 03:36:22PM -0500, Alex (LEX) Borges wrote:
> I know this is doable by hand, but I'm wondering if anyone knows of a
> cool set of scripts or something for visitor-based networking
> (something like dhcp+cbq+iptables to control who's accessing what, and
> to allow access to a network where you should, on a time basis...
> etc. Think hotels with Ethernet access or airports with wifi).

I've used the following two solutions personally:

- Nomadix gateway. Great product; if the customer has money, use it.
- NoCatAuth - perhaps something that you're looking for. It's open, so
  you can change it how you want. Dynamic resets of either iptables or
  ipchains (whichever you have, it'll set up for) on authentication by
  users.

HTH

j
--
==========================================
+ It's simply not    | John Keimel
+ RFC1149 compliant! | [EMAIL PROTECTED]
+                    | http://www.keimel.com
==========================================
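The dynamic firewall resets a captive portal like NoCatAuth performs boil down to inserting and deleting per-client rules when a visitor authenticates or logs out. A sketch that only composes the iptables command strings; the chain and rule shape are illustrative assumptions, not NoCatAuth's actual rule set:

```python
# Sketch: build the iptables commands a captive portal might run when a
# visitor authenticates (insert rules) or logs out (delete them). Chain
# name and rule shape are assumptions, not NoCatAuth's real rules.
def auth_rules(client_ip, authenticated=True):
    action = "-I" if authenticated else "-D"
    return [
        "iptables %s FORWARD -s %s -j ACCEPT" % (action, client_ip),
        "iptables %s FORWARD -d %s -j ACCEPT" % (action, client_ip),
    ]
```

A time-based policy of the kind the question asks about is then just a cron job or timer that calls the logout path when a visitor's allotment expires.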
Re: Slow list.. why ?
On Fri, May 02, 2003 at 02:34:17AM +0200, Maarten van der Hoef wrote: Every day I see multiple replies with the same suggestions just because the repliers weren't able to see the latest reply. As hardware costs about null these days, I wouldn't know any other bottleneck (bandwidth, nehh ). So what's the problem with this list ? Don't get me wrong, I'm very happy with this list, just curious about it's big latency. Your comment seems like it's wondering about the list server software and the machine on which it resides. Sure, that could be a factor. I don't know the specs on that, nor on the volume of this list. However, you have to consider that everyone on the list has another mailserver they get to deal with. So, if the listserver sends its mail to me and my DSL connection is down, it bounces. Try again in four hours. It's up? OK, it delivers. THat's 4 hours. I've noticed, running just shy of a dozen lower volume lists, that often some of the larger providers will just stop accepting mail. Nope, I'm not going to take that mail. Try later. So, my list server queues it up and tries again four hours later. I've had instances where providers refuse mail for DAYS, though it's more often just hours. And then you have time zones. Funny how the people in Australia always seem to be so chipper when I'm so sleepy! No, sorry, you'll have to wait for my reply until I'm awake. And I'm not sure or not, but in most list servers, you can set your self up for 'digest mode' because you hate the inane babble repeatedly during the day, so you subject yourself to it only in one big massive dose so it feels less painful. To wonder about the list and people replying late is less a question of the server that's sending the mail out, as it's only ONE factor of many. I'd be confident that the list server is beefy enough for what it's being asked to do, though I could be wrong. Take yer pick as to why people 'reply late' to questions, but there's a lot of different answers as to why. 
(Wondering how long ago you wrote your post and how much time has elapsed until my reply. Perhaps I should have waited a couple of days, just for effect ;)

j
-- 
==========================================
+ It's simply not    | John Keimel
+ RFC1149 compliant! | [EMAIL PROTECTED]
+                    | http://www.keimel.com
==========================================
Re: easy lilo question
On Sun, Mar 16, 2003 at 02:31:23AM +0100, Marco Kammerer wrote:

> I hope it's an easy lilo question for you, but I am about to kick that
> server :( I have a 3ware ATA RAID 7000-2b controller together with a
> Promise FastTrak controller and 4 normal IDE drives in a box. I run
> woody with 2.4.18 bf24 (now). I want to boot from my 3ware mirror on
> sda.
>
> --snip--
> Warning: BIOS drive 0x82 may not be accessible
> Warning: BIOS drive 0x82 may not be accessible
> Warning: BIOS drive 0x82 may not be accessible
> Warning: BIOS drive 0x82 may not be accessible
> Added Linux
> Warning: BIOS drive 0x82 may not be accessible
> Added LinuxOLD *
> --snip--
>
> How can I get lilo to work again?

I had a similar problem trying to boot from SCSI with IDE drives present. My motherboard did not like to boot from SCSI even though I told the SCSI controllers that they were supposed to handle booting (I tried an Adaptec AND some BusLogic, but the motherboard always took over). I ended up adding the following to my /etc/lilo.conf:

-- begin part of lilo.conf --
# Overrides the default mapping between harddisk names and the BIOS'
# harddisk order. Use with caution.
disk=/dev/hda
    bios=0x81
disk=/dev/hdb
    bios=0x82
disk=/dev/sda
    bios=0x80

# Specifies the boot device. This is where Lilo installs its boot
# block. It can be either a partition, or the raw device, in which
# case it installs in the MBR, and will overwrite the current MBR.
#
boot=/dev/sda

# Specifies the device that should be mounted as root. (`/')
#
root=/dev/sda3
-- end selected portion of lilo.conf --

I looked for some time for help on this, and finally reread the man page along with some info I found online (I can't remember where) and came up with this solution. It was the only way I could get the system to boot from the SCSI drive, I believe because the motherboard was being grumpy.
I know that if I add new IDE drives (which is likely to happen, since I've just filled the 40G RAID1 I have), I will have to edit my lilo.conf to recognize the drives differently. I cannot add IDE drives without editing lilo.conf - well, at least not and still have it boot, I think. The standard 'it works for me but lilo can screw up everything on you' warning applies right here - don't blame me.

I'd suggest googling for 'bios=0x80'; you're likely to find a bunch of helpful information.

Hope this helps you.

j
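If extra IDE drives do get added later, the mapping above presumably needs extending in the same pattern. A hypothetical sketch (the device names and BIOS numbers here are guesses for illustration; the actual numbers depend on the motherboard's boot order, so verify before use):

```
# Hypothetical /etc/lilo.conf fragment if a third IDE disk (hdc) were
# added while still booting from the SCSI/3ware disk. The exact BIOS
# drive numbers depend on the motherboard -- check before trusting them.
disk=/dev/hda
    bios=0x81
disk=/dev/hdb
    bios=0x82
disk=/dev/hdc
    bios=0x83
disk=/dev/sda
    bios=0x80
```

Running lilo with -t (test mode, possibly with -v for verbose output) should let you check the new map without actually rewriting the boot sector.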