Rodney Richison said on Sat, Nov 06, 2004 at 09:19:40PM -0600:
> Are most of you using exim or postfix? Just curious. I've never tried
> exim.
Don't know about most; I use Postfix. I don't think exim is a bad choice,
though; I just liked Postfix better, and it performs well enough to meet my
needs.
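(Trying Postfix on a Debian box that runs exim is low-risk, since both packages provide mail-transport-agent and conflict with each other:

    # installing postfix removes exim automatically
    apt-get install postfix

)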
martin f krafft said on Sat, Nov 06, 2004 at 12:30:06PM +0100:
> also sprach Mark Ferlatte <[EMAIL PROTECTED]> [2004.11.06.0123 +0100]:
> > Do you really want your user's crontabs to run on every host in your
> > cluster?
>
> They are mounted from master:/srv/va
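A client-side mount of that sort might look like this in /etc/fstab; the exported path is taken from the message below (the quote above is cut short), and the mount options are an assumption:

    # mount the server's crontab spool over the local one; options assumed
    master:/var/spool/cron/crontabs  /var/spool/cron/crontabs  nfs  rw,hard,intr  0  0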
martin f krafft said on Fri, Nov 05, 2004 at 02:43:02PM +0100:
> I am trying to set up persistent crontabs in a FAI cluster by using
> NFS to export /var/spool/cron/crontabs to the clients, thus
> effectively storing the crontabs on the server. I further would like
> to use root_squash.
Do you really want your user's crontabs to run on every host in your cluster?
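For reference, the server-side export being described might be written like this in /etc/exports; the client pattern is a hypothetical placeholder, and root_squash is the option named above:

    # export the crontab spool to the cluster nodes, mapping root to nobody
    /var/spool/cron/crontabs  node*.example.com(rw,root_squash,sync)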
martin f krafft said on Sat, Oct 30, 2004 at 01:35:33AM +0200:
> FWIW, there is no cfengine host (yet). I am still somewhat taken
> aback by its complexity. Just reinstalling the machines with FAI
> seems simpler and cleaner.
Yeah, I haven't gotten around to using it in production either. :)
Martin F Krafft said on Fri, Oct 29, 2004 at 07:03:02PM +0200:
> As far as I can tell, there remains one problem: we use SSH
> hostbased authentication between the nodes, and while I finally got
> that to work, every machine gets a new host key on every
> reinstallation, requiring the global database to be updated.
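One common way to refresh a global known-hosts database after reinstalls is ssh-keyscan; a minimal sketch, with hypothetical node names:

    # collect the fresh host keys and rebuild the cluster-wide file
    ssh-keyscan -t rsa,dsa node01 node02 node03 > /etc/ssh/ssh_known_hosts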
martin f krafft said on Fri, Oct 29, 2004 at 10:38:39AM +0200:
> In /etc/resolv.conf, the search parameter can take multiple values.
> However, when using DHCP, this field is populated by 'option
> domain-name', which lists the domain name only, and must not do
> anything else, or headless clients
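With ISC dhclient, one workaround is to override the server-supplied option on the client side; a sketch, using hypothetical domain names:

    # /etc/dhclient.conf: force a multi-domain search list, regardless of
    # what the server sends in "option domain-name"
    supersede domain-name "cluster.example.com example.com";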
Simon Buchanan said on Fri, Oct 15, 2004 at 08:20:15AM +1300:
> Hi There, I am looking to deploy some 1U rack servers based on the Intel
> Entry Server Platform SR1325TP1-E, but using a 3ware Escalade 9500S-4LP
> hardware raid with 3 x SATA 200GB drives (RAID 5) instead of the onboard
> stuff (a
Fred Whipple said on Mon, Feb 02, 2004 at 01:24:24PM -0500:
> I see that Debian 3.0r2 includes a nicely aged (like fine cheese) Linux
> 2.2 kernel. While I'm certain the aging process only makes its flavour
> stronger and more delectable, I'm afraid it's going to choke at the
> thought of 10,00
Daniel Erat said on Thu, Jan 29, 2004 at 08:08:49AM -0800:
> I was the poster who initiated the previous thread on this subject. The
> problem disappeared here after we went down to 2 GB of memory (although
> we physically removed it from the server rather than passing the arg to
> the kernel... s
Benjamin Sherman said on Wed, Jan 28, 2004 at 03:16:56PM -0600:
> >I've got some machines in nearly the same configuration. What I ended up
> >doing was to put an `append="mem=1G"' in the lilo.conf boot stanza for the
> >kernel I was using, and rebooted the machine in question.
> >
> >This does
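Written out in full, such a stanza might look like this; only the append line comes from the quote, while the image path and label are assumptions:

    # /etc/lilo.conf: cap the kernel's view of RAM at 1 GB
    image=/boot/vmlinuz-2.4.18
        label=Linux
        read-only
        append="mem=1G"

Remember to rerun lilo before rebooting, or the change won't take effect.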
Benjamin Sherman said on Tue, Jan 27, 2004 at 03:49:24PM -0600:
> So, I have a couple of questions because this box made it to production
> before the problem was discovered and I can't test as I'd like.
> * If I were to use 64GB HIGHMEM support, would this problem go away?
Nope.
> * Is the I/O
Fred Whipple said on Wed, Jan 14, 2004 at 09:56:35AM -0500:
> 1.) One of the biggest reasons we went with Red Hat many years ago was
> RPM. Of course I know that Debian has a package system, and there're
> constant arguments about which is better, if either. What I wonder,
> though, is how th
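For what it's worth, the everyday commands map across fairly directly; "foo" and the paths below are placeholders:

    rpm -qa         <->  dpkg -l          # list installed packages
    rpm -ql foo     <->  dpkg -L foo      # list a package's files
    rpm -qf /path   <->  dpkg -S /path    # which package owns a file
    rpm -i foo.rpm  <->  dpkg -i foo.deb  # install a package file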
Dave Watkins said on Wed, Nov 26, 2003 at 06:38:39PM +1300:
> Mark Ferlatte wrote:
> >Which lists? I've had a hell of a time with SCSI SCA connected disks; a
> >single bad SCSI disk can wipe out the whole chain, whereas with SATA that
> >seems to be less likely.
Nate Duehr said on Tue, Nov 25, 2003 at 09:13:48AM -0700:
> Agreed on the "as fast a CPU as you can afford" and the 10K RPM disk
> comments. However I'm not a huge fan of SATA yet. There's been quite a
> bit of discussion on various mailing lists of people having trouble with
> them. I'm old-