Re: 32-bit vs AMD64 on Opteron for LAMP server

2007-07-07 Thread Roberto C. Sánchez
On Sat, Jul 07, 2007 at 04:40:00PM -0400, Jim Crilly wrote:
 
 For 32-bit systems, only if the kernel is compiled with CONFIG_HIGHMEM64G
 enabled, so you need one of the bigmem kernels.  And the BIOS on the
 machine has to support remapping the lost memory above the 4G mark; if it
 won't do that for you, there's nothing you can do to get access to that
 memory.
 
The stock Debian kernels are configured like this:

CONFIG_HIGHMEM4G=y
# CONFIG_HIGHMEM64G is not set
CONFIG_HIGHMEM=y

So, if you have a machine with 4-64 GB RAM, then a custom kernel is in
order.  Of course, as far as the BIOS goes, if the machine supports more
than 4 GB RAM, then the BIOS should as well.  After all, why would
someone manufacture a machine that can handle more than 4 GB RAM and
then put in a BIOS that cannot?
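A quick way to see how your own kernel was built is to grep its config; a minimal sketch (the /boot path is the Debian convention, and /proc/config.gz only exists when CONFIG_IKCONFIG_PROC is enabled):

```shell
# Inspect the highmem settings of the running kernel's build config.
for cfg in "/boot/config-$(uname -r)" /proc/config.gz; do
    if [ -r "$cfg" ]; then
        zgrep HIGHMEM "$cfg"   # zgrep also handles plain text files
    fi
done
```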

Regards,

-Roberto
-- 
Roberto C. Sánchez
http://people.connexer.com/~roberto
http://www.connexer.com


signature.asc
Description: Digital signature


Re: 32-bit vs AMD64 on Opteron for LAMP server

2007-07-07 Thread Roberto C. Sánchez
On Sat, Jul 07, 2007 at 05:39:54PM -0400, Jim Crilly wrote:
 On 07/07/07 04:45:57PM -0400, Roberto C. Sánchez wrote:
   
  The stock Debian kernels are configured like this:
  
  CONFIG_HIGHMEM4G=y
  # CONFIG_HIGHMEM64G is not set
  CONFIG_HIGHMEM=y
  
  So, if you have a machine with 4-64 GB RAM, then a custom kernel is in
  order.  Of course, as far as the BIOS goes, if the machine supports more
  than 4 GB RAM, then the BIOS should as well.  After all, why would
  someone manufacture a machine that can handle more than 4 GB RAM and
  then put in a BIOS that cannot?
  
 
 No, even with just 4G you need CONFIG_HIGHMEM64G, because to access the
 memory from ~3.5G-4G you need to remap it above the 4G mark; those
 addresses were stolen by the various hardware components in your system,
 so you need a kernel able to address more than 4G.
 
Please read again my first sentence.  You and I are in agreement on
this, just saying it in different ways.

Regards,

-Roberto

-- 
Roberto C. Sánchez
http://people.connexer.com/~roberto
http://www.connexer.com




Re: 32-bit vs AMD64 on Opteron for LAMP server

2007-07-06 Thread Roberto C. Sánchez
On Fri, Jul 06, 2007 at 07:29:25AM -0700, [EMAIL PROTECTED] wrote:
 Quoting Neil Gunton [EMAIL PROTECTED]:
 
 
 Does anybody else have any opinions on whether software RAID would be
 faster than using the built-in Adaptec SmartRaid 2015S card?
 
 
 Etch has full SW RAID support right out of the install.  No messing
 with kernels.  You can even install RAID on / during the install if you
 wanted.  I didn't want to get into a hardware vs software RAID
 conversation, but figured if you had a few days of down time, it would
 be nice to run some tests beforehand.
 
The determining factor for whether to use hardware RAID or software
RAID is whether your RAID card actually does hardware RAID.  Do a
Google search on "fake RAID" to get an idea of what I mean.  Many
hardware RAID cards, especially the cheaper varieties, are nothing
more than proprietary software RAID implemented in the card BIOS.  This
is something that you DO NOT want.  Use something that has REAL hardware
RAID (Intel MegaRAID, HP SmartArray, 3ware, etc.), or just use the Linux
built-in software RAID.
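One rough way to check what your controller really is, is to ask the PCI bus; a sketch (the output will obviously vary by machine):

```shell
# Real hardware RAID cards generally identify themselves as RAID or
# SCSI controllers; fake RAID chips usually show up as plain IDE/SATA.
lspci 2>/dev/null | grep -iE 'raid|scsi' \
    || echo "no RAID/SCSI controller listed"
```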

Regards,

-Roberto

-- 
Roberto C. Sánchez
http://people.connexer.com/~roberto
http://www.connexer.com




Re: 32-bit vs AMD64 on Opteron for LAMP server

2007-07-06 Thread Roberto C. Sánchez
On Fri, Jul 06, 2007 at 04:07:23PM +0100, Adam Stiles wrote:
 
 You won't be able to use all of your 4GB RAM with a 32-bit kernel.  A 32-bit 
 processor only has 4GB of addressing space, and that has to be shared between 
 memory and peripherals.
 
Not true.  With PAE, a 36-bit physical address space is available,
allowing access to up to 64 GB of RAM.  What does not change, however,
is that with a 32-bit kernel no single process can address more than
4 GB of RAM.
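You can verify that a given CPU advertises PAE by looking at its flags; a minimal sketch:

```shell
# The 'pae' flag in /proc/cpuinfo means the CPU supports 36-bit
# physical addressing (Physical Address Extension).
if grep -qw pae /proc/cpuinfo; then
    echo "CPU supports PAE"
else
    echo "CPU does not support PAE"
fi
```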

 You'd also do better with Apache 2.0 or 2.2, as long as you use the prefork 
 version  (which is more compatible with PHP, if that's your chosen P).  The 
 breaking-up of the configuration files is a bit of a pain to deal with, but 
 worth it in the long run  (I knocked up a Perl script to break up a 1.3-style 
 configuration file into 2.0-style snippets; e-mail me if you are interested, 
 on-list if you think others would be interested).  Otherwise it's just like 
 1.3, only faster.
 
This is true.  Any pain spent transitioning from Apache 1.3 to Apache
2.0 or 2.2 is well worth it.

 If your RAID card is one of the ones that uses a binary-only driver, ignore it 
 and use the open source drivers with md RAID instead -- md is faster than any 
 manufacturer's proprietary alternative  (anything that needs a driver is fake 
 RAID.  True hardware RAID never needs a special driver; the array just shows 
 up as a single drive).  Besides the non-polluted kernel, you also get the 
 advantage that you can have your swap area without redundancy, hence running 
 as fast as possible  (just set up the partitions as separate swap areas).
 
Clearly you don't know what you are talking about here.  If you look,
there are drivers in the kernel for megaraid (Intel), cciss (HP Smart
Array) and 3ware.  All of these are certainly hardware RAID.  In fact,
all of those drivers are in the mainline kernel and are free software,
developed primarily by the hardware vendors.  In any case, your statement
is equivalent to saying "a true video card never needs a special driver;
the card just shows up as a video device."  *Every* device on the system
needs a driver of some sort or another.

Regards,

-Roberto
-- 
Roberto C. Sánchez
http://people.connexer.com/~roberto
http://www.connexer.com




Re: how to cleanly remove the chroot environment?

2007-06-21 Thread Roberto C. Sánchez
On Wed, Jun 20, 2007 at 07:23:57PM -, avishai wrote:
 
 /home on /var/chroot/sid-ia32/home type none (rw,bind)
 /tmp on /var/chroot/sid-ia32/tmp type none (rw,bind)
 /dev on /var/chroot/sid-ia32/dev type none (rw,bind)
 /proc on /var/chroot/sid-ia32/proc type none (rw,bind)

These are all bind mounts.  You can umount them by running this as root:

for i in home tmp dev proc ; do umount /var/chroot/sid-ia32/$i ; done

If you get any "device busy" (or similar) errors, that means that
something is still accessing one of those partitions.  You need to stop
or kill that process and then try again.
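If you need to track down the offending process, fuser (from the psmisc package) will list whatever still has files open on the mount; a sketch:

```shell
# -m: all processes using the mounted filesystem; -v: verbose output.
# Run as root to see processes belonging to other users.
fuser -vm /var/chroot/sid-ia32/home
```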

Regards,

-Roberto
-- 
Roberto C. Sánchez
http://people.connexer.com/~roberto
http://www.connexer.com




Re: [off-topic] iptables and virtual network interfaces.

2007-06-11 Thread Roberto C. Sánchez
On Mon, Jun 11, 2007 at 02:27:19AM -0400, Bharath Ramesh wrote:
 I have a server where I am using virtual network interfaces, or aliased
 interfaces in the eth0:x format, to listen on multiple IPs on the same
 NIC.  I will be running different sites with different services based
 on the IP address.  I want different iptables policies for these
 IPs/interfaces.  It seems like iptables doesn't understand virtual
 network interfaces at all.  I tried to search for a solution but
 couldn't find any.  I was wondering if any of you had experience with
 setting up iptables with virtual network interfaces.
 
I'm not sure why you marked this OT, since it is certainly on-topic,
assuming you are running it on Debian :-)

Just use shorewall.
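For what it's worth, the reason iptables seems not to understand eth0:x is that netfilter matches on IP addresses, not alias names.  If you wanted to do it by hand rather than via shorewall, a minimal sketch (the addresses are hypothetical):

```shell
# Per-IP policy: allow only HTTP to the first aliased address and
# only SSH to the second; drop everything else sent to either.
iptables -A INPUT -d 192.0.2.10 -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -d 192.0.2.11 -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -d 192.0.2.10 -j DROP
iptables -A INPUT -d 192.0.2.11 -j DROP
```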

Regards,

-Roberto

-- 
Roberto C. Sánchez
http://people.connexer.com/~roberto
http://www.connexer.com




Re: Shouldn't this work?

2007-05-07 Thread Roberto C. Sánchez
On Mon, May 07, 2007 at 04:35:02PM -0400, Wayne Topa wrote:
 
 Did I misunderstand something?  I 'thought' I read that the Athlon 
 was a 64-bit processor.  Am I wrong, again?
 
The Athlon64 is a 64-bit processor, as is the Opteron.  The regular
Athlon is definitely a 32-bit processor.
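Incidentally, an easy way to tell from a running Linux system whether the CPU is 64-bit capable is the 'lm' (long mode) flag; a minimal sketch:

```shell
# 'lm' in the cpuinfo flags means AMD64/EM64T long mode is supported.
if grep -qw lm /proc/cpuinfo; then
    echo "CPU is 64-bit capable"
else
    echo "CPU is 32-bit only"
fi
```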

Regards,

-Roberto
-- 
Roberto C. Sánchez
http://people.connexer.com/~roberto
http://www.connexer.com




Re: Problem creating 6TB partition...

2007-04-29 Thread Roberto C. Sánchez
On Sun, Apr 29, 2007 at 02:21:37PM +0200, Stefan Drees wrote:
 Hi,
 yesterday I installed Debian Etch amd64 on an FSC server with a Dell
 MD1000 disk array attached.  The total size of the disk array is 6TB,
 and I can see the complete 6TB in fdisk and cfdisk.  If I try to create
 a 6TB partition, it seems to work, but after leaving fdisk or cfdisk
 and reentering, it shows me only a 1.5TB partition and I can't create
 another partition.
 
 Any hints?  What can I do?
 
Some RAID hardware has a limit on physical partition size (or logical
disk size, in Intel's terminology).  At work we have an HP StorageWorks
tray with ~5.6TB of SCSI disks in it.  The Intel RAID card which
controls it does not allow partitions bigger than 2TB.  The solution for
us was to create three ~1.8TB partitions and put them together as three
physical volumes in an LVM volume group.  We then just created one big
logical volume of ~5.6TB.
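The LVM half of that looks roughly like this; a sketch with hypothetical device names (run as root, and adjust to your actual partitions):

```shell
# Register the three ~1.8TB partitions as physical volumes,
# pool them into one volume group, then carve out a single
# logical volume spanning all the free space.
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
vgcreate bigvg /dev/sdb1 /dev/sdc1 /dev/sdd1
lvcreate -l 100%FREE -n biglv bigvg
```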

Regards,

-Roberto

-- 
Roberto C. Sánchez
http://people.connexer.com/~roberto
http://www.connexer.com




Re: Problem creating 6TB partition...

2007-04-29 Thread Roberto C. Sánchez
On Sun, Apr 29, 2007 at 07:00:46PM +0200, Stefan Drees wrote:
 Ok, so I can create one big volume with parted (GPT), or three 2TB
 volumes (DOS) and make one big volume with LVM.  What's the best way,
 and which filesystem for such a big volume?
 
 Last time I used ext3 for a 2TB volume, everything was fine, but I
 needed to disable the filesystem checks with tune2fs because a check
 takes too much time :-).  I'm not feeling really good about that; is
 there a better solution?
 
There was a very long thread about what filesystem is best for large
partitions a while back on debian-user.  It is in the archives if you
are interested.

Personally, I would use XFS (first choice) or JFS (second choice) if
those are available to you.  Both have been available stock (as in no
need to recompile the kernel, assuming you use a Debian-shipped kernel)
since Sarge.  If you cannot use either of those, then ext3 is
acceptable.  However, read the man page carefully and look at all the
options.  You can choose some of the parameters at filesystem creation
time to minimize things like the amount of time it takes to perform a
fsck.  For example, you can make the block size bigger and have fewer
superblock backups, which will reduce the time it takes to both create
and fsck the filesystem.  Of course, you need to understand how the
filesystem will be accessed in order to make the best choices.  That is,
if you will have many small files, you don't want to make the block size
too big, since it will waste much space.  If you will have mostly large
files, then make the block size really big, since you won't waste too
much space but will make filesystem access faster.
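As a concrete (hedged) sketch of that kind of tuning, for a volume holding mostly large files one might create the filesystem along these lines; the numbers and device name are only illustrative, so check mke2fs(8) before copying them:

```shell
# 4 KiB blocks, one inode per MiB (far fewer inodes to scan during
# fsck), sparse superblocks (fewer superblock backups), 1% reserved.
mkfs.ext3 -b 4096 -i 1048576 -O sparse_super -m 1 /dev/sdb1
```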

Regards,

-Roberto
-- 
Roberto C. Sánchez
http://people.connexer.com/~roberto
http://www.connexer.com




Re: Problem creating 6TB partition...

2007-04-29 Thread Roberto C. Sánchez
On Sun, Apr 29, 2007 at 02:58:40PM -0400, Mike Dresser wrote:
 Last time I used ext3 for a 2TB volume, everything was fine, but I
 needed to disable the filesystem checks with tune2fs because a check
 takes too much time :-).  I'm not feeling really good about that; is
 there a better solution?
 
 Keep in mind if you go with XFS, you're going to need 10-15 GB of
 memory or swap space to fsck 6TB.  It takes about 9 GB to xfs_check
 and 3 GB to xfs_repair a 4TB array on one of my systems... oh, and a
 couple of days to do either. :)
 
Is it an exponential growth in the amount of time it takes?  I've had
some XFS partitions that were several hundred GBs (but not close to 1TB)
and those seemed to pass the fsck stage very quickly.

Regards,

-Roberto
-- 
Roberto C. Sánchez
http://people.connexer.com/~roberto
http://www.connexer.com




Re: Problem creating 6TB partition...

2007-04-29 Thread Roberto C. Sánchez
On Sun, Apr 29, 2007 at 03:58:27PM -0400, Mike Dresser wrote:
 On Sun, 29 Apr 2007, Roberto C. Sánchez wrote:
 
 Is it an exponential growth in the amount of time it takes?  I've had
 some XFS partitions that were several hundred GBs (but not close to 1TB)
 and those seemed to pass the fsck stage very quickly.
 
 If I remember right, it's 1 GB of memory per TB of space, plus
 additional memory overhead for X number of inodes.  I have both a large
 filesystem and millions of hardlinks, so lots of inodes.
 
Interesting.  The filesystem to which I was referring stays mostly
empty.  That may explain the results I am seeing.

Regards,

-Roberto
-- 
Roberto C. Sánchez
http://people.connexer.com/~roberto
http://www.connexer.com

