On Tue, Jan 11, 2011 at 8:17 PM, Joseph L. Casale wrote:
> I am looking at an LDAP module for Apache httpd for authentication.
> 'yum install' gives me 'mod_authz_ldap.i386 0:0.26-9.el5_5.1', whereas
> on the Apache documentation site I find the mod_authNz_ldap module. The
> two modules appear to be different, judging by the available directives.
> Any clues or suggestions?
On Tue, Jan 11, 2011 at 08:42:55PM -0700, compdoc wrote:
> zfs-fuse.x86_64 is from epel - at least some users trust that repo.
EPEL is very trustworthy, but I for one wouldn't use ZFS fuse for
anything "Enterprise" (though I would use it for testing, or personal
use).
As an aside, a company calle
zfs-fuse.x86_64 is from epel - at least some users trust that repo.
On Jan 11, 2011, at 6:28 PM, Christopher Chan wrote:
> On Wednesday, January 12, 2011 10:07 AM, compdoc wrote:
>> I never said it was native. zfs-fuse.x86_64
>>
>
> Not a Centos or a RHEL package. Please don't bring up experimental
> software in threads that are comparing filesystems for production use.
I didn't bring up experimental software - I thought that's what he was
using. I misread.
And it worked quite well, except for write speeds. There are some cool
features with zfs.
Trying to decide just what file system to use for these larger and larger
arrays is something I've been facing very recently.
On Wednesday, January 12, 2011 10:07 AM, compdoc wrote:
> I never said it was native. zfs-fuse.x86_64
>
Not a Centos or a RHEL package. Please don't bring up experimental
software in threads that are comparing filesystems for production use.
If you want to suggest ZFS, you should suggest that th
I never said it was native. zfs-fuse.x86_64
On Jan 11, 2011, at 5:17 PM, Digimer wrote:
> On 01/11/2011 08:00 PM, Christopher Chan wrote:
>> On Wednesday, January 12, 2011 08:51 AM, compdoc wrote:
>> Lots of protection for your data? Let's see, super aggressive caching and
>> no data journaling only metadata journaling, what on earth are you
>> blabbering about?
On 01/11/11 5:34 PM, aurfal...@gmail.com wrote:
> On Jan 11, 2011, at 2:06 PM, Steve Thompson wrote:
>
>> On Tue, 11 Jan 2011, aurfal...@gmail.com wrote:
>>
>>> I'm attempting to use parted to create a partition on a 28TB volume
>>> which consists of 16x2TB drives configured in a Raid 5 + spare, so
>>> total unformatted size is 28TB to the OS.
On Jan 11, 2011, at 2:06 PM, Steve Thompson wrote:
> On Tue, 11 Jan 2011, aurfal...@gmail.com wrote:
>
>> I'm attempting to use parted to create a partition on a 28TB volume
>> which consists of 16x2TB drives configured in a Raid 5 + spare, so
>> total unformatted size is 28TB to the OS.
On Jan 11, 2011, at 7:51 PM, "compdoc" wrote:
> Lots of protection for your data? Let's see, super aggressive caching and
> no data journaling only metadata journaling, what on earth are you
> blabbering about?
>
> Use XFS with anything that has no BBU cache support or barrier support and
> recent files are toast when there is a crash or sudden power loss.
On 01/11/2011 08:00 PM, Christopher Chan wrote:
> On Wednesday, January 12, 2011 08:51 AM, compdoc wrote:
>> Lots of protection for your data? Let's see, super aggressive caching and
>> no data journaling only metadata journaling, what on earth are you
>> blabbering about?
>>
>> Use XFS with anything that has no BBU cache support or barrier support and
>> recent files are toast when there is a crash or sudden power loss.
On Wednesday, January 12, 2011 08:51 AM, compdoc wrote:
> Lots of protection for your data? Let's see, super aggressive caching and
> no data journaling only metadata journaling, what on earth are you
> blabbering about?
>
> Use XFS with anything that has no BBU cache support or barrier support and
> recent files are toast when there is a crash or sudden power loss.
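For reference, checking the barrier situation the quote describes is
straightforward - a sketch, with an illustrative device name (XFS mounts
with barriers on by default; hdparm -W0 disables the on-drive write cache,
the usual advice when there is no BBU):
mount -o barrier /dev/sdb1 /export
dmesg | grep -i barrier    # watch for 'Disabling barriers' warnings
hdparm -W0 /dev/sdb        # no BBU: turn off the drive write cache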
On Wednesday, January 12, 2011 02:55 AM, compdoc wrote:
> XFS is safe - lots of protection for your data, but it cuts write speeds in
> half.
When did XFS start looking like reiserfs?
Lots of protection for your data? Let's see, super aggressive caching
and no data journaling only metadata journaling, what on earth are you
blabbering about?
I think it's better to let parted decide how big the partition can be:
mkpart primary 0 -1
That should create a partition without a fs type (no ext3, etc.), starting at
zero and using all available space.
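For reference, the same thing non-interactively, assuming the array shows
up as /dev/sdb (-1 means end of disk):
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 0 -1
parted /dev/sdb print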
If you have the Advanced Format hard drives being sold these days, they say
you can have performance problems if the partitions aren't aligned properly.
On Jan 11, 2011, at 3:39 PM, Alan Hodgson wrote:
> On January 11, 2011 03:16:23 pm aurfal...@gmail.com wrote:
>> mkpart 0 3T it works and the partition is 3TB.
>>
>> This is a hardware based Areca RAID. I didn't feel the need to load
>> any Areca drivers as Centos supports this out the box.
On January 11, 2011 03:16:23 pm aurfal...@gmail.com wrote:
> mkpart 0 3T it works and the partition is 3TB.
>
> This is a hardware based Areca RAID. I didn't feel the need to load
> any Areca drivers as Centos supports this out the box.
>
> Any ideas?
>
Maybe it can only make 16TB partitions?
On Jan 11, 2011, at 2:56 PM, compdoc wrote:
> mklabel gpt
>
> then use zfs and zpool commands. Lots of good info on google.
Well, I did that and it still shows 2199GB.
Any ideas why, or am I hung up on benign errors?
Whenever I do this in parted:
mkpart primary 0 26T
I get:
Error: The locati
Hi,
I am looking at an LDAP module for Apache httpd for authentication.
'yum install' gives me 'mod_authz_ldap.i386 0:0.26-9.el5_5.1', whereas
on the Apache documentation site I find the mod_authNz_ldap module. The
two modules appear to be different, judging by the available directives.
Any clues or suggestions?
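For reference, a minimal sketch of the mod_authnz_ldap variant from the
Apache 2.2 docs - the hostname and base DN here are placeholders, not real
values:
LoadModule ldap_module modules/mod_ldap.so
LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
<Location /private>
    AuthType Basic
    AuthName "LDAP protected area"
    AuthBasicProvider ldap
    AuthLDAPURL "ldap://ldap.example.com/ou=People,dc=example,dc=com?uid"
    Require valid-user
</Location>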
mklabel gpt
Then use the zfs and zpool commands. Lots of good info on Google.
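A minimal sketch of that route with zfs-fuse, assuming the RAID volume shows
up as /dev/sdb (pool and filesystem names are made up):
zpool create tank /dev/sdb
zfs create tank/home
zfs set mountpoint=/export/home tank/home
zpool list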
On Tue, 11 Jan 2011, aurfal...@gmail.com wrote:
> I'm attempting to use parted to create a partition on a 28TB volume
> which consists of 16x2TB drives configured in a Raid 5 + spare, so
> total unformatted size is 28TB to the OS.
I don't know the answer to your parted question, but let me be t
On Jan 11, 2011, at 1:45 PM, Lisandro Grullon wrote:
> What filesystem are you planning to use? I am hoping for XFS in such
> a large volume.
Yes, I posted earlier and was convinced of XFS.
Any ideas why parted is showing the wrong size?
- aurf
What filesystem are you planning to use? I am hoping for XFS in such a large
volume.
>>> 1/11/2011 4:41 PM >>>
Hello again,
Been an interesting day.
I'm attempting to use parted to create a partition on a 28TB volume
which consists of 16x2TB drives configured in a Raid 5 + spare, so
total unformatted size is 28TB to the OS.
Hello again,
Been an interesting day.
I'm attempting to use parted to create a partition on a 28TB volume
which consists of 16x2TB drives configured in a Raid 5 + spare, so
total unformatted size is 28TB to the OS.
However, upon entering parted and making a gpt label, print reports
back 2199GB.
- Original Message -
| On Jan 11, 2011, at 10:59 AM, Joshua Baker-LePain wrote:
|
| > On Tue, 11 Jan 2011 at 1:49pm, Digimer wrote
| >
| >> On 01/11/2011 01:47 PM, aurfal...@gmail.com wrote:
| >>> Hi all,
| >>>
| >>> I've a 30TB hardware based RAID array.
| >>>
| >>> Wondering what you all thought of using ext4 over XFS.
On Sunday, January 09, 2011 05:31:25 pm Kai Schaetzl wrote:
> As I
> understand once LVM gets loaded it should find the volumes by itself, but
> will it be able to use the same naming scheme for instance? Or do I have
> to do some additional stuff, anyway?
I've done this, and there are a couple
On Tuesday, January 11, 2011 01:47:54 pm Kwan Lowe wrote:
> Also note that in some cases the lvm tools must be called by
> specifying lvm before the command
>
> lvm pvscan
> lvm vgchange -ay VolGroup00
For the archive, note that this is the case in the dracut shell
(accessed at boot on error).
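For reference, the whole sequence from such a rescue shell, assuming the
stock CentOS volume group name (substitute your own):
lvm pvscan
lvm vgscan
lvm vgchange -ay VolGroup00
lvm lvs
mount /dev/VolGroup00/LogVol00 /mnt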
On Tue, Jan 11, 2011 at 3:12 PM, Blake Hudson wrote:
>
> I have been waiting for RHEL6/CentOS6 because, as I understand it,
> CentOS5 does not have a stateful IP6 firewall - e.g. incoming traffic
> would have to have a default ACCEPT policy or only specific applications
> allowed (based on source
On Tuesday, January 11, 2011 01:47:33 pm aurfal...@gmail.com wrote:
> I've a 30TB hardware based RAID array.
>
> Wondering what you all thought of using ext4 over XFS.
XFS. But make sure you're using a 64-bit CentOS. 32-bit CentOS (at least C5
of six months or so ago) will in fact run mkfs.xfs on a volume this large,
but a 32-bit kernel can't actually use a filesystem past 16TB.
On Tue, Jan 11, 2011 at 02:12:15PM -0600, Blake Hudson wrote:
> From: Stephen Harris
> > I have an HE tunnel (tunnelbroker.net) IPv6 tunnel. This works pretty
> > well and is simple to set up. Everything works fine.
> >
> > Until I try to set up an ip6tables firewall.
> I have been waiting for RHEL6/CentOS6 because, as I understand it,
> CentOS5 does not have a stateful IP6 firewall.
Original Message
Subject: [CentOS] IPv6, HE tunnel and ip6tables problems
From: Stephen Harris
To: CentOS mailing list
Date: Tuesday, January 11, 2011 1:09:25 PM
> CentOS 5.5, fully patched.
>
> I have an HE tunnel (tunnelbroker.net) IPv6 tunnel. This works pretty
> well and is simple to set up. Everything works fine.
On Tue, 11 Jan 2011 at 11:12am, aurfal...@gmail.com wrote
> My RAID has a stripe size of 32KB and a block size of 512 bytes.
>
> I've usually just done blind XFS formats but would like to tune it for
> smaller files. Of course big/small is relative but in my env, small
> means sub 300MB or so.
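For reference, tuning of the kind being asked about is usually passed to
mkfs.xfs as a stripe unit and width - a sketch assuming the 32KB stripe unit
above and 14 data disks (16 drives minus parity and the hot spare; adjust sw
to the real data-disk count, and the device name to yours):
mkfs.xfs -d su=32k,sw=14 /dev/sdb1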
On 01/11/2011 11:07 AM, aurfal...@gmail.com wrote:
> On Jan 11, 2011, at 11:01 AM, Benjamin Franz wrote:
>
>> On 01/11/2011 10:56 AM, aurfal...@gmail.com wrote:
>>>
>>> I read where ext4 supports 1EB partition size
>>
>> The format supports it - the e2fsprogs tools do not. 16TB is the
>> practical limit.
On Jan 11, 2011, at 10:59 AM, Joshua Baker-LePain wrote:
> On Tue, 11 Jan 2011 at 1:49pm, Digimer wrote
>
>> On 01/11/2011 01:47 PM, aurfal...@gmail.com wrote:
>>> Hi all,
>>>
>>> I've a 30TB hardware based RAID array.
>>>
>>> Wondering what you all thought of using ext4 over XFS.
>>>
>>> I've been a big XFS fan for years as I'm an Irix transplant but would
>>> like your opinions.
CentOS 5.5, fully patched.
I have an HE tunnel (tunnelbroker.net) IPv6 tunnel. This works pretty
well and is simple to set up. Everything works fine.
Until I try to set up an ip6tables firewall.
E.g. if I try to view https://dnssec.surfnet.nl/?p=464 then the page never
displays and the firewall sh
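For reference, since CentOS 5 has no IPv6 connection tracking (the point
made elsewhere in this thread), any ip6tables ruleset there has to be
stateless - a sketch, with illustrative ports:
ip6tables -P INPUT DROP
ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
ip6tables -A INPUT -p tcp --dport 22 -j ACCEPT
# stateless stand-in for ESTABLISHED: accept non-SYN return traffic
ip6tables -A INPUT -p tcp --sport 443 --dport 1024:65535 ! --syn -j ACCEPT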
On Jan 11, 2011, at 11:01 AM, Benjamin Franz wrote:
> On 01/11/2011 10:56 AM, aurfal...@gmail.com wrote:
>>
>> I read where ext4 supports 1EB partition size
>
> The format supports it - the e2fsprogs tools do not. 16TB is the
> practical limit.
>
Have you installed e4fsprogs?
- aurf
On Tue, Jan 11, 2011 at 1:47 PM, wrote:
>
> Hi all,
>
> I've a 30TB hardware based RAID array.
>
> Wondering what you all thought of using ext4 over XFS.
>
> I've been a big XFS fan for years as I'm an Irix transplant but would
> like your opinions.
>
> This 30TB drive will be an NFS exported asset for my users housing
> home dirs and other frequently accessed data.
I use ext4 on my tiny 8TB arrays. Centos 5.5 does support it, although the
gui tools have small issues with it.
Centos 6 should support it better...
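For reference, the ext4 userland on CentOS 5.5 lives in the separate
e4fsprogs package mentioned elsewhere in the thread - a sketch, with an
illustrative device name:
yum install e4fsprogs
mkfs.ext4 /dev/sdb1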
On 01/11/2011 10:56 AM, aurfal...@gmail.com wrote:
>
> I read where ext4 supports 1EB partition size
The format supports it - the e2fsprogs tools do not. 16TB is the
practical limit.
--
Benjamin Franz
On Tue, 11 Jan 2011 at 1:49pm, Digimer wrote
> On 01/11/2011 01:47 PM, aurfal...@gmail.com wrote:
>> Hi all,
>>
>> I've a 30TB hardware based RAID array.
>>
>> Wondering what you all thought of using ext4 over XFS.
>>
>> I've been a big XFS fan for years as I'm an Irix transplant but would
>> like your opinions.
On Jan 11, 2011, at 10:49 AM, Digimer wrote:
> On 01/11/2011 01:47 PM, aurfal...@gmail.com wrote:
>> Hi all,
>>
>> I've a 30TB hardware based RAID array.
>>
>> Wondering what you all thought of using ext4 over XFS.
>>
>> I've been a big XFS fan for years as I'm an Irix transplant but would
>> like your opinions.
XFS is safe - lots of protection for your data, but it cuts write speeds in
half.
Ext4 does not slow things down...
On 01/11/2011 01:47 PM, aurfal...@gmail.com wrote:
> Hi all,
>
> I've a 30TB hardware based RAID array.
>
> Wondering what you all thought of using ext4 over XFS.
>
> I've been a big XFS fan for years as I'm an Irix transplant but would
> like your opinions.
>
> This 30TB drive will be an NFS exported asset for my users housing
> home dirs and other frequently accessed data.
On Sat, Jan 8, 2011 at 4:27 PM, Johan Martinez wrote:
> Hi,
> I am trying to recover data from my old system which had LVM. The disk had
> two partitions - /dev/sda1 (boot, Linux) and /dev/sda2 (Linux LVM). I had
> taken a backup of both partitions using dd.
> Now I am booting off a CentOS live cd for system restore.
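The usual shape of that recovery, sketched with made-up image file names
(write the dd images back to disk, then reactivate the volume group):
dd if=sda1.img of=/dev/sda1 bs=4M
dd if=sda2.img of=/dev/sda2 bs=4M
pvscan
vgchange -ay VolGroup00
mount /dev/VolGroup00/LogVol00 /mnt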
Hi all,
I've a 30TB hardware based RAID array.
Wondering what you all thought of using ext4 over XFS.
I've been a big XFS fan for years as I'm an Irix transplant but would
like your opinions.
This 30TB drive will be an NFS exported asset for my users housing
home dirs and other frequently accessed data.
On Sat, Jan 8, 2011 at 7:17 PM, Johan Martinez wrote:
>
>
> On Sat, Jan 8, 2011 at 3:50 PM, Lamar Owen wrote:
>
>> On Saturday, January 08, 2011 04:27:39 pm Johan Martinez wrote:
>>
>> > Now I am booting off a CentOS live cd for system restore. I recreated
>> > partitions like previous system using
On 1/11/2011 10:18 AM, lheck...@users.sourceforge.net wrote:
>
>> Hashing 4 values to 4 targets seems like collisions would be likely no
>> matter how you do it. The TX packet/byte values from ifconfig on the
>> NICs should show how much went out each interface.
>
> Yes, we checked that in addition to iperf's output. One interface was
> essentially idle.
lheck...@users.sourceforge.net wrote:
>>I guess you need to look at the bonding src code - looks like it is in
>>drivers/net/bonding/bond_main.c - for CentOS 5 it is:
>
>
> C xor is bitwise.
>
> I did a bit of scripting and found that the algorithm seems much more
> sensitive to port numbers than IP addresses.
> Hashing 4 values to 4 targets seems like collisions would be likely no
> matter how you do it. The TX packet/byte values from ifconfig on the
> NICs should show how much went out each interface.
Yes, we checked that in addition to iperf's output. One interface was
essentially idle.
On 1/11/2011 10:05 AM, lheck...@users.sourceforge.net wrote:
>
>> I guess you need to look at the bonding src code - looks like it is in
>> drivers/net/bonding/bond_main.c - for CentOS 5 it is:
>
> C xor is bitwise.
>
> I did a bit of scripting and found that the algorithm seems much more
> sensitive to port numbers than IP addresses.
> I guess you need to look at the bonding src code - looks like it is in
> drivers/net/bonding/bond_main.c - for CentOS 5 it is:
C xor is bitwise.
I did a bit of scripting and found that the algorithm seems much more
sensitive to port numbers than IP addresses. Not that iperf gives much
control over that.
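That sensitivity is easy to reproduce in shell arithmetic using the formula
quoted above (addresses and ports here are made up; 4 is the slave count):
SIP=0x0a000001 DIP=0x0a000002   # 10.0.0.1 and 10.0.0.2
SPORT=5001 DPORT=33000
echo $(( ((SPORT ^ DPORT) ^ ((SIP ^ DIP) & 0xffff)) % 4 ))
The low 16 bits of the IP XOR barely differ between neighbouring hosts, so
the ports end up dominating which slave a flow lands on.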
hey guys,
Sorry for that mistake - it was not CentOS 5.5 I am trying. I actually had
that problem with CentOS 5.4 and managed to solve it, but now the problem is
with Oracle Enterprise Linux (not Unbreakable) and I can't solve it as you
said previously!
On Mon, Jan 3, 2011 at 8:41 AM, cornel panceac wrote:
2011/1/11 Peter Kjellström :
>>
>> So no driver installed. There's a link I found:
>
> "Unknown device" from lspci does not in the general case imply a lack of
> driver. The only thing it says is that the pci-id database does not contain an
> entry for the component. The command "update-pciids" w
lheck...@users.sourceforge.net wrote:
> According to the Linux bonding docs, xmit_hash_policy=layer3+4 uses:
>
>   ((source port XOR dest port) XOR
>    ((source IP XOR dest IP) AND 0xffff))
>     modulo slave count
>
> So I guess you could plug in the above IP addresses and port numbers
> and see if you get collisions.
That sounds great, Jerry - the kernel upgrade worked for you. Just make sure
you monitor that module, since it is new and might still have glitches. Keep
that module up to date.
>>> Jerry Geis 1/11/2011 9:30 AM >>>
I downloaded 2.6.34.8 - compiled and ran the new kernel making sure to
enable XHCI and the device is now registered with lsusb.
I downloaded 2.6.34.8 - compiled and ran the new kernel making sure to
enable XHCI and the device is now registered
with lsusb.
Jerry
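For anyone repeating this, the step being described is roughly as follows
(CONFIG_USB_XHCI_HCD is the mainline config symbol; paths assume a vanilla
kernel tree):
cd linux-2.6.34.8
make menuconfig   # Device Drivers -> USB support -> xHCI HCD (USB 3.0) support
make && make modules_install && make install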
lheck...@users.sourceforge.net wrote:
> I have a Dell server with four bonded, gigabit interfaces. Bonding mode is
> 802.3ad, xmit_hash_policy=layer3+4. When testing this setup with iperf,
> I never get more than a total of about 3Gbps throughput. Is there anything
> to tweak to get better throughput? Or am I running into other limits?
I have a Dell server with four bonded, gigabit interfaces. Bonding mode is
802.3ad, xmit_hash_policy=layer3+4. When testing this setup with iperf,
I never get more than a total of about 3Gbps throughput. Is there anything
to tweak to get better throughput? Or am I running into other limits (e.
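For a test that can actually exercise all four slaves, run several parallel
streams so several source ports are in play - a sketch with iperf's standard
options (hostname illustrative):
iperf -s                                  # on the server
iperf -c server.example.com -P 4 -t 30    # 4 parallel TCP streams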
From: mcclnx mcc
> We have several DELL R900s with PERC 6/E adapters in them. The R900s run
> Redhat Linux. Each R900 has two PERC 6/E adapters and at least two MD1000s
> connected to it.
> Configuration 1:
>   PERC 6/E -- two MD1000
>   PERC 6/E -- empty
> Configuration 2:
On Monday, January 10, 2011 08:50:18 pm Kwan Lowe wrote:
> On Mon, Jan 10, 2011 at 2:36 PM, Jerry Geis wrote:
...
> [snip]
>
> > 01:00.0 USB Controller: NEC Corporation Unknown device 0194 (rev 03)
>
> [snip]
>
> So no driver installed. There's a link I found:
"Unknown device" from lspci does not in the general case imply a lack of
driver. The only thing it says is that the pci-id database does not contain
an entry for the component. The command "update-pciids" will refresh that
database.