Where is Wol's raid page? I'm about to build a raid box for a NAS.
--"Fascism begins the moment a ruling class, fearing the people may use their
political democracy to gain economic democracy, begins to destroy political
democracy in order to retain its power of exploitation and special privilege
On 01/10/2021 22:21, mad.scientist.at.la...@tutanota.com wrote:
Where is Wol's raid page? I'm about to build a raid box for a NAS.
https://raid.wiki.kernel.org/index.php/Linux_Raid
Cheers,
Wol
On 07.10.20 10:40, Stefan G. Weichinger wrote:
> On 06.10.20 15:08, k...@aspodata.se wrote:
>> Stefan G. Weichinger:
>>> I know the model: ICP5165BR
>>
>> https://ask.adaptec.com/app/answers/detail/a_id/17414/~/support-for-sata-and-sas-disk-drives-with-a-size-of-2tb-or-greater
>>
>> says it
On 06.10.20 15:08, k...@aspodata.se wrote:
> Stefan G. Weichinger:
>> I know the model: ICP5165BR
>
> https://ask.adaptec.com/app/answers/detail/a_id/17414/~/support-for-sata-and-sas-disk-drives-with-a-size-of-2tb-or-greater
>
> says it is supported up to 8TB drives using firmware v5.2.0 Buil
Stefan G. Weichinger:
> On 06.10.20 11:52, k...@aspodata.se wrote:
> > Stefan G. Weichinger:
> >> On 05.10.20 21:32, k...@aspodata.se wrote:
> > ...
> >> What do you think, is 2 TB maybe too big for the controller?
> >
> 0a:0e.0 RAID bus controller: Adaptec AAC-RAID
> >
> > This doesn't really tell us which controller it is
On 05/10/2020 17:01, Stefan G. Weichinger wrote:
On 05.10.20 17:19, Stefan G. Weichinger wrote:
So my issue seems to be: non-working arcconf doesn't let me "enable"
that one drive.
Some kind of progress.
Searched for more and older releases of arcconf, found Version 1.2 that
doesn't crash here.
On 06.10.20 11:52, k...@aspodata.se wrote:
> Some guesses:
>
> https://wiki.debian.org/LinuxRaidForAdmins#aacraid
> says that it requires libstdc++5
>
> arcconf might fork and exec; one could run it under strace to
> see what happens
>
> one could, if the old suse dist. is available
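For what it's worth, a minimal sketch of that strace approach (assuming the
arcconf binary sits in the current directory; -f follows forked children):

  # strace -f -o arcconf.trace ./arcconf getconfig 1
  # grep -E 'exec|open' arcconf.trace | tail

The open()/exec() lines near the end of the trace usually name the missing
library or helper that makes it crash.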
On 06.10.20 11:52, k...@aspodata.se wrote:
> Stefan G. Weichinger:
>> On 05.10.20 21:32, k...@aspodata.se wrote:
> ...
>> What do you think, is 2 TB maybe too big for the controller?
>
0a:0e.0 RAID bus controller: Adaptec AAC-RAID
>
> This doesn't really tell us which controller it is
Stefan G. Weichinger:
> On 05.10.20 16:38, k...@aspodata.se wrote:
...
> But no luck with any version of arcconf so far. Unpacked several zips,
> tried 2 releases, 32 and 64 bits .. all crash.
>
> > Just a poke in the dark, does ldd report all libs found, as in:
> > $ ldd /bin/ls
> > l
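For comparison, a healthy ldd run resolves every shared library to a path,
while a missing one shows up as "not found". A hypothetical check against the
arcconf binary might look like:

  $ ldd ./arcconf | grep 'not found'
          libstdc++.so.5 => not found

which would match the libstdc++5 dependency mentioned elsewhere in the thread.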
Stefan G. Weichinger:
> On 05.10.20 21:32, k...@aspodata.se wrote:
...
> What do you think, is 2 TB maybe too big for the controller?
>>> 0a:0e.0 RAID bus controller: Adaptec AAC-RAID
This doesn't really tell us which controller it is, try with
lspci -s 0a:0e.0 -nn
In the kernel source on
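With -nn, lspci adds the numeric [vendor:device] IDs to the output, which is
what you can match against the kernel's driver tables. Illustrative output
only (the IDs below are placeholders, not taken from this machine):

  0a:0e.0 RAID bus controller [0104]: Adaptec AAC-RAID [9005:0285]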
On 05.10.20 21:32, k...@aspodata.se wrote:
> What if you put it on the 53c1030 card, can you do that, at least to
> verify the disk?
I am 600 km away from that server and the people I could send to the
basement there aren't very competent in these things. I am afraid that
won't work out well.
Stefan G. Weichinger:
...
> Searched for more and older releases of arcconf, found Version 1.2 that
> doesn't crash here.
>
> This lets me view the physical device(s), but the new disk is marked as
> "Failed".
...
What if you put it on the 53c1030 card, can you do that, at least to
verify the disk?
On 05.10.20 17:19, Stefan G. Weichinger wrote:
> So my issue seems to be: non-working arcconf doesn't let me "enable"
> that one drive.
Some kind of progress.
Searched for more and older releases of arcconf, found Version 1.2 that
doesn't crash here.
This lets me view the physical device(s), but the new disk is marked as
"Failed".
On 05.10.20 16:57, Rich Freeman wrote:
> If you're doing software RAID or just individual disks, then you're
> probably going to go into the controller and basically configure that
> disk as standalone, or as a 1-disk "RAID". That will make it appear
> to the OS, and then you can do whatever
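With a working arcconf the usual first steps are to rescan the bus and list
the physical devices (a sketch; controller number 1 is an assumption, check
with arcconf getversion):

  # arcconf rescan 1
  # arcconf getconfig 1 pd

Creating the single-drive logical device itself goes through arcconf's CREATE
subcommand; the exact RAID-level keyword varies between firmware generations,
so check the CREATE help output of your arcconf version.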
On 05.10.20 16:38, k...@aspodata.se wrote:
> And these on the aac, since they have the same scsi host, and I guess
> that scsi ch.0 is for the configured drives and ch.1 for the raw drives:
>> [1:0:1:0]  disk  ICP  SAS2  V1.0  /dev/sda
>> [1:0:2:0]  disk  ICP  Dev
On Mon, Oct 5, 2020 at 10:38 AM wrote:
>
> Stefan G. Weichinger:
> > On an older server the customer replaced a SAS drive.
> >
> > I see it as /dev/sg11, but not yet as /dev/sdX, it is not visible in "lsblk"
>
> Perhaps these links will help:
>
> https://www.cyberciti.biz/faq/linux-checking-sas
Stefan G. Weichinger:
> On an older server the customer replaced a SAS drive.
>
> I see it as /dev/sg11, but not yet as /dev/sdX, it is not visible in "lsblk"
...
Not that I think it will help you much, but there is sys-apps/sg3_utils:
# lsscsi
[0:0:0:0]  disk  ATA  TOSHIBA MG03ACA3  FL1
On an older server the customer replaced a SAS drive.
I see it as /dev/sg11, but not yet as /dev/sdX, it is not visible in "lsblk"
Back then with an installed Suse Linux, I had some GUI tool to create a
VD on top of the physical drive and "enable" it ...
I am searching for how to achieve that in Gentoo.
On Tue, Jan 29, 2019 at 7:36 PM Grant Taylor
wrote:
>
> That assumes that there is a boot loader. There wasn't one with the old
> Slackware boot & root disks.
>
Linux no longer supports direct booting from the MBR.
arch/x86/boot/header.S
bugger_off_msg:
.ascii "Use a boot loader.\r\n"
Peter Humphrey:
...
> In my case I haven't needed an initramfs so far, and now I see I still
> don't need one - why add complication? Having set the kernel option to
> assemble raid devices at boot time, now that /dev/md0 has been created
> I find it ready to go as soon as I boot up an
On 01/29/2019 02:17 PM, Neil Bothwick wrote:
AFAIR the initramfs code is built into the kernel, not as an option. The
reason given for using a cpio archive is that it is simple and available
in the kernel. The kernel itself has an initramfs built into it which is
executed automatically, it's ju
On Tuesday, 29 January 2019 20:37:31 GMT Wol's lists wrote:
> On 28/01/2019 16:56, Peter Humphrey wrote:
> > I must be missing something, in spite of following the wiki instructions.
> > Can someone help an old duffer out?
>
> Gentoo wiki, or kernel raid wiki?
Gentoo wiki.
It's fascinating to se
On Tue, 29 Jan 2019 13:37:43 -0700, Grant Taylor wrote:
> > An initramfs typically loads kernel modules, assuming there are any
> > that need to be loaded.
>
> And where is it going to load them from if said kernel doesn't support
> initrds or loop back devices or the archive or file system ty
On Tue, Jan 29, 2019 at 20:58:37 +0000, Wol's lists wrote:
> On 29/01/2019 19:41, Grant Taylor wrote:
> > The kernel /must/ have (at least) the minimum drivers (and dependencies)
> > to be able to boot strap. It doesn't matter if it's boot strapping an
> > initramfs or otherwise.
> > All of the
On 29/01/2019 19:41, Grant Taylor wrote:
The kernel /must/ have (at least) the minimum drivers (and dependencies)
to be able to boot strap. It doesn't matter if it's boot strapping an
initramfs or otherwise.
All of these issues about lack of a driver are avoided by having the
driver statically built into the kernel.
On Tue, Jan 29, 2019 at 3:37 PM Grant Taylor
wrote:
>
> On 01/29/2019 01:26 PM, Rich Freeman wrote:
> > Uh, an initramfs typically does not exec a second kernel. I guess it
> > could, in which case that kernel would need its own initramfs to get
> > around to mounting its root filesystem. Presum
On 01/29/2019 01:26 PM, Rich Freeman wrote:
Uh, an initramfs typically does not exec a second kernel. I guess it
could, in which case that kernel would need its own initramfs to get
around to mounting its root filesystem. Presumably at some point you'd
want to have your system stop kexecing kernels.
On 28/01/2019 16:56, Peter Humphrey wrote:
I must be missing something, in spite of following the wiki instructions. Can
someone help an old duffer out?
Gentoo wiki, or kernel raid wiki?
Cheers,
Wol
On 29/01/2019 19:01, Rich Freeman wrote:
It would surely be a bug if the kernel were capable of manipulating RAIDs,
but not of initialising and mounting them.
Linus would disagree with you there, and has said as much publicly.
He does not consider initialization to be the responsibility of the kernel.
On Tue, Jan 29, 2019 at 3:15 PM Grant Taylor
wrote:
>
> On 01/29/2019 01:08 PM, Rich Freeman wrote:
>
> You seem to be focusing on the second kernel that the initramfs execs.
>
Uh, an initramfs typically does not exec a second kernel. I guess it
could, in which case that kernel would need its own initramfs to get
around to mounting its root filesystem.
On 01/29/2019 01:08 PM, Rich Freeman wrote:
Obviously. Hence the reason I said that it shouldn't matter if the
module is built in-kernel.
I'm saying it does matter.
I'm not sure why it seems like we're talking past each other here...
You seem to be focusing on the second kernel that the initramfs execs.
On Tue, Jan 29, 2019 at 2:59 PM Grant Taylor
wrote:
>
> On 01/29/2019 12:47 PM, Rich Freeman wrote:
> > It couldn't. Hence the reason I said, "obviously it needs whatever
> > drivers it needs, but I don't see why it would care if they are built
> > -in-kernel vs in-module."
>
> You are missing what I'm saying.
On Tue, Jan 29, 2019 at 2:52 PM Grant Taylor
wrote:
>
> On 01/29/2019 12:33 PM, Rich Freeman wrote:
>
> > However, as soon as you throw so much as a second hard drive in a system
> > that becomes unreliable.
>
> Mounting the root based on UUID (or labels) is *WONDERFUL*. It makes
> the system MUC
On 01/29/2019 12:47 PM, Rich Freeman wrote:
It couldn't. Hence the reason I said, "obviously it needs whatever
drivers it needs, but I don't see why it would care if they are built
-in-kernel vs in-module."
You are missing what I'm saying.
Even the kernel the initramfs uses MUST have support
On 29/01/2019 16:48, Alan Mackenzie wrote:
Hello, All.
On Tue, Jan 29, 2019 at 09:32:19 -0700, Grant Taylor wrote:
On 01/29/2019 09:08 AM, Peter Humphrey wrote:
I'd rather not have to create an initramfs if I can avoid it. Would it
be sensible to start the raid volume by putting an mdadm --assemble
command into, say, /etc/local.d/raid.start?
On 01/29/2019 12:33 PM, Rich Freeman wrote:
If all my boxes could function reliably without an initramfs I probably
would do it that way.
;-)
However, as soon as you throw so much as a second hard drive in a system
that becomes unreliable.
I disagree.
I've been reliably booting and running
On Tue, Jan 29, 2019 at 2:41 PM Grant Taylor
wrote:
>
> On 01/29/2019 12:01 PM, Rich Freeman wrote:
> >
> > That is news to me. Obviously it needs whatever drivers it needs, but
> > I don't see why it would care if they are built in-kernel vs in-module.
>
> How is a kernel going to be able to mount
On 01/29/2019 12:01 PM, Rich Freeman wrote:
Not sure why you would think this. It is just a cpio archive of a root
filesystem that the kernel runs as a generic bootstrap.
IMHO the simple fact that such is used when it is not needed is the ugly part.
This means that your bootstrap for initializing
On Tue, Jan 29, 2019 at 2:22 PM Grant Taylor
wrote:
>
> On 01/29/2019 12:04 PM, Rich Freeman wrote:
> > I don't see the value in using a different configuration on a box simply
> > because it happens to work on that particular box. Dracut is a more
> > generic solution that allows me to keep hosts the same.
On 01/29/2019 12:04 PM, Rich Freeman wrote:
I don't see the value in using a different configuration on a box simply
because it happens to work on that particular box. Dracut is a more
generic solution that allows me to keep hosts the same.
And if all the boxes in the fleet can function without an initramfs
On Tue, Jan 29, 2019 at 1:54 PM Grant Taylor
wrote:
>
> On 01/29/2019 10:58 AM, Rich Freeman wrote:
> > Can't say I've tried it recently, but I'd be shocked if it changed much.
> > The linux kernel guys generally consider this somewhat deprecated
> > behavior, and prefer that users use an initramfs for this sort of thing.
On Tue, Jan 29, 2019 at 1:39 PM Alan Mackenzie wrote:
>
> On Tue, Jan 29, 2019 at 12:58:38 -0500, Rich Freeman wrote:
> > Can't say I've tried it recently, but I'd be shocked if it changed
> > much. The linux kernel guys generally consider this somewhat
> > deprecated behavior, and prefer that users use an initramfs for this
> > sort of thing.
On 01/29/2019 10:58 AM, Rich Freeman wrote:
Can't say I've tried it recently, but I'd be shocked if it changed much.
The linux kernel guys generally consider this somewhat deprecated
behavior, and prefer that users use an initramfs for this sort of thing.
It is exactly the sort of problem an in
Hello, Rich.
On Tue, Jan 29, 2019 at 12:58:38 -0500, Rich Freeman wrote:
> On Tue, Jan 29, 2019 at 11:48 AM Alan Mackenzie wrote:
> > On Tue, Jan 29, 2019 at 09:32:19 -0700, Grant Taylor wrote:
> > > On 01/29/2019 09:08 AM, Peter Humphrey wrote:
> > > > I'd rather not have to create an initramfs
On Tue, Jan 29, 2019 at 11:48 AM Alan Mackenzie wrote:
>
> On Tue, Jan 29, 2019 at 09:32:19 -0700, Grant Taylor wrote:
> > On 01/29/2019 09:08 AM, Peter Humphrey wrote:
> > > I'd rather not have to create an initramfs if I can avoid it. Would it
> > > be sensible to start the raid volume by putting an mdadm --assemble
On 01/29/2019 09:48 AM, Alan Mackenzie wrote:
However, there's another quirk which bit me: something in the Gentoo
installation disk took it upon itself to renumber my /dev/md2 to
/dev/md127. I raised bug #539162 for this, but it was decided not to
fix it. (This was back in February 2015.)
Hello, All.
On Tue, Jan 29, 2019 at 09:32:19 -0700, Grant Taylor wrote:
> On 01/29/2019 09:08 AM, Peter Humphrey wrote:
> > I'd rather not have to create an initramfs if I can avoid it. Would it
> > be sensible to start the raid volume by putting an mdadm --assemble
> > command into, say, /etc/local.d/raid.start?
On 01/29/2019 09:08 AM, Peter Humphrey wrote:
I'd rather not have to create an initramfs if I can avoid it. Would it
be sensible to start the raid volume by putting an mdadm --assemble
command into, say, /etc/local.d/raid.start? The machine doesn't boot
from /dev/md0.
Drive-by comment.
I tho
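For reference, a minimal sketch of such a script (assuming the array members
/dev/sda2 and /dev/sdb2 from earlier in the thread, and that nothing boots
from the array itself):

  #!/bin/sh
  # /etc/local.d/raid.start - assemble the backup array late in boot
  mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2

The file needs the .start suffix and the executable bit so OpenRC's local
service runs it.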
On Tuesday, 29 January 2019 16:08:27 GMT Peter Humphrey wrote:
> On Tuesday, 29 January 2019 09:20:46 GMT Mick wrote:
>
> Hello Mick,
>
> --->8
>
> > Do you have CONFIG_MD_RAID1 (or whatever it should be these days) built in
> > your kernel?
>
> Yes, I have, but something else was missing: CONFIG_DM_RAID=y.
On Tuesday, 29 January 2019 09:20:46 GMT Mick wrote:
Hello Mick,
--->8
> Do you have CONFIG_MD_RAID1 (or whatever it should be these days) built in
> your kernel?
Yes, I have, but something else was missing: CONFIG_DM_RAID=y. This is in the
SCSI section, which I'd overlooked (I hadn't needed i
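A quick way to check which of those options a kernel actually carries (the
/proc/config.gz path assumes CONFIG_IKCONFIG_PROC is enabled; otherwise grep
the .config in the source tree):

  $ zgrep -E 'CONFIG_(MD_RAID1|DM_RAID)=' /proc/config.gz
  $ grep -E 'CONFIG_(MD_RAID1|DM_RAID)=' /usr/src/linux/.config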
Hello Peter,
On Monday, 28 January 2019 16:56:57 GMT Peter Humphrey wrote:
> Hello list,
> When I run "mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2
> /dev/sdb2", this is what I get:
>
> # mdadm --stop /dev/md0
> mdadm: stopped /dev/md0
> # mdadm: /dev/sda2 appears to contain an
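When mdadm warns that a member appears to contain an existing filesystem or
array, the usual remedy, once you're sure nothing on the partition is still
needed, is to clear the old signatures before re-running --create (a sketch):

  # wipefs -a /dev/sda2
  # mdadm --zero-superblock /dev/sda2   (only if an old md superblock is present)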
Hello list,
(I've been off-line for ten days and I haven't yet caught up with the list. I
had to send my machine to its maker to have a cooling-system hardware fault
fixed.)
I've added two SSDs to my workstation, intending to create a RAID-1 array on
them to store backups (which may be another
Thank you all! :) It's finally all clear to me.
I'm going to do raid 10. Anyway, I'm going to run a benchmark before
installing.
Thank you! ;)
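As a sketch of such a benchmark (fio parameters here are illustrative; the
read job is non-destructive, but double-check the device name first):

  # fio --name=seqread --filename=/dev/md0 --rw=read --bs=1M \
        --size=2G --direct=1 --ioengine=libaio --iodepth=16

Running the same job against a single member disk shows the scaling factor
directly.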
2014-02-24 14:03 GMT-03:00 Jarry :
> On 24-Feb-14 7:27, Facundo Curti wrote:
>
>> n = number of disks
>>
>> reads:
>>    raid1: n*2
>>    raid0: n*2
>>
>> writes:
On 24-Feb-14 7:27, Facundo Curti wrote:
n = number of disks
reads:
    raid1: n*2
    raid0: n*2
writes:
    raid1: n
    raid0: n*2
But, in real life, the reads from raid 0 don't work like that at all, because
if you use a "chunk size" of 4k, and you need to read just 2kb (most binary
files, txt files, e
On 24/02/2014 06:27, Facundo Curti wrote:
Hi. It's me again, with a question similar to the previous one.
I want to install RAID on SSDs.
Comparing THEORETICALLY, RAID0 (stripe) vs RAID1 (mirror), the
performance would be something like this:
n = number of disks
reads:
    raid1: n*2
    raid0: n*2
writes:
Hi. It's me again, with a question similar to the previous one.
I want to install RAID on SSDs.
Comparing THEORETICALLY, RAID0 (stripe) vs RAID1 (mirror), the performance
would be something like this:
n = number of disks
reads:
    raid1: n*2
    raid0: n*2
writes:
    raid1: n
    raid0: n*2
But, in real life
> Please let us know what the performance is like when using the setup
> you are thinking of.
Of course. I will post them here :)
2014-02-22 16:13 GMT-03:00 Facundo Curti :
> Thank you so much for the help! :) It was very useful.
>
> I just need to wait for my new PC, and try it *.* hehe.
>
> Bytes! ;)
Thank you so much for the help! :) It was very useful.
I just need to wait for my new PC, and try it *.* hehe.
Bytes! ;)
On 22/02/2014 11:41, J. Roeleveld wrote:
On Sat, February 22, 2014 06:27, Facundo Curti wrote:
Hi all. I'm new on the list; this is my third message :)
First of all, I need to apologize if my English is not perfect. I speak
Spanish. I post here because gentoo-user-es is half dead, and it's a
On 05/09/2013 07:13, J. Roeleveld wrote:
On Thu, September 5, 2013 05:04, James wrote:
Hello,
What would folks recommend as a Gentoo
installation guide for a 2 disk Raid 1
installation? My previous attempts all failed
trying to follow (integrating info from)
a myriad-malaise of old docs.
I w
On Sat, 22 February 2014, at 5:27 am, Facundo Curti
wrote:
> ...
> I'm going to get a new PC with a 120GB SSD and another 1TB HDD. But
> in the near future, I want to add 2 or more SSDs.
>
> My idea now is:
>
> Disk HDD: /dev/sda
> /dev/sda1 26GB
> /dev/sda
On Sat, February 22, 2014 06:27, Facundo Curti wrote:
> Hi all. I'm new on the list; this is my third message :)
> First of all, I need to apologize if my English is not perfect. I speak
> Spanish. I post here because gentoo-user-es is half dead, and it's a
> great chance to practice my English
On Sat, Feb 22, 2014 at 12:41 AM, Canek Peláez Valdés wrote:
[ snip ]
> [1] http://article.gmane.org/gmane.linux.gentoo.user/269586
> [2] http://article.gmane.org/gmane.linux.gentoo.user/269628
Also, check [3], since the solution on [2] was unnecessarily complex.
Regards.
[3] http://comments.gm
On Fri, Feb 21, 2014 at 11:27 PM, Facundo Curti wrote:
> Hi all. I'm new on the list; this is my third message :)
> First of all, I need to apologize if my English is not perfect. I speak
> Spanish. I post here because gentoo-user-es is half dead, and it's a
> great chance to practice my English :) Now, the problem.
Hi all. I'm new on the list; this is my third message :)
First of all, I need to apologize if my English is not perfect. I speak
Spanish. I post here because gentoo-user-es is half dead, and it's a
great chance to practice my English :) Now, the problem.
I'm going to get a new PC with a 120GB SSD
On Tuesday 15 Oct 2013 20:28:46 Paul Hartman wrote:
> On Tue, Oct 15, 2013 at 2:34 AM, Mick wrote:
> > Hi All,
> >
> > I haven't had to set up a software RAID for years, and now I want to set
> > up two RAID 1 arrays on a new file server to serve SMB to MS Windows
> > clients. The first RAID1 ha
On Tue, Oct 15, 2013 at 2:34 AM, Mick wrote:
> Hi All,
>
> I haven't had to set up a software RAID for years, and now I want to set up
> two RAID 1 arrays on a new file server to serve SMB to MS Windows clients. The
> first RAID1 having two disks, where a multipartition OS installation will take
Hi All,
I haven't had to set up a software RAID for years, and now I want to set up
two RAID 1 arrays on a new file server to serve SMB to MS Windows clients. The
first RAID1 having two disks, where a multipartition OS installation will take
place. The second RAID1 having two disks for a sing
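One handy step when building mirror pairs like these is cloning the partition
table from the first disk to its partner so both members match exactly (a
sketch for MBR disks; double-check the target device name first):

  # sfdisk -d /dev/sda | sfdisk /dev/sdb

For GPT disks the sgdisk equivalent is "sgdisk -R /dev/sdb /dev/sda" followed
by "sgdisk -G /dev/sdb" to randomize the copied GUIDs.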
On 05.09.2013 05:04, James wrote:
Do you want to use a software raid or hardware raid?
File system that is best for a Raid 1 workstation?
Well, of course, only file systems supported by the rescue system
of your hosting provider.
File system that is best for a Raid 1
(casual usage)
On Thu, September 5, 2013 05:04, James wrote:
> Hello,
>
> What would folks recommend as a Gentoo
> installation guide for a 2 disk Raid 1
> installation? My previous attempts all failed
> trying to follow (integrating info from)
> a myriad-malaise of old docs.
I would start with the Raid+LVM Quick Install Guide
Hello,
What would folks recommend as a Gentoo
installation guide for a 2 disk Raid 1
installation? My previous attempts all failed
trying to follow (integrating info from)
a myriad-malaise of old docs.
It seems much of the documentation for such is
deprecated, with large disk, newer file system
Thx a lot Paul,
this morning I noticed there was some kind of issue with my old initrd, which
works fine for 2.6 kernels, so I created a new initrd which works fine and
lets me boot into GNU/Gentoo Linux with the same 3.5 bzImage.
Gonna check if the issue came from mdadm, thx :)
2012/10/26 Paul Hartman
On Fri, Oct 26, 2012 at 3:36 AM, Pau Peris wrote:
> Hi,
>
>
> I'm running GNU/Gentoo Linux with a custom-compiled kernel and I've just
> migrated from a 2.6 kernel to a 3.5.
>
>
> As my HDs are in raid 0 mode I use a custom initrd file in order to be able
> to boot. While kernel 2.6 is able to bo
Hi,
thanks a lot for both answers.
I've just checked my kernel config and CONFIG_SCSI_SCAN_ASYNC is not set,
so I'm going to take a look at it all with "set -x".
Thanks :)
2012/10/26 J. Roeleveld
> Pau Peris wrote:
>>
>> Hi,
>>
>>
>> I'm running GNU/Gentoo Linux with a custom-compiled kernel and
Pau Peris wrote:
>Hi,
>
>
>I'm running GNU/Gentoo Linux with a custom-compiled kernel and I've just
>migrated from a 2.6 kernel to a 3.5.
>
>
>As my HDs are in raid 0 mode I use a custom initrd file in order to be
>able to boot. While kernel 2.6 is able to boot without problems, the new 3.5
>co
On Fri, 26 Oct 2012 10:36:38 +0200, Pau Peris wrote:
> As my HDs are in raid 0 mode I use a custom initrd file in order to be
> able to boot. While kernel 2.6 is able to boot without problems, the new
> 3.5 compiled kernel fails to boot complaining about "no block devices
> found". After taking a
Hi,
I'm running GNU/Gentoo Linux with a custom-compiled kernel and I've just
migrated from a 2.6 kernel to a 3.5.
As my HDs are in raid 0 mode I use a custom initrd file in order to be
able to boot. While kernel 2.6 is able to boot without problems, the new 3.5
compiled kernel fails to boot complaining about "no block devices found".
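For a hand-rolled initrd like that, the usual fix is to have its init script
assemble the array explicitly instead of relying on autodetection (a sketch,
assuming mdadm, its config, and the needed drivers were copied into the
initramfs, and that root is on /dev/md0):

  # inside the initramfs /init, before mounting root:
  mdadm --assemble --scan
  mount -o ro /dev/md0 /newroot
  exec switch_root /newroot /sbin/init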
On 2011-07-30 03:04, james wrote:
> Ok so my first issue is the installation media
> and a lack of tools for GPT (GUID Partition Table).
> the "4k block" (GPT) issue? Maybe I missed it
> on the minimal CD?
If you're after GPT-able partition software you can use (g)parted,
available on the Gen
Ok so my first issue is the installation media
and a lack of tools for GPT (GUID Partition Table).
On the minimal.iso [1] I see in sbin:
cfdisk fdisk mac-fdisk pmac-fdisk sfdisk
Perhaps another installation media that makes
setting up the identical (raid) drives that
have the "4K block iss
On Thu, Mar 31, 2011 at 2:46 PM, James wrote:
>
>
> Hello,
>
> I'm about to install a dual HD (mirrored) gentoo
> software raid system, with BTRFS. Suggestion,
> guides and documents to reference are all welcome.
>
> I have this link, which is down as the best example:
> http://en.gentoo-wiki.com
On Thu, Mar 31, 2011 at 12:46 PM, James wrote:
>
>
> Hello,
>
> I'm about to install a dual HD (mirrored) gentoo
> software raid system, with BTRFS. Suggestion,
> guides and documents to reference are all welcome.
>
> I have this link, which is down as the best example:
> http://en.gentoo-wiki.com
Hello,
I'm about to install a dual HD (mirrored) gentoo
software raid system, with BTRFS. Suggestion,
guides and documents to reference are all welcome.
I have this link, which is down as the best example:
http://en.gentoo-wiki.com/wiki/RAID/Software
Additionally, I have these links for a gui
I have a newish high-end machine here that's causing me some problems
with RAID, but looking at log files and dmesg I don't think the
problem is actually RAID but more likely udev. I'm looking for some
ideas on how to debug this.
The hardware:
Asus Rampage II Extreme
Intel Core i7-980x
12GB DRAM
5
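When disks appear late or not at all, watching udev while the devices settle
usually narrows it down (a sketch; run during or right after boot):

  # udevadm monitor --kernel --udev
  # udevadm settle
  # udevadm info --query=all --name=/dev/sda

Slow or missing 'add' events in the monitor output point at the driver;
events that arrive but produce no /dev nodes point at the udev rules.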
On Sat, Apr 17, 2010 at 12:00 PM, David Mehler wrote:
> Hello,
> I've got a new gentoo box with two drives that I'm using raid1 on. On
> boot the md raid autodetection is failing. Here's the error I'm
> getting:
>
>
> I've booted with a live CD and checked the arrays; they look good, I'm
> not sure
On Saturday, 17 April 2010, David Mehler wrote:
> Hello,
> I've got a new gentoo box with two drives that I'm using raid1 on. On
> boot the md raid autodetection is failing. Here's the error I'm
> getting:
>
> md: Waiting for all devices to be available before autodetect
> md: If you don't use raid, use raid=noautodetect
Hello,
I've got a new gentoo box with two drives that I'm using raid1 on. On
boot the md raid autodetection is failing. Here's the error I'm
getting:
md: Waiting for all devices to be available before autodetect
md: If you don't use raid, use raid=noautodetect
md: Autodetecting RAID arrays.
md: Sc
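A robust alternative to kernel autodetection is pinning the array in
/etc/mdadm.conf and assembling from userspace (a sketch; the UUID below is a
placeholder for your own array's):

  # mdadm --detail --scan >> /etc/mdadm.conf
  # cat /etc/mdadm.conf
  ARRAY /dev/md0 metadata=0.90 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx

In-kernel autodetect only works for 0.90-metadata arrays on partitions typed
0xfd, which is why it often fails on newly created arrays.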
On Mon, Mar 22, 2010 at 8:51 AM, Paul Hartman
wrote:
> On Sun, Mar 21, 2010 at 7:12 AM, KH wrote:
>> On 20.03.2010 19:26, Mark Knecht wrote:
>> [...]
>>>
>>> So the chassis and drives for this 1st machine are on order. 6 1TB
>>> green drives. []
>>> - Mark
>>>
>>
>> Hi Mark,
>>
>> What do you m
On Sun, Mar 21, 2010 at 7:12 AM, KH wrote:
> On 20.03.2010 19:26, Mark Knecht wrote:
> [...]
>>
>> So the chassis and drives for this 1st machine are on order. 6 1TB
>> green drives. []
>> - Mark
>>
>
> Hi Mark,
>
> What do you mean by "green drives"? I had been told - but never searched for
> confirmation - that those energy saving drives change spinning and
On 20.03.2010 19:26, Mark Knecht wrote:
> On Sat, Mar 20, 2010 at 9:38 AM, KH wrote:
>> Mark Knecht schrieb:
>>>
>>
> :-) Yeah.. Well, keeping my wife's data safe
> keeps me happy. :-)
>
> So the chassis and drives for this 1st machine are on order. 6 1TB
> green drives. Now I just need to dec
On 20.03.2010 19:29, Mark Knecht wrote:
[...]
I'm thinking I'll keep it as simple as possibly and just spread out
the Gentoo install over the multiple hard drives without using RAID,
but maybe not. It would be nice to have everything on RAID but I don't
know if I should bite that off for my fi
On 20.03.2010 19:26, Mark Knecht wrote:
[...]
So the chassis and drives for this 1st machine are on order. 6 1TB
green drives. []
- Mark
Hi Mark,
What do you mean by "green drives"? I had been told - but never searched
for confirmation - that those energy saving drives change spinning and
On Sat, Mar 20, 2010 at 6:22 AM, Florian Philipp
wrote:
> On 19.03.2010 23:40, Mark Knecht wrote:
> [...]
>>
>> The LVM Install doc is pretty clear about not putting these in LVM:
>>
>> /etc, /lib, /mnt, /proc, /sbin, /dev, and /root
>>
>
> /boot shouldn't be there, either. Not sure about /bin
On Sat, Mar 20, 2010 at 9:38 AM, KH wrote:
> Mark Knecht schrieb:
>>
>> Hi,
>
> [...]
>>
>> 3) Wife's new desktop
>
> [...]
>>
>> I want high reliability
>
> [...]
>>
>> The most important task of this machine is to keep data safe.
>
> [...]
>>
>> Thanks,
>> Mark
>>
>
> Hi Mark,
>
> For me it sounds like those points just don't fit together ;-)
Mark Knecht schrieb:
Hi,
[...]
3) Wife's new desktop
[...]
I want high reliability
[...]
The most important task of this machine is to keep data safe.
[...]
Thanks,
Mark
Hi Mark,
For me it sounds like those points just don't fit together ;-)
Regards
kh
On 19.03.2010 23:40, Mark Knecht wrote:
[...]
>
>The LVM Install doc is pretty clear about not putting these in LVM:
>
> /etc, /lib, /mnt, /proc, /sbin, /dev, and /root
>
/boot shouldn't be there, either. Not sure about /bin
> which seems sensible. From an install point of view I'm wonde
Hi,
I'm starting to put together a server-type machine with multiple purposes:
1) MythTV server
2) General backups for another fast machine I'm going to build
3) Wife's new desktop
Since all of these requirements are pretty modest but I want high
reliability I'd like to do software RAID wit
On Monday 01 February 2010 12:58:49 J. Roeleveld wrote:
> Hi All,
>
> I am currently installing a new server and am using Linux software raid to
> merge 6 * 1.5TB drives in a RAID5 configuration.
>
> Creating the RAID5 takes over 20 hours (according to "cat /proc/mdstat")
>
> Is there a way that will speed this up?
>> It would be interesting to know whether hardware RAID would behave any
>> differently or allow the sync to perform in the background. I have
>> only 1.5TB in RAID5 across 4 x 500gb drives at present; IIRC the
>> expansion from 3 x drives took some hours, but I can't recall the
>> initial setup.
On Monday 01 February 2010 14:20:28 Stroller wrote:
> On 1 Feb 2010, at 11:58, J. Roeleveld wrote:
> > ...
> > I am currently installing a new server and am using Linux software
> > raid to
> > merge 6 * 1.5TB drives in a RAID5 configuration.
> >
> > Creating the RAID5 takes over 20 hours (accordin
On 1 Feb 2010, at 11:58, J. Roeleveld wrote:
...
I am currently installing a new server and am using Linux software
raid to
merge 6 * 1.5TB drives in a RAID5 configuration.
Creating the RAID5 takes over 20 hours (according to "cat /proc/mdstat")
Is there a way that will speed this up?
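The md sync speed is throttled by two sysctls; raising the floor usually
helps when the machine is otherwise idle (values are in KiB/s and only an
example):

  # sysctl dev.raid.speed_limit_min
  dev.raid.speed_limit_min = 1000
  # sysctl -w dev.raid.speed_limit_min=50000
  # sysctl -w dev.raid.speed_limit_max=200000

Note the array is already usable while the initial sync runs in the
background; there is no need to wait for it before making filesystems.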