Re: Debian HTTPS mirrors

2008-12-24 Thread Ben Scott
On Wed, Dec 24, 2008 at 1:28 PM, Thomas Charron  wrote:
>  Thanks, but in this case, the checkpoint firewall is working
> 'correctly'.  When that is tried, I end up with a pretty brief
> 'Checkpoint denied.  Bugger off'.  :-D

  Ahhh.  Sophistication in filtering.  What a concept.  ;-)  I don't
suppose it's worth asking The Powers That Be for outbound HTTP and/or
FTP for this device, since it's not a danger to this device?  One
occasionally encounters network nazis who are willing to listen to
valid reasons.  (I have to keep the network at $JOB fairly strongly
locked down for all the usual reasons, but if someone has a business
reason for something, I'm generally willing to work with them.  It's
only the people who just want to waste company resources that I say
"no" to.)

  Hmmm, I wonder if you configured both ends to use a null SSL cipher,
if the Checkpoint would be happy if you used SSL without any actual
security.  (Whether this is a good idea or not is left for the reader
to decide; at this point my speculation has become more an exercise in
curiosity.  It would save CPU cycles, though.)

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Debian HTTPS mirrors

2008-12-24 Thread Thomas Charron
On Wed, Dec 24, 2008 at 12:58 PM, Ben Scott  wrote:
> On Wed, Dec 24, 2008 at 12:48 PM, Thomas Charron  wrote:
>> Actually, the machine I'm attempting to build a debian system on in
>> order to do embedded diab builds only has access to https.
>  In my experience, there's a good chance it actually has access to
> more than just HTTP-over-SSL: It likely has access to anything on
> TCP/443.  So run a regular HTTP listener on port 443, and save
> yourself some CPU burden.  :)
>  (Some filters actually verify protocol, not just TCP port number,
> but they're in the minority.)

  Thanks, but in this case, the checkpoint firewall is working
'correctly'.  When that is tried, I end up with a pretty brief
'Checkpoint denied.  Bugger off'.  :-D

-- 
-- Thomas
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Debian HTTPS mirrors

2008-12-24 Thread Ben Scott
On Wed, Dec 24, 2008 at 12:48 PM, Thomas Charron  wrote:
> Actually, the machine I'm attempting to build a debian system on in
> order to do embedded diab builds only has access to https.

  In my experience, there's a good chance it actually has access to
more than just HTTP-over-SSL: It likely has access to anything on
TCP/443.  So run a regular HTTP listener on port 443, and save
yourself some CPU burden.  :)

  (Some filters actually verify protocol, not just TCP port number,
but they're in the minority.)

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Debian HTTPS mirrors

2008-12-24 Thread Thomas Charron
On Wed, Dec 24, 2008 at 12:01 PM, Drew Van Zandt
 wrote:
> If it's absolutely necessary for some reason that you verify stuff at the
> last step, run your own private mirror that does a normal download, then
> verifies before it will serve to your clients.

  Actually, the machine I'm attempting to build a debian system on in
order to do embedded diab builds only has access to https.  I will
more then likely end up doing a mirror at home over the weekend and
serving it over https.  Was just hoping that perhps one of the 'big
boy' providers like pair had already set one up.

-- 
-- Thomas
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Debian HTTPS mirrors

2008-12-24 Thread Drew Van Zandt
If it's absolutely necessary for some reason that you verify stuff at the
last step, run your own private mirror that does a normal download, then
verifies before it will serve to your clients.

--DTVZ

On Wed, Dec 24, 2008 at 11:57 AM, Ben Scott  wrote:

> On Wed, Dec 24, 2008 at 11:41 AM, Thomas Charron 
> wrote:
> >  No luck finding any searching, anyone know if there are any debian
> > mirror sites which can serve over https?
>
>  Given the computational expense involved in encrypting such a large
> payload, I would expect such to be rare and short-lived.  It's
> generally seen as more efficient to verify at the end-point, rather
> than trying to keep the entire distribution chain secure.  My
> understanding is that Debian packages include GPG signatures and MD5
> checksums, which APT checks.  May I ask why that is not sufficient to
> verify integrity and authenticity?
>
> -- Ben
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Debian HTTPS mirrors

2008-12-24 Thread Ben Scott
On Wed, Dec 24, 2008 at 11:41 AM, Thomas Charron  wrote:
>  No luck finding any searching, anyone know if there are any debian
> mirror sites which can serve over https?

 Given the computational expense involved in encrypting such a large
payload, I would expect such to be rare and short-lived.  It's
generally seen as more efficient to verify at the end-point, rather
than trying to keep the entire distribution chain secure.  My
understanding is that Debian packages include GPG signatures and MD5
checksums, which APT checks.  May I ask why that is not sufficient to
verify integrity and authenticity?

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Debian HTTPS mirrors

2008-12-24 Thread Thomas Charron
  No luck finding any searching, anyone know if there are any debian
mirror sites which can serve over https?

-- 
-- Thomas
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: auto-rebuild RAID mirrors

2007-04-30 Thread Derek Atkins
Mark Komarinski <[EMAIL PROTECTED]> writes:

>> Wait, are you getting 20MB/s via NFS?  Or 20MB/s via Samba?
>> I'll note that 20MB/s is 160mbps, which is only about 23% of
>> the usable bandwidth of GigE.
>>   
> Just because the wire supports up to 1Gb, it doesn't mean that the whole
> stack of:
>
> disk
> memory
> CPU
> OS
> TCP/IP stack
> network drivers
> network card
> network switch
>
> will actually be able to push that much data.  My untuned testing on a
> 2.4 kernel a few years ago gave us about 300-500Mbps on a GigE system. 
> Tuning the kernel memory (and using 2.6?) can get you closer to 800Mbps,
> but that really relies on the amount of data you're sending.
>
> In case you're interested, using jumbo packets (9000MTU) doesn't
> necessarily improve network speed, but it does greatly reduce the CPU
> load on the sender and receiver.

I'll point out for the record that my 23% number above is:

   160/.23 == 695

I.e., I used a limit of 700mbps on a GigE.

> -Mark

-derek

-- 
   Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
   Member, MIT Student Information Processing Board  (SIPB)
   URL: http://web.mit.edu/warlord/PP-ASEL-IA N1NWH
   [EMAIL PROTECTED]PGP key available
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: auto-rebuild RAID mirrors

2007-04-23 Thread Mark Komarinski
On 04/20/2007 10:28 AM, Derek Atkins wrote:
> "Tom Buskey" <[EMAIL PROTECTED]> writes:
>
>   
>> I'm getting about 20 MB/s writes over gigabit ethernet to an
>> NFS/Samba server.
>> 
>
> Wait, are you getting 20MB/s via NFS?  Or 20MB/s via Samba?
> I'll note that 20MB/s is 160mbps, which is only about 23% of
> the usable bandwidth of GigE.
>   
Just because the wire supports up to 1Gb, it doesn't mean that the whole
stack of:

disk
memory
CPU
OS
TCP/IP stack
network drivers
network card
network switch

will actually be able to push that much data.  My untuned testing on a
2.4 kernel a few years ago gave us about 300-500Mbps on a GigE system. 
Tuning the kernel memory (and using 2.6?) can get you closer to 800Mbps,
but that really relies on the amount of data you're sending.

In case you're interested, using jumbo packets (9000MTU) doesn't
necessarily improve network speed, but it does greatly reduce the CPU
load on the sender and receiver.

-Mark
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: auto-rebuild RAID mirrors

2007-04-20 Thread Tom Buskey

On 4/20/07, Derek Atkins <[EMAIL PROTECTED]> wrote:


Quoting Tom Buskey <[EMAIL PROTECTED]>:

> Sorry for top posting.  The gmail blackberry agent only let's me type
> above the original.
>
> Yes, I get that in linux/solaris via NFS and cygwin via samba.
>
> I haven' tuned but I don't think I'm way off.  I've read that 60 MB/s
> for backups over GigE is about the limit.  I'm using cheap gear.  I've
> measured 30 MB/s at work with cisco switches and a high end SAN.
>
> The compaq delivered only 1-2 MB/s slower over the net vs local as well.
>
> I need to do some tuning, but I don't think I'd get above 30 MB/s on
> the solaris box.

So it's a Solaris server and a linux client and you see 20MB/s over
NFS?   For what size files?

Have you tried a Linux NFS server?




I have 4 machines.  1 Solaris 10 NFS/Samba server.  1 Fedora 5 NFS/Samba
server, 1 ubuntu laptop, 1 Windows XP desktop with Cygwin.  All on gigabit.

I've been doing this on the client: dd if=/dev/zero bs=1048576 count=1024
of=
I've tried all 4 as clients to either of the servers via NFS and Samba.
The laptop client is slower, but it's a PCMCIA gigabit ethernet so I'd
expect it.



-derek


--
   Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
   Member, MIT Student Information Processing Board  (SIPB)
   URL: http://web.mit.edu/warlord/PP-ASEL-IA N1NWH
   [EMAIL PROTECTED]PGP key available


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: auto-rebuild RAID mirrors

2007-04-20 Thread Derek Atkins

Quoting Tom Buskey <[EMAIL PROTECTED]>:


Sorry for top posting.  The gmail blackberry agent only let's me type
above the original.

Yes, I get that in linux/solaris via NFS and cygwin via samba.

I haven' tuned but I don't think I'm way off.  I've read that 60 MB/s
for backups over GigE is about the limit.  I'm using cheap gear.  I've
measured 30 MB/s at work with cisco switches and a high end SAN.

The compaq delivered only 1-2 MB/s slower over the net vs local as well.

I need to do some tuning, but I don't think I'd get above 30 MB/s on
the solaris box.


So it's a Solaris server and a linux client and you see 20MB/s over
NFS?   For what size files?

Have you tried a Linux NFS server?

-derek

--
  Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
  Member, MIT Student Information Processing Board  (SIPB)
  URL: http://web.mit.edu/warlord/PP-ASEL-IA N1NWH
  [EMAIL PROTECTED]PGP key available

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: auto-rebuild RAID mirrors

2007-04-20 Thread Tom Buskey

Sorry for top posting.  The gmail blackberry agent only let's me type
above the original.

Yes, I get that in linux/solaris via NFS and cygwin via samba.

I haven' tuned but I don't think I'm way off.  I've read that 60 MB/s
for backups over GigE is about the limit.  I'm using cheap gear.  I've
measured 30 MB/s at work with cisco switches and a high end SAN.

The compaq delivered only 1-2 MB/s slower over the net vs local as well.

I need to do some tuning, but I don't think I'd get above 30 MB/s on
the solaris box.

On 4/20/07, Derek Atkins <[EMAIL PROTECTED]> wrote:

"Tom Buskey" <[EMAIL PROTECTED]> writes:

> I'm getting about 20 MB/s writes over gigabit ethernet to an
> NFS/Samba server.

Wait, are you getting 20MB/s via NFS?  Or 20MB/s via Samba?
I'll note that 20MB/s is 160mbps, which is only about 23% of
the usable bandwidth of GigE.

-derek

--
   Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
   Member, MIT Student Information Processing Board  (SIPB)
   URL: http://web.mit.edu/warlord/PP-ASEL-IA N1NWH
   [EMAIL PROTECTED]PGP key available


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: auto-rebuild RAID mirrors

2007-04-20 Thread Derek Atkins
"Tom Buskey" <[EMAIL PROTECTED]> writes:

> I'm getting about 20 MB/s writes over gigabit ethernet to an
> NFS/Samba server.

Wait, are you getting 20MB/s via NFS?  Or 20MB/s via Samba?
I'll note that 20MB/s is 160mbps, which is only about 23% of
the usable bandwidth of GigE.

-derek

-- 
   Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
   Member, MIT Student Information Processing Board  (SIPB)
   URL: http://web.mit.edu/warlord/PP-ASEL-IA N1NWH
   [EMAIL PROTECTED]PGP key available
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: auto-rebuild RAID mirrors

2007-04-19 Thread Tom Buskey

On 4/19/07, Bill McGonigle <[EMAIL PROTECTED]> wrote:


On Apr 18, 2007, at 10:35, Thomas Charron wrote:

>   You're right, I'm
> suprised there isn't more on this online.  It's a GREAT idea to mirror
> a local drive over an external mass storage drive.

It's worth noting that I only usually achieve about a real 14MB/s on
the USB drives.  Firewire gets about 24MB/s, but I had to pull that
card when I got a new mobo to fit an e.SATA card in (fewer slots).



I'm getting about 20 MB/s writes over gigabit ethernet to an NFS/Samba
server.

1 system is a Compaq 1850r dual PIII 500MHz 512MB w/ a $20 4 port SATA I
card and 3 120GB drives in RAID 5 on a Fedora 5 system.  Local disk is about
21 MB/s.  Lowend stuff.

The other system is an AMD Dual Core (165?) 1GB RAM with builtin SATA II.  I
have 4 500GB Seagates running Solaris 10 with ZFS RAID-Z.  Local disk is 60
MB/s uncompressed.  I did a dd if=/dev/zero onto a compressed ZFS area and
got 173 MB/s with 30% compression.  Gigabit ethernet is around 20 MB/s.

I have not done any tuning, but this beats your USB drives and isn't far
from Firewire over ethernet.  Local disk on the Solaris system really beats
it.

It's also worth noting that ZFS does all of this automatically

already, including the snapshots (instantly), so I'll probably try
moving the whole backup system to ZFS when I get another machine for
a Nexenta (Ubuntu on an OpenSolaris kernel) box.



ZFS is very cool as I've said before.  You have to tell it to snapshot or
clone, but there are scripts that can automate it.  I think it could be
comparable to NetApp's multiple simultaneous snapshotting.

The big hit with Solaris is support for the SATA card.  Not many are
confirmed.  It doesn't like RAID in the BIOS, but the cards need BIOS.  My
$20 card didn't work but others have been able to reflash.  I bricked mine.

I ended up copying someone's recipe for my server.  Including data disks was
under $2k from Newegg.

If you want a very solid production server, go with Solaris 10u3.  The
desktop isn't as complete as Linux but that doesn't matter so much on a
server.  If you want experimental, OpenSolaris based might work better as a
desktop.  If you want iSCSI target, that's in OpenSolaris.  Solaris 10 u4
should have it when it comes out in June.
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: auto-rebuild RAID mirrors

2007-04-19 Thread Bill McGonigle

On Apr 18, 2007, at 10:35, Thomas Charron wrote:


 Hrm, apperently, this is all gone in 2.6, sorry for pointing the
wrong way.


No problem.  I'll see if I can find some discussion about its  
progress and removal, that may be enlightening.



  The entire md.c was split up into it's own directory.  I
wonder if there is a way to permanently attach a given drive to a RAID
array, so it remains there if it's present or not.


I've found I have to --fail and --remove a drive to get it out  
cleanly, so my guess is 'no', but many mdadm tricks seem to be  
lightly documented, so I'd love to learn there was another way.   
Dedicated RAID hardware often allows marking of a RAID-1 drive as not  
participating, so this task is a matter of pulling a hotplug slot and  
replacing it.  UUID's or whatever they use just do the right thing  
automagically, but like you say it still has the concept of that  
drive participating, even if it's 'failed'.  Pre-2.6 I tried just  
marking drives 'failed' and pulling the hardware but _bad things_  
happened when I did that.  I haven't yet had the temerity to try that
on 2.6, but I will do that when I get another pair of 320MB drives in  
from NewEgg ($75!) in the next couple weeks (so I have two sets  
offsite, for better disaster recovery odds).



  You're right, I'm
suprised there isn't more on this online.  It's a GREAT idea to mirror
a local drive over an external mass storage drive.


It's worth noting that I only usually achieve about a real 14MB/s on  
the USB drives.  Firewire gets about 24MB/s, but I had to pull that  
card when I got a new mobo to fit an e.SATA card in (fewer slots).   
Either way, there's a big write performance hit, so I wouldn't do USB  
for mirroring a main drive.  In this case, I have 4 drives for  
backup, two as the permanent members, two as removable.  That way I  
can just rsnapshot the main drives to the backup pair (RAID-0)  
overnight and then pull 1 RAID-1 member of each backup pair for off- 
site storage.


The e.SATA drives should be much better, speed wise, when I get the  
latest kernel installed (for chipset support), so the above problem  
shouldn't exist for e.SATA, but there still is the problem of having  
only 2 ports on an e.SATA card while my mobo has 4 USB ports and 4  
more stubbed out on the mobo as headers.  I got a pair of these:


  http://cwc-group.com/duusbfeto2xi.html

to stub out four more USB connections to handle the backup set. Very  
handy since my case had two more expansion slot openings than my new  
mobo had.


I could mirror my main backup drives over e.SATA, but in this case  
this machine is handling backup for several servers (via rsnapshot),  
so while I could just partition off enough space to do a 3- (or 4!)- 
way mirror of my main drives, the versioning that rsnapshot gives me  
is worth the trade-off.   The e.SATA ports are marked for holding my  
mythbackend data.


It's also worth noting that ZFS does all of this automatically  
already, including the snapshots (instantly), so I'll probably try  
moving the whole backup system to ZFS when I get another machine for  
a Nexenta (Ubuntu on an OpenSolaris kernel) box.


-Bill

-
Bill McGonigle, Owner   Work: 603.448.4440
BFC Computing, LLC  Home: 603.448.1668
[EMAIL PROTECTED]   Cell: 603.252.2606
http://www.bfccomputing.com/Page: 603.442.1833
New Blog: http://blog.bfccomputing.com/
VCard: http://bfccomputing.com/vcard/bill.vcf

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: auto-rebuild RAID mirrors

2007-04-19 Thread Bill McGonigle

On Apr 18, 2007, at 11:12, Derek Atkins wrote:


I used to use the attached script.  Note that it USED to work,
but I haven't tested it on recent systems.  I originally wrote
it on RHL9, and I think I updated it for FC1, but I dont think
I've tested it more recently than that, so YMMV.


Cool, looks like it might need updating for mdadm.conf rather than  
raidtab but it should be a good start.  Ideally neither would be  
required but I haven't yet figured out the part about reading the  
RAID header from member disks for autodiscovery.  I'll post back here  
when I get something working.



This script assumes RAID1 and matching drives (not raid 10).  It also
assumes IDE, not USB or SATA.


Should be easy enough to handle the virtual SCSI stuff anyway (my USB  
drives pretend to be SCSI).  For RAID-10, that's RAID-0 over RAID-1  
members, so just getting the RAID-1 drives online is all I really  
need, the RAID-0 aspect should be running already.  I'll make sure it  
ignores RAID-0 for the issue of reconstruction, as that's largely  
meaningless.  I've been sufficiently warned away from RAID-5 not to  
need it personally.



But hopefully it'll at least give you some clues if not a working
rebuild script..

It's in PERL.


Fantastic.  I'll poke around CPAN and see if anybody has released  
modules for any of the above tasks.


-Bill

-
Bill McGonigle, Owner   Work: 603.448.4440
BFC Computing, LLC  Home: 603.448.1668
[EMAIL PROTECTED]   Cell: 603.252.2606
http://www.bfccomputing.com/Page: 603.442.1833
New Blog: http://blog.bfccomputing.com/
VCard: http://bfccomputing.com/vcard/bill.vcf

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: auto-rebuild RAID mirrors

2007-04-18 Thread Derek Atkins
I used to use the attached script.  Note that it USED to work,
but I haven't tested it on recent systems.  I originally wrote
it on RHL9, and I think I updated it for FC1, but I dont think
I've tested it more recently than that, so YMMV.

This script assumes RAID1 and matching drives (not raid 10).  It also
assumes IDE, not USB or SATA.

But hopefully it'll at least give you some clues if not a working
rebuild script..

It's in PERL.

-derek

Bill McGonigle <[EMAIL PROTECTED]> writes:

> My Google-fu may be weak, but I'm not finding much on automatically  
> rebuilding RAID arrays under linux.
>
> Here's the scenario:  I have a RAID-10 stack of disks I use for  
> backup.  All are USB, two are fixed, two are in slide-out trays.  I  
> have two sets of the ones in the slide out trays.  To do a backup, I  
> fail out the removable disks, turn off the cases, pull the drives,  
> drive them over to the bank, and swap the set.  I bring the other two  
> back, swap them in, turn on the cases, and then do:
>
>   cat /proc/mdstat
>
> to figure out the RAID devices again,
>
>   dmesg|tail
>
> to see which drive letters the disks got assigned, then, e.g.:
>
>   mdadm --manage --add /dev/md7 /dev/sdh1
>
> for each drive and then they're synced with the current mirror and  
> life goes on happily.
>
> But I'm Lazy.  The drives were previously part of the arrays, so I  
> can query the UUID info on each array and each drive with an 'fd'  
> type by parsing /proc entries and using:
>
>   mdadm --examine
>
> I'm pretty confident that I can write a hotplug script (or something)  
> to watch for new devices and do The Right Thing, at least with RAID-1  
> devices (I haven't thought enough about other RAID types).
>
> So, gentle reader, if it's so darn easy, why:
> * already in the kernel or hotplug scripts
> * isn't it automatic
>
> I'm just having trouble with the premise that nobody's tried this  
> yet, as it doesn't seem all that awful hard and it does seem awful  
> useful.  So, usually that means I'm missing something (often obvious)  
> and that it's a Bad Idea.  But I haven't figure out yet what that  
> might be.  Criticisms required.
>
> Sorry for the on-topic post. ;)
>
> -Bill



raid-rebuild.pl
Description: Raid Rebuild Script

-- 
   Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
   Member, MIT Student Information Processing Board  (SIPB)
   URL: http://web.mit.edu/warlord/PP-ASEL-IA N1NWH
   [EMAIL PROTECTED]PGP key available
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: auto-rebuild RAID mirrors

2007-04-18 Thread Thomas Charron

On 4/18/07, Thomas Charron <[EMAIL PROTECTED]> wrote:

On 4/17/07, Bill McGonigle <[EMAIL PROTECTED]> wrote:
> I'm just having trouble with the premise that nobody's tried this
> yet, as it doesn't seem all that awful hard and it does seem awful
> useful.  So, usually that means I'm missing something (often obvious)
> and that it's a Bad Idea.  But I haven't figure out yet what that
> might be.  Criticisms required.
> Sorry for the on-topic post. ;)
 I recall doing some poking around to something simular to this, and
the best I recall is SUPPORT_RECONSTRUCTION being built into the
kernel.  This was SEVERAL years ago, I have no idea what the current
state of this kernel capability is.


 Hrm, apperently, this is all gone in 2.6, sorry for pointing the
wrong way.  The entire md.c was split up into it's own directory.  I
wonder if there is a way to permanently attach a given drive to a RAID
array, so it remains there if it's present or not.  You're right, I'm
suprised there isn't more on this online.  It's a GREAT idea to mirror
a local drive over an external mass storage drive.

--
-- Thomas
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: auto-rebuild RAID mirrors

2007-04-18 Thread Thomas Charron

On 4/17/07, Bill McGonigle <[EMAIL PROTECTED]> wrote:

I'm just having trouble with the premise that nobody's tried this
yet, as it doesn't seem all that awful hard and it does seem awful
useful.  So, usually that means I'm missing something (often obvious)
and that it's a Bad Idea.  But I haven't figure out yet what that
might be.  Criticisms required.
Sorry for the on-topic post. ;)


 I recall doing some poking around to something simular to this, and
the best I recall is SUPPORT_RECONSTRUCTION being built into the
kernel.  This was SEVERAL years ago, I have no idea what the current
state of this kernel capability is.

 Basically, what this allows is for given drives to be defined, and
allows the system to automatically reconstruct the array.  The purpose
for this is to allow for spare drive situations, where the spare drive
would be used to recreate the RAID array after failure, but I suppose
could also be configured in such of a way to automatically reconstruct
the array onto an existing disk, but it's a guess.

--
-- Thomas
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


auto-rebuild RAID mirrors

2007-04-17 Thread Bill McGonigle
My Google-fu may be weak, but I'm not finding much on automatically  
rebuilding RAID arrays under linux.


Here's the scenario:  I have a RAID-10 stack of disks I use for  
backup.  All are USB, two are fixed, two are in slide-out trays.  I  
have two sets of the ones in the slide out trays.  To do a backup, I  
fail out the removable disks, turn off the cases, pull the drives,  
drive them over to the bank, and swap the set.  I bring the other two  
back, swap them in, turn on the cases, and then do:


  cat /proc/mdstat

to figure out the RAID devices again,

  dmesg|tail

to see which drive letters the disks got assigned, then, e.g.:

  mdadm --manage --add /dev/md7 /dev/sdh1

for each drive and then they're synced with the current mirror and  
life goes on happily.


But I'm Lazy.  The drives were previously part of the arrays, so I  
can query the UUID info on each array and each drive with an 'fd'  
type by parsing /proc entries and using:


  mdadm --examine

I'm pretty confident that I can write a hotplug script (or something)  
to watch for new devices and do The Right Thing, at least with RAID-1  
devices (I haven't thought enough about other RAID types).


So, gentle reader, if it's so darn easy, why:
* already in the kernel or hotplug scripts
* isn't it automatic

I'm just having trouble with the premise that nobody's tried this  
yet, as it doesn't seem all that awful hard and it does seem awful  
useful.  So, usually that means I'm missing something (often obvious)  
and that it's a Bad Idea.  But I haven't figure out yet what that  
might be.  Criticisms required.


Sorry for the on-topic post. ;)

-Bill

-
Bill McGonigle, Owner   Work: 603.448.4440
BFC Computing, LLC  Home: 603.448.1668
[EMAIL PROTECTED]   Cell: 603.252.2606
http://www.bfccomputing.com/Page: 603.442.1833
New Blog: http://blog.bfccomputing.com/
VCard: http://bfccomputing.com/vcard/bill.vcf

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Mirrors...

2004-03-26 Thread Travis Roy
WOPS!

That was only suppose to go to Ben

SORRY! :)

Travis Roy wrote:

[EMAIL PROTECTED] wrote:

On Fri, 26 Mar 2004, at 11:30am, [EMAIL PROTECTED] wrote:

If you have anything you would like me to add please let me know and 
if I
have room I'll put it up.


  How about a copy of the Win32 source?  ;-)

How about how to turn a bunch of XBoxes into a render farm..

WTF was that argument about... I only heard the Mike side.
___
gnhlug-discuss mailing list
[EMAIL PROTECTED]
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss
___
gnhlug-discuss mailing list
[EMAIL PROTECTED]
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Mirrors...

2004-03-26 Thread Travis Roy
[EMAIL PROTECTED] wrote:

On Fri, 26 Mar 2004, at 11:30am, [EMAIL PROTECTED] wrote:

If you have anything you would like me to add please let me know and if I
have room I'll put it up.


  How about a copy of the Win32 source?  ;-)

How about how to turn a bunch of XBoxes into a render farm..

WTF was that argument about... I only heard the Mike side.
___
gnhlug-discuss mailing list
[EMAIL PROTECTED]
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Mirrors...

2004-03-26 Thread bscott
On Fri, 26 Mar 2004, at 11:30am, [EMAIL PROTECTED] wrote:
> If you have anything you would like me to add please let me know and if I
> have room I'll put it up.

  How about a copy of the Win32 source?  ;-)

-- 
Ben Scott <[EMAIL PROTECTED]>
| The opinions expressed in this message are those of the author and do  |
| not represent the views or policy of any other person or organization. |
| All information is provided without warranty of any kind.  |

___
gnhlug-discuss mailing list
[EMAIL PROTECTED]
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Mirrors...

2004-03-26 Thread Travis Roy
I've been mirroring some stuff on my server that I colo with 
Colospace.com (my new employer). I figured I would share. :)

If you have anything you would like me to add please let me know and if 
I have room I'll put it up.

http://scootz.net/~mirrors/

___
gnhlug-discuss mailing list
[EMAIL PROTECTED]
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss