RE: Three Terabyte
On 27-Mar-2003, Brent Wiese wrote message "RE: Three Terabyte"

> Normally, I'd also agree with this. However, a friend of mine built a NAS
> using the 3ware card and 11 200gb WD drives in a RAID5 config and can
> sustain 85mbit/s *write* (the test was several hours long). I suspect it
> would do even more with a gig-E card.
>
> Of course, that test would be fairly meaningless when you're doing
> something like a mail spool, but it proves the application should drive
> the method.

Yes, true. If capacity is more important, then obviously raid10 isn't the
best choice. Let me know if that NAS needs a new home. ;)

--
Andy Harrison
[EMAIL PROTECTED]
ICQ: 123472  AIM/Y!: AHinMaine
homepage: http://www.nachoz.com

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
RE: Three Terabyte
> On 27-Mar-2003, Francisco J Reyes wrote message "Re: Three Terabyte"
>
> > Highly recommend you go with Raid 10 and not 5.
>
> I 2nd that. Raid 5 offers very very POOR performance. While it sucks up
> the most diskspace, Raid 10 is maximum performance and great fault
> tolerance. For an i/o intensive service like a mail server or something,
> raid 5 will eventually cause your server to get crushed over time as the
> number of users increases. Then you're forced to convert to raid 10. We
> learnt this the hard way. ;)

Normally, I'd also agree with this. However, a friend of mine built a NAS
using the 3ware card and 11 200gb WD drives in a RAID5 config and can
sustain 85mbit/s *write* (the test was several hours long). I suspect it
would do even more with a gig-E card.

Of course, that test would be fairly meaningless when you're doing
something like a mail spool, but it proves the application should drive
the method.

Brent
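The capacity side of the trade-off can be sketched with quick arithmetic for the 11 x 200 GB drives mentioned above. (This is an illustrative layout, not from the thread: RAID 10 needs an even member count, so it assumes one of the 11 drives sits out as a spare.)

```shell
#!/bin/sh
# Usable capacity of an 11 x 200 GB array under each RAID level.
# RAID 5 keeps n-1 disks of data (one disk's worth of parity);
# RAID 10 mirrors striped pairs, so only half of the (even number
# of) member disks hold unique data.
disks=11
size_gb=200
raid5=$(( (disks - 1) * size_gb ))    # 10 data disks
raid10=$(( (disks / 2) * size_gb ))   # 10 members mirrored -> 5 usable
echo "RAID 5:  ${raid5} GB usable"
echo "RAID 10: ${raid10} GB usable"
```

So for the same spindles, RAID 10 halves the usable space, which is the "capacity is more important" point above.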
Re: Three Terabyte
On 27-Mar-2003, Francisco J Reyes wrote message "Re: Three Terabyte"

> Highly recommend you go with Raid 10 and not 5.

I 2nd that. Raid 5 offers very very POOR performance. While it sucks up
the most diskspace, Raid 10 is maximum performance and great fault
tolerance. For an i/o intensive service like a mail server or something,
raid 5 will eventually cause your server to get crushed over time as the
number of users increases. Then you're forced to convert to raid 10. We
learnt this the hard way. ;)

--
Andy Harrison
[EMAIL PROTECTED]
ICQ: 123472  AIM/Y!: AHinMaine
homepage: http://www.nachoz.com
Re: Three Terabyte
On Thu, 20 Mar 2003, Maarten de Vries wrote:

> On Thu, 20 Mar 2003, Dirk-Willem van Gulik wrote:
>
> > Depends on what access patterns you have; is it mostly dormant
> > archiving; or lots of access, concurrent, sequential ? How safe does
> > the data need to be; and against what (hardware failure, accidental
> > rm -rf).
>
> This would be for backup. Data on about 50 webservers would be backed
> up to it on a nightly basis. So performance wouldn't be important.

Highly recommend you go with Raid 10 and not 5.
Re: Three Terabyte
On Friday, 21 March 2003 at 12:57:27 +0100, Alexander Haderer wrote:
> At 10:26 21.03.2003 +1030, Greg 'groggy' Lehey wrote:
>> On Thursday, 20 March 2003 at 13:13:18 +0100, Alexander Haderer wrote:
>>> At 12:53 20.03.2003 +0100, Maarten de Vries wrote:
>>>> This would be for backup. Data on about 50 webservers would be
>>>> backed up to it on a nightly basis. So performance wouldn't be
>>>> important.
>>>
>>> Sure? Consider this:
>>>
>>> a.
>>> Filling 3TB with 1 Mbyte/s lasts more than 800 hours or 33 days.
>>
>> I do a nightly backup to disk. It's compressed (gzip), which is the
>> bottleneck. I get this sort of performance:
>>
>>   dump -2uf - /home | gzip > /dump/wantadilla/2/home.gz
>>   ...
>>   DUMP: DUMP: 1254971 tape blocks
>>   DUMP: finished in 217 seconds, throughput 5783 KBytes/sec
>>   DUMP: level 2 dump on Thu Mar 20 21:01:31 2003
>>
>> You don't normally fill up a backup disk at once, so this would be
>> perfectly adequate. I'd expect a system of the kind that Maarten's
>> talking about to be able to transfer at least 40 MB/s sequential at
>> the disk. That would mean he could back up over 1 TB in an 8-hour
>> period.
>
> Of course you are right. My note a. was meant as a more general hint to
> think about transfer rates when dealing with large files/filesystems.
> Maarten gave no details about how the webservers are connected with the
> backup server. I should have given more details of what I meant: when
> backing up 50 webservers over the network to one backup server, the
> network may become a bottleneck. If you have to use encrypted
> connections (ssh) because the webservers are located elsewhere, you
> need CPU power at the server side for each connection.

Correct.

>>> b.
>>> Using ssh + dump/cpio/tar needs CPU power for encryption, especially
>>> when multiple clients save their data at the same time.
>>
>> You can share the compression across multiple machines. That's what
>> was happening in the example above.
>
> It is a good idea to do compression at the client side.
>
> As I understand your example, /dump/wantadilla/2 is either a local
> dir or connected via NFS. The latter requires a local network if you
> don't want to do NFS mounts across the Internet. Is this right?

Yes. This is just a local network. There's no absolute necessity for
NFS, and I certainly wouldn't do it across the Internet.

Greg
--
See complete headers for address and phone numbers
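The "compress at the client side" idea can be sketched locally. This hedged example uses tar on a throwaway directory instead of dump (which needs root and a real filesystem); all paths and host names are made up for illustration:

```shell
#!/bin/sh
# Client-side compression sketch: build the archive and gzip it where
# the data lives, then ship only the compressed stream.  With 50 web-
# servers, each runs its own gzip instead of piling all 50 compression
# jobs onto the backup host's CPU.
mkdir -p /tmp/demo-src /tmp/demo-dump
echo "site content" > /tmp/demo-src/index.html

# Local stand-in for:  dump -2uf - /home | gzip > .../home.gz
tar cf - -C /tmp demo-src | gzip > /tmp/demo-dump/demo.tar.gz

# On a remote client, the same pipe would end in something like:
#   ... | ssh backuphost 'cat > /dump/www01/home.gz'   (hypothetical)
tar tzf /tmp/demo-dump/demo.tar.gz
```

The ssh leg still costs encryption CPU on both ends, but the compression load is spread across the clients, which is the point being made above.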
Re: Three Terabyte
At 10:26 21.03.2003 +1030, Greg 'groggy' Lehey wrote:
> On Thursday, 20 March 2003 at 13:13:18 +0100, Alexander Haderer wrote:
>> At 12:53 20.03.2003 +0100, Maarten de Vries wrote:
>>> This would be for backup. Data on about 50 webservers would be backed
>>> up to it on a nightly basis. So performance wouldn't be important.
>>
>> Sure? Consider this:
>>
>> a.
>> Filling 3TB with 1 Mbyte/s lasts more than 800 hours or 33 days.
>
> I do a nightly backup to disk. It's compressed (gzip), which is the
> bottleneck. I get this sort of performance:
>
>   dump -2uf - /home | gzip > /dump/wantadilla/2/home.gz
>   ...
>   DUMP: DUMP: 1254971 tape blocks
>   DUMP: finished in 217 seconds, throughput 5783 KBytes/sec
>   DUMP: level 2 dump on Thu Mar 20 21:01:31 2003
>
> You don't normally fill up a backup disk at once, so this would be
> perfectly adequate. I'd expect a system of the kind that Maarten's
> talking about to be able to transfer at least 40 MB/s sequential at
> the disk. That would mean he could back up over 1 TB in an 8-hour
> period.

Of course you are right. My note a. was meant as a more general hint to
think about transfer rates when dealing with large files/filesystems.
Maarten gave no details about how the webservers are connected with the
backup server. I should have given more details of what I meant: when
backing up 50 webservers over the network to one backup server, the
network may become a bottleneck. If you have to use encrypted connections
(ssh) because the webservers are located elsewhere, you need CPU power at
the server side for each connection.

>> b.
>> Using ssh + dump/cpio/tar needs CPU power for encryption, especially
>> when multiple clients save their data at the same time.
>
> You can share the compression across multiple machines. That's what
> was happening in the example above.

It is a good idea to do compression at the client side.

As I understand your example, /dump/wantadilla/2 is either a local dir
or connected via NFS. The latter requires a local network if you don't
want to do NFS mounts across the Internet. Is this right?

with best regards
Alexander
--
Alexander Haderer                    Charite Campus Virchow-Klinikum
Tel.  +49 30 - 450 557 182           Strahlenklinik und Poliklinik
Fax.  +49 30 - 450 557 117           Sekr. Prof. Felix
Email [EMAIL PROTECTED]              Augustenburger Platz 1
www   http://www.charite.de/rv/str/  13353 Berlin - Germany
--
To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-questions" in the body of the message
Re: Three Terabyte
On Thursday, 20 March 2003 at 13:13:18 +0100, Alexander Haderer wrote:
> At 12:53 20.03.2003 +0100, Maarten de Vries wrote:
>> On Thu, 20 Mar 2003, Dirk-Willem van Gulik wrote:
>>
>>> Depends on what access patterns you have; is it mostly dormant
>>> archiving; or lots of access, concurrent, sequential ? How safe does
>>> the data need to be; and against what (hardware failure, accidental
>>> rm -rf).
>>
>> This would be for backup. Data on about 50 webservers would be backed
>> up to it on a nightly basis. So performance wouldn't be important.
>
> Sure? Consider this:
>
> a.
> Filling 3TB with 1 Mbyte/s lasts more than 800 hours or 33 days.

I do a nightly backup to disk. It's compressed (gzip), which is the
bottleneck. I get this sort of performance:

  dump -2uf - /home | gzip > /dump/wantadilla/2/home.gz
  ...
  DUMP: DUMP: 1254971 tape blocks
  DUMP: finished in 217 seconds, throughput 5783 KBytes/sec
  DUMP: level 2 dump on Thu Mar 20 21:01:31 2003

You don't normally fill up a backup disk at once, so this would be
perfectly adequate. I'd expect a system of the kind that Maarten's
talking about to be able to transfer at least 40 MB/s sequential at the
disk. That would mean he could back up over 1 TB in an 8-hour period.

> b.
> Using ssh + dump/cpio/tar needs CPU power for encryption, especially
> when multiple clients save their data at the same time.

You can share the compression across multiple machines. That's what was
happening in the example above.

> c.
> When using FreeBSD 4.X a fsck after a hard reboot will block the
> server. fsck'ing a full 3TB filesystem may need a long time. It's
> better to use several smaller file systems.

You don't have to fsck at boot time, not even in Release 4.

> d.
> Wrong parameters for newfs may slow down large filesystems and waste
> lots of space. Before using large filesystems, read the manpage of
> newfs, especially the topics about options -b -f -i.

Correct. Check the -m option (free space %) as well. There's no reason
to waste 8% of the space.

Greg
--
See complete headers for address and phone numbers
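To make the -b/-f/-i/-m point concrete, here is a hypothetical newfs invocation for a mostly-large-file backup filesystem. The device name and every value are illustrative assumptions, not from the thread; check newfs(8) for your release before running anything like this:

```shell
# Sketch only -- do not run as-is; /dev/da0s1e and all tuning values
# below are hypothetical examples:
#
#   newfs -b 16384 -f 2048 -i 65536 -m 2 /dev/da0s1e
#
# -b / -f  block and fragment size (larger blocks suit big files)
# -i       bytes per inode: allocate fewer inodes when the filesystem
#          will hold a modest number of large dump/archive files
# -m       reserved free space %: the traditional 8% default on a 3 TB
#          filesystem reserves roughly 240 GB
```

The arithmetic behind the -m remark: 8% of 3 TB is about 0.08 * 3000 GB = 240 GB that users can never touch, which is why it is worth revisiting on a filesystem this size.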
Re: Three Terabyte
I have a similar need but I need lots of access and concurrent!

On Thu, 20 Mar 2003, Dirk-Willem van Gulik wrote:

> > Let's say I need 3Tb of cheap storage (preferably IDE disks) and I
> > want it controlled by a FreeBSD system; how (if at all possible)
> > would I set that up in terms of hard- and software?
>
> Depends on what access patterns you have; is it mostly dormant
> archiving; or lots of access, concurrent, sequential ? How safe does
> the data need to be; and against what (hardware failure, accidental
> rm -rf).
>
> But check out the 3ware RAID card; I've had great luck with building
> NFS servers with 8 or 16 disks as fairly dormant/archival style
> storage depots.
>
> Dw
Re: Three Terabyte
On Thursday 20 March 2003 13:13, Alexander Haderer wrote:
> a.
> Filling 3TB with 1 Mbyte/s lasts more than 800 hours or 33 days.
>
> b.
> Using ssh + dump/cpio/tar needs CPU power for encryption, especially
> when multiple clients save their data at the same time.

We're already using a system built on Rsync and Dirvish, which is very
quick. Disk I/O is not likely to be the bottleneck.

> c.
> When using FreeBSD 4.X a fsck after a hard reboot will block the
> server. fsck'ing a full 3TB filesystem may need a long time. It's
> better to use several smaller file systems.

I guess I'd opt for FreeBSD 5.

> d.
> Wrong parameters for newfs may slow down large filesystems and waste
> lots of space. Before using large filesystems, read the manpage of
> newfs, especially the topics about options -b -f -i.

Thanks for that tip.

--
[EMAIL PROTECTED] - http://unsavoury.net/
natural selection has come home
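For reference, the Dirvish-style rotation can be sketched with plain rsync. Host names, paths, and dates are hypothetical; the key mechanism is --link-dest, which hard-links files unchanged since the previous snapshot instead of copying them, so a nightly run costs only the changed data:

```shell
# Hypothetical nightly snapshot of one webserver (names illustrative):
#
#   rsync -a --delete \
#       --link-dest=/backup/www01/2003-03-19 \
#       www01:/usr/local/www/ \
#       /backup/www01/2003-03-20/
#
# -a           preserve permissions, times, symlinks, etc.
# --delete     drop files removed on the client from the new snapshot
# --link-dest  hard-link unchanged files to yesterday's snapshot, so
#              each dated directory looks like a full backup but only
#              new/changed files consume disk space
```

This is why disk I/O stops being the bottleneck: most of each night's "full" backup is just directory entries pointing at blocks already on disk.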
Re: Three Terabyte
At 12:53 20.03.2003 +0100, Maarten de Vries wrote:
> On Thu, 20 Mar 2003, Dirk-Willem van Gulik wrote:
>
>> Depends on what access patterns you have; is it mostly dormant
>> archiving; or lots of access, concurrent, sequential ? How safe does
>> the data need to be; and against what (hardware failure, accidental
>> rm -rf).
>
> This would be for backup. Data on about 50 webservers would be backed
> up to it on a nightly basis. So performance wouldn't be important.

Sure? Consider this:

a.
Filling 3TB with 1 Mbyte/s lasts more than 800 hours or 33 days.

b.
Using ssh + dump/cpio/tar needs CPU power for encryption, especially
when multiple clients save their data at the same time.

c.
When using FreeBSD 4.X a fsck after a hard reboot will block the server.
fsck'ing a full 3TB filesystem may need a long time. It's better to use
several smaller file systems.

d.
Wrong parameters for newfs may slow down large filesystems and waste
lots of space. Before using large filesystems, read the manpage of
newfs, especially the topics about options -b -f -i.

with best regards,
Alexander
--
Alexander Haderer
Charite Berlin - Germany
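Point a. checks out with quick arithmetic (using decimal units, so 3 TB = 3,000,000 MB):

```shell
#!/bin/sh
# Time to fill 3 TB at a sustained 1 MB/s, as in point a. above.
seconds=$((3 * 1000 * 1000))   # 3,000,000 MB at 1 MB/s -> seconds
hours=$((seconds / 3600))
days=$((hours / 24))
echo "${hours} hours (~${days} days)"
```

That works out to roughly 833 hours, or close to 35 days, consistent with the "more than 800 hours or 33 days" figure above, and it scales linearly: at 40 MB/s the same fill takes under a day.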
Re: Three Terabyte
On Thu, 20 Mar 2003, Dirk-Willem van Gulik wrote:

> Depends on what access patterns you have; is it mostly dormant
> archiving; or lots of access, concurrent, sequential ? How safe does
> the data need to be; and against what (hardware failure, accidental
> rm -rf).

This would be for backup. Data on about 50 webservers would be backed up
to it on a nightly basis. So performance wouldn't be important.

> But check out the 3ware RAID card; I've had great luck with building
> NFS servers with 8 or 16 disks as fairly dormant/archival style
> storage depots.

Thanks, I will.

--
Maarten de Vries
http://unsavoury.net/
Re: Three Terabyte
> Let's say I need 3Tb of cheap storage (preferably IDE disks) and I
> want it controlled by a FreeBSD system; how (if at all possible) would
> I set that up in terms of hard- and software?

Depends on what access patterns you have; is it mostly dormant
archiving; or lots of access, concurrent, sequential ? How safe does the
data need to be; and against what (hardware failure, accidental rm -rf).

But check out the 3ware RAID card; I've had great luck with building NFS
servers with 8 or 16 disks as fairly dormant/archival style storage
depots.

Dw