Re: [SLUG] Debian Linux Release.
Debian Linux Release. This sounds like a good talk for Friday week? http://kmuto.jp/debian/hcl/index.rhtmlx is the only Debian page I use on Gentoo systems.

Brett Coady

From: slug-requ...@slug.org.au
To: slug@slug.org.au
Sent: Monday, 19 August 2013 12:00 PM
Subject: slug Digest, Vol 91, Issue 4

Today's Topics:

1. Re: Happy 20th birthday Debian! (Martin Visser)

The announcement posts are always nostalgic (from https://groups.google.com/forum/#!msg/comp.os.linux.development/Md3Modzg5TU/xty88y5OLaMJ):

Fellow Linuxers,

This is just to announce the imminent completion of a brand-new Linux release, which I'm calling the Debian Linux Release. This is a release that I have put together basically from scratch; in other words, I didn't simply make some changes to SLS and call it a new release. I was inspired to put together this release after running SLS and generally being dissatisfied with much of it, and after much altering of SLS I decided that it would be easier to start from scratch. The base system is now virtually complete (though I'm still looking around to make sure that I grabbed the most recent sources for everything), and I'd like to get some feedback before I add the fancy stuff.

Please note that this release is not yet completed and may not be for several more weeks; however, I thought I'd post now to perhaps draw a few people out of the woodwork. Specifically, I'm looking for:

1) Someone who will eventually be willing to allow me to upload the release to their anonymous ftp-site. Please contact me. Be warned that it will be rather large :)

2) Comments, suggestions, advice, etc. from the Linux community. This is your chance to suggest specific packages, series, or anything you'd like to see part of the final release. Don't assume that because a package is in SLS it will necessarily be included in the Debian release! Things like ls and cat are a given, but if there's anything in SLS that you couldn't live without, please let me know!

I'd also like suggestions for specific features for the release. For example, a friend of mine here suggested that undesired packages should be selected BEFORE the installation procedure begins, so the installer doesn't have to babysit the installation. Suggestions along that line are also welcomed.

What will make this release better than SLS? This:

1) Debian will be sleeker and slimmer. No more multiple binaries and manpages.

2) Debian will contain the most up-to-date of everything. The system will be easy to keep up-to-date with an 'upgrading' script in the base system which will allow complete integration of upgrade packages.

3) Debian will contain an installation procedure that doesn't need to be babysat; simply install the basedisk, copy the distribution disks to the harddrive, answer some questions about what packages you want or don't want installed, and let the machine install the release while you do more interesting things.

4) Debian will contain a system setup procedure that will attempt to set up and configure everything from fstab to Xconfig.

5) Debian will contain a menu system that WORKS: a menu-driven package installation and upgrading utility, menu-driven system setup, a menu-driven help system, and menu-driven system administration.

6) Debian will make Linux easier for users who don't have access to the Internet. Currently, such users are stuck with whatever comes with SLS. Non-Internet users will have the option of receiving periodic upgrade packages to apply to their system. They will also have the option of selecting from a huge library of additional packages that will not be included in the base system. This library will contain packages like the S3 X-server, nethack and Seyon; basically, packages that you and I can ftp but non-netters cannot access.

7) Debian will be extensively documented (more than just a few READMEs).

8) As I put together Debian, I am keeping a meticulous record of where I got
Re: [SLUG] USB wifi N adapter? With benchmarks?
I cannot speak for USB N wifi, but I do run an ath9k PCI card as a hostap server at N speeds, although not 5 GHz. It runs fine as WPA2 CCMP, and it is gaining more features as the driver matures. (I have run this setup for 2 years now.)

I also have an Intel 4965AGN mini-PCI card in a netbook that only works up to 150 Mbps (be careful with this: some cards only do 150, not 300). It has worked well on 5 GHz, although I cannot remember the exact speed, sorry.

On a Ralink RT2860 I have seen 90 Mbps in iperf through concrete to the ath9k 2.4 GHz access point mentioned above, using Ralink's own proprietary drivers. The last time I tried the rt2x00 drivers, or the ones in Testing, I had issues.

A good starting point might be: http://wireless.kernel.org/en/users/Drivers

Good luck,
Brett
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
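For anyone wanting to reproduce an ath9k hostap setup like the one described above, a minimal hostapd.conf sketch. The interface name, SSID, channel and passphrase are placeholders, not values from the thread; it assumes 2.4 GHz 802.11n with WPA2/CCMP:

```
# Minimal 2.4 GHz 802.11n AP with WPA2-PSK/CCMP (values are placeholders)
interface=wlan0
driver=nl80211
ssid=example-ap
hw_mode=g
channel=6
ieee80211n=1
wmm_enabled=1
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=change-this-passphrase
```

wmm_enabled=1 matters here: without WMM, clients typically fall back to legacy (non-N) rates.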
Re: [SLUG] Serial write problem
On Saturday 22 May 2010 20:46:49 Jim Donovan wrote:

> I have been working with a single-board computer (TS-7250, using the built-in Linux) which, about three times per second, sends 8-byte messages out through COM2 to another device. Very occasionally (it can go 20 hours without failing) a message doesn't all get transmitted: only 7 of the eight bytes are sent. On these occasions, the status returned by write(2) is "Resource temporarily unavailable". It seems reasonable to try another write(2) to transmit the eighth byte; however, it crashes without returning. We tried with COM1 and the same thing happened. This is illogical - we are not using handshaking, and the UART has no way of knowing what is going on at the other end of the line.

This may be quite logical if the UART's FIFO or buffer gets flooded and it is returning a message back to the CPU. You don't tell us what baud rate you're running at, or even whether there is anything at the other end.

> I have dumped termios and the control registers immediately before the crash; no corruption or other abnormality is evident. Before I try a different Linux (a cut-down Debian Potato is available), does anyone have any simpler suggestions?

Have you tried changing baud rates?

Regards,
Brett Coady
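For what it's worth, the usual defensive pattern for a partial or "Resource temporarily unavailable" (EAGAIN) write(2) is a retry loop that waits for the descriptor to become writable again, rather than a bare second write. A minimal sketch, not the TS-7250 code itself; `write_all` is a hypothetical helper, and it assumes the port fd may be in non-blocking mode:

```c
#include <errno.h>
#include <poll.h>
#include <stddef.h>
#include <unistd.h>

/* write_all: keep writing until all n bytes are out.
 * On EAGAIN/EWOULDBLOCK (the "Resource temporarily unavailable" above)
 * it polls until the fd is writable again instead of blindly re-issuing
 * the write. Returns n on success, -1 on a real error. */
ssize_t write_all(int fd, const void *buf, size_t n)
{
    const char *p = buf;
    size_t left = n;

    while (left > 0) {
        ssize_t w = write(fd, p, left);
        if (w > 0) {                  /* partial write: advance and retry */
            p += w;
            left -= (size_t)w;
        } else if (w < 0 && (errno == EAGAIN || errno == EWOULDBLOCK ||
                             errno == EINTR)) {
            struct pollfd pfd = { .fd = fd, .events = POLLOUT };
            poll(&pfd, 1, 1000);      /* wait up to 1 s for room in the FIFO */
        } else {
            return -1;                /* genuine error: let the caller see it */
        }
    }
    return (ssize_t)n;
}
```

If the second write really crashes inside the driver rather than returning an error, that points at a kernel/driver bug, and no userspace loop will fix it; this pattern only handles the documented EAGAIN case.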
Re: [SLUG] RAID and LVM
From: Tony Sceats tony.sce...@gmail.com
To: slug@slug.org.au

> > Slower, though ... is a bit of a strange claim. Not because it is false, but because the answer is complex: you can, for example, double read speed and halve write speed, using a two-disk RAID 1 array ... in the ideal case.
>
> I must say I'm curious about this, because I have always assumed that for RAID 1 the write speed would be roughly the same as a single disk, not halved. My reasoning is that both writes occur in parallel, as with the reads; the difference, of course, is that the two reads in parallel each transfer half the data, while the two writes each transfer all the data. Sure, you may have a little overhead - issuing two IO instructions instead of one, or, where both disks share the same bus (not the ideal setup), contention on that bus - but halved? Is it really the case? If this is true, I guess the reason would be that the same data travels over the same bus twice before the operation can be said to be complete, therefore halving your write speed. But then this holds true for the read as well: despite issuing an instruction to two different disks, each with half the data requested, you will meet the same contention, and the data will reach you at the same speed as one disk. So, if this is right, then RAID 1 compared to a single disk would be something like:
>
> 1. 2 disks on 2 buses = (approx) half read time, same write time
> 2. 2 disks on 1 bus = (approx) same read time, double write time
>
> I honestly don't know if this is the case or not - I've certainly never measured it, and it may be implementation-specific - but if not, I'd really like to be shown where this is wrong.

I am inclined to think, for RAID 1:

1. 2 disks on 2 buses = (approx) same read time, same write time
2. 2 disks on 1 bus = (approx) double read time, double write time

and for RAID 0:

1. 2 disks on 2 buses = (approx) half read time, half write time
2. 2 disks on 1 bus = slightly better than same read time and write time

The reason I say the above is that I saw a benchmark for one of those SIL680 PCI cards (dual IDE channel): most of the RAID 0 gain came from having two individual drives on individual IDE buses. They also put 4 IDE drives on 2 IDE buses and got more gain, but not as much as 2 drives on their own buses - all compared to 1 drive on one bus, of course.

I use a kernel RAID setup with 2 disks (Samsung 500GB): RAID 1 for /boot and RAID 0 for /, with a dd backup to another drive every other week. This is just a desktop, nothing too important. RAID 5 seems all the go from what I have read, but I do not have the setup or time to look into it.

My RAID 1 /boot, /dev/md1:
 Timing cached reads: 7612 MB in 2.00 seconds = 3810.62 MB/sec
 Timing buffered disk reads: 244 MB in 3.01 seconds = 81.09 MB/sec

One drive from the RAID 1 array above, /dev/sda1:
 Timing cached reads: 7490 MB in 2.00 seconds = 3749.14 MB/sec
 Timing buffered disk reads: 248 MB in 3.02 seconds = 82.01 MB/sec

My RAID 0 /, /dev/md3:
 Timing cached reads: 7770 MB in 2.00 seconds = 3889.21 MB/sec
 Timing buffered disk reads: 486 MB in 3.00 seconds = 161.79 MB/sec

(Some guy had a new WD drive and got 100 MB/sec from a single drive, so expect 200 MB/sec from 2 (SATA2) ... I haven't seen a SATA3 drive yet.)

One drive from the RAID 0 md3 array above, /dev/sda3:
 Timing cached reads: 7612 MB in 2.00 seconds = 3810.57 MB/sec
 Timing buffered disk reads: 256 MB in 3.01 seconds = 84.99 MB/sec

These read times were all taken with: hdparm -tT /dev/(device)

Anyone know a good non-destructive write test for benchmarking HDDs?

Hope this helps,
Brett
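On the non-destructive write test question: one rough approach (my suggestion, not something from the thread) is to write a scratch file through the filesystem with dd and fdatasync, so no existing data on the device is touched. Note this measures filesystem write speed, not raw device speed the way hdparm -tT measures raw reads:

```shell
#!/bin/sh
# Non-destructive write benchmark sketch: writes a scratch FILE on the
# filesystem under test (never the raw device), then cleans up.
# The path below is a placeholder; point it at the array you want to test.
FILE=/tmp/writebench.$$

# conv=fdatasync forces the data to disk before dd reports its rate,
# so the figure is not just the page cache.
dd if=/dev/zero of="$FILE" bs=1M count=64 conv=fdatasync

rm -f "$FILE"
```

Run it a few times and ignore the first result, which may be skewed by cache warm-up and allocator behaviour.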
Re: [SLUG] RAID and LVM
----- Original Message -----
From: Dave Kempe d...@sol1.com.au
To: Brett Coady bc196...@yahoo.com.au
Cc: slug@slug.org.au
Sent: Sat, 20 February, 2010 8:10:00 AM
Subject: Re: [SLUG] RAID and LVM

> Brett Coady wrote:
> > These read times were all taken with: hdparm -tT /dev/(device)
> > Anyone know a good non-destructive write test for benchmarking HDDs?
>
> Bonnie++ is what we use. You can use IOzone if you like complex charts. hdparm isn't great for this sort of thing.

The trouble with Bonnie++ and IOzone is that they test the filesystem; for my example I was looking at raw disk speed for the RAID results.

> Also, back to one of the other suggestions - don't use RAID 5. If you only have 3 disks, use RAID 1 and 1 hot spare. They don't make disks like they used to, and RAID 5 and MTBF stats mean that the chance of having a failure during a rebuild is too high. RAID 5 is dead to me. RAID 10 if you have enough disks, or RAID 1 when you don't. And use LVM - it makes growing/shrinking/chopping much easier. You can't shrink XFS; be careful shrinking ext3. http://hardware.slashdot.org/hardware/08/10/21/2126252.shtml And don't back up to tape. Buy lots of hard drives, and expect to buy more of them.
> Dave

That's handy to know about RAID 5, thanks. There are 2 other things I can suggest:

1: I use those cheap caddies you get from the markets, and noticed on installing 2 hard disks in 2 of them that they vibrated, as they are mounted parallel (well, they are cheap and I only use them for backup). When installing my 2 disks for RAID without caddies, I staggered them so the centre axes are offset, trying to break up some of the vibration. If you are really keen you might mount one of the drives upside down on the same axis, counteracting the rotational vibration. (Going a bit too far? See below.) There was a RAID array with 20+ drives stacked one on top of another, and when turned on the system tipped over due to the rotational inertia. If you use Gentoo this is known as Larry the Cow Tipping, and like normal cow tipping it is frowned upon. (Yes, I know some systems stagger drive spin-up in the BIOS; Larry just tips more slowly.)

2: I had an issue when setting up RAID: I always lost 1 or 2 MB from one of the HDDs. This concerned me, and every time I repartitioned and rebooted I lost it again; the second drive was fine. After giving up and leaving the last 2 MB on both drives untouched, to keep them the same size, I was happy (sort of). I have since learnt that some motherboards back up the BIOS to the end of the HDD and have a setting to do so. My motherboard has no option for this, and has dual BIOS anyway, but it still appears to do it!

Regards,
Brett
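For reference, the "RAID 1 plus one hot spare" layout suggested above maps directly onto mdadm's --spare-devices option. The device names below are hypothetical placeholders, and since creating an array needs root and real disks, this sketch only prints the command rather than running it:

```shell
#!/bin/sh
# RAID 1 across two partitions with a third as hot spare (placeholder names).
# mdadm automatically rebuilds onto the spare when a mirror member fails.
MDADM_CMD='mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    --spare-devices=1 /dev/sda1 /dev/sdb1 /dev/sdc1'
echo "$MDADM_CMD"
```

Watch /proc/mdstat afterwards to see the initial sync and, later, any rebuild onto the spare.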
Re: [SLUG] RAID and LVM
From: David Kempe d...@sol1.com.au
To: Brett Coady bc196...@yahoo.com.au
Cc: slug@slug.org.au
Sent: Sat, 20 February, 2010 12:20:43 PM
Subject: Re: [SLUG] RAID and LVM

> On 20/02/2010, at 11:53 AM, Brett Coady bc196...@yahoo.com.au wrote:
> > The trouble with Bonnie++ and IOzone is that they test the filesystem; for my example I was looking at raw disk speed for the RAID results.
>
> Well, if you keep the kernel and filesystem identical between runs, you end up benchmarking the hardware. Still useful for the purpose of comparison.

That is true. However, the original poster was asking about RAID and partitioning - he hadn't mentioned filesystems - and that is why I did the benchmarking the way I did: to try to shed some light on RAID gains/losses and speed.

> > I have since learnt that some motherboards back up the BIOS to the end of the HDD and have a setting to do so. My motherboard has no option for this, and has dual BIOS anyway, but it still appears to do it!
>
> The difference in drive size is not the dual BIOS at all. And leaving a buffer at the end of drives when using software RAID is a good idea, because differences in drive geometry will always happen.

Are you sure about this? I spent quite a bit of time trying to work out where my missing 1 MB had gone. On initial boot both drives reported exactly the same size. To test the problem I actually swapped the drives over and, guess what, I lost 1 MB from the opposite drive! (Yes, I checked the electronic serial numbers.)

Ahhh - upon further investigation I find something interesting, and I am not alone: http://en.wikipedia.org/wiki/Host_protected_area An application called Sleuth Kit tells me more. This is handy to know for anyone setting up RAID; they even give examples of how to remove a stubborn HPA.

Regards,
Brett
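For anyone hitting the same missing-megabyte mystery: a Host Protected Area hides the gap between a drive's native maximum sector and its visible maximum sector (both of which tools such as `hdparm -N` report). A quick back-of-envelope sketch, with made-up sector counts, of how a hidden area of exactly 1 MiB shows up:

```python
SECTOR_BYTES = 512  # classic 512-byte logical sectors

def hpa_size_bytes(native_max_sectors: int, visible_max_sectors: int) -> int:
    """Bytes hidden by a Host Protected Area, given the native and
    visible maximum sector counts of the drive."""
    return (native_max_sectors - visible_max_sectors) * SECTOR_BYTES

# Hypothetical numbers: a drive whose visible size is 2048 sectors short
# of its native size is hiding 2048 * 512 bytes = exactly 1 MiB.
print(hpa_size_bytes(976_773_168, 976_771_120))
```

If the two sector counts are equal, there is no HPA and the function returns 0; any positive result is space your partitioner will never see.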