ISA Memory hole at 15-16 MB
Hi! While I was going through the Linux Device Drivers book by Alessandro Rubini, I came to know that at the time of 286 computers, ISA memory was mapped between 15 and 16 MB. Since at that time nobody had more than 1-2 MB of RAM, people had no problem accessing the ISA memory. But now that everybody has around 64 MB of RAM, when we access the memory between 15 and 16 MB (the ISA memory hole), are we referring to physical RAM or to the ISA card's memory? Where can I get more details on this?

Thanks, Regards,
Abhishek Khaitan
RE: FAQ
Can't we use bunzip2 instead of playing with tar? And after bunzip2, try tar -xf kernel-2.2.16.tar?

-----Original Message-----
From: James Manning [SMTP:[EMAIL PROTECTED]]
Sent: Thursday, August 03, 2000 10:35 AM
To: [EMAIL PROTECTED]
Subject: Re: FAQ

[Marc Mutz]
> 2.4. How do I apply the patch to a kernel that I just downloaded from
> ftp.kernel.org?
>
> Put the downloaded kernel in /usr/src. Change to this directory, and move
> any directory called linux to something else. Then type
> tar -Ixvf kernel-2.2.16.tar.bz2, replacing kernel-2.2.16.tar.bz2 with your
> kernel. Then cd to /usr/src/linux, and run patch -p1 < raid-2.2.16-A0.
> Then compile the kernel as usual.

Your tar is too customized to be in a FAQ: there is no bzip2 standard in GNU tar, so let's be intelligent and avoid the issue by recommending the .gz tarball; -z is standard. Also, none of the tarballs start with "kernel-" but with "linux-", so that needs fixing. I'd also add "/path/to/" before the raid patch in the patch command, since otherwise we'd need to tell them to move the patch over to that directory (pedantic, yes, but still). Oh, and "move any directory called linux to something else" misses the possibility of a symlink, where renaming the symlink would be kind of pointless. Whether tar would just kill the symlink at extract time anyway is worth a check.
--
James Manning [EMAIL PROTECTED]
GPG Key fingerprint = B913 2FBD 14A9 CE18 B2B7 9C8E A0BF B026 EEBB F6E4
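A sketch of the sequence James is recommending, assuming the .gz tarball and GNU tar; the scratch-directory setup at the top only exists to make the example self-contained and reproducible (with a real kernel you would download linux-2.2.16.tar.gz from kernel.org and run the same steps in /usr/src):

```shell
set -e
work=$(mktemp -d)
cd "$work"
# Build a stand-in for the kernel.org tarball so this runs anywhere:
mkdir linux
echo "kernel sources" > linux/Makefile
tar -czf linux-2.2.16.tar.gz linux
rm -rf linux
# The FAQ steps, with portable flags (-z is standard in GNU tar):
[ -e linux ] && mv linux linux.old     # if linux is a symlink, remove it instead
tar -zxf linux-2.2.16.tar.gz
cd linux
# patch -p1 < /path/to/raid-2.2.16-A0  # apply the RAID patch at this point
marker=$(cat Makefile)
```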
can anyone mail me bonnie?
Hi! Can anyone mail me a tgz of bonnie, or the web site from where it is available?

Thanks, Regards,
Abhishek

-----Original Message-----
From: Corin Hartland-Swann [SMTP:[EMAIL PROTECTED]]
Sent: Thursday, July 27, 2000 7:53 AM
To: Gregory Leblanc
Cc: Holger Kiehl; [EMAIL PROTECTED]
Subject: RE: Question on disk benchmark and fragmentation

Gregory,

On Thu, 27 Jul 2000, Gregory Leblanc wrote:
> > 6) Use tiotest, NOT bonnie! Try multiple threads (I use 1, 2, 4, 8, 16,
> >    32, 64, 128, 256 threads - this is perhaps excessive!)
>
> What size datasets are you using?

I use 1G if I'm feeling like making absolutely sure it's fair, or else something like 256M if I'm trying to get it done quickly.

> Bonnie++ is still a good benchmark, although it stresses things differently.

I haven't used bonnie++ actually...

> The maximum number of threads that you should need to (or probably even
> want to) run is between 2x and 3x the number of disks that you have
> installed. That should ensure that every drive is pulling 1 piece of data,
> and that there is another thread waiting for data while that one is being
> retrieved.

I believe in seeing how the performance breaks down under extreme stress. With a threaded database like MySQL (one of the primary uses for our RAID arrays) you could quite easily have numerous threads all trying to read and write from the array simultaneously. When I was comparing the performance of RAID 0+1 to RAID 5, there was a big difference in how quickly (as a function of the number of threads) they ground to a halt.
Here's an example:

./tiobench.pl --size 256 --dir /mnt/md3/ --block 4096 --threads 1 --threads 2 --threads 4 --threads 16 --threads 32 --threads 64 --threads 128 --threads 256

Linux Kernel 2.2.14, RAID 0+1

Dir    Size  BlkSz  Thr#  Read    (CPU%)   Write   (CPU%)   Seeks   (CPU%)
-----  ----  -----  ----  ---------------  ---------------  --------------
/mnt/   256   4096     1  46.3288   25.6%  40.3105   47.2%  165.171  0.66%
/mnt/   256   4096     2  35.3465   21.9%  39.5187   45.9%  193.171  0.67%
/mnt/   256   4096     4  29.1810   18.0%  38.7580   45.0%  214.686  0.89%
/mnt/   256   4096    16  26.9373   17.3%  36.5620   42.2%  220.682  0.93%
/mnt/   256   4096    32  21.4527   24.1%  34.7506   40.0%  216.958  0.97%
/mnt/   256   4096    64  12.7891   47.4%  31.7158   36.1%  202.744  1.05%
/mnt/   256   4096   128  8.65209   80.6%  27.8459   31.2%  200.230  3.27%
/mnt/   256   4096   256  5.41081   131.%  24.6386   27.3%  193.811  16.1%

Linux Kernel 2.2.14 with Mika's read-balance patch, RAID 0+1

Dir    Size  BlkSz  Thr#  Read    (CPU%)   Write   (CPU%)   Seeks   (CPU%)
-----  ----  -----  ----  ---------------  ---------------  --------------
/mnt/   256   4096     1  46.6853   24.6%  38.2826   44.2%  176.209  0.39%
/mnt/   256   4096     2  59.6558   40.3%  38.7603   43.6%  221.300  0.69%
/mnt/   256   4096     4  60.6616   43.6%  38.2311   42.9%  263.113  0.89%
/mnt/   256   4096    16  51.5140   37.6%  37.1443   42.1%  302.154  1.05%
/mnt/   256   4096    32  47.0307   34.9%  35.1884   40.1%  329.017  1.33%
/mnt/   256   4096    64  42.1452   33.2%  33.0139   37.3%  341.591  1.41%
/mnt/   256   4096   128  27.4339   36.0%  30.8700   34.3%  332.434  1.53%
/mnt/   256   4096   256  15.5834   76.4%  28.2604   31.1%  321.990  13.2%

Linux Kernel 2.2.14 with Mika's read-balance patch, RAID 5

Dir    Size  BlkSz  Thr#  Read    (CPU%)   Write   (CPU%)   Seeks   (CPU%)
-----  ----  -----  ----  ---------------  ---------------  --------------
/mnt/   256   4096     1  67.5911   38.8%  24.3309   34.9%  167.331  0.41%
/mnt/   256   4096     2  60.4156   49.0%  24.5966   37.1%  208.991  0.67%
/mnt/   256   4096     4  46.5667   38.1%  24.4007   37.2%  247.676  0.90%
/mnt/   256   4096    16  27.7189   32.6%  24.3155   37.5%  282.041  1.12%
/mnt/   256   4096    32  14.4717   45.2%  23.9831   36.8%  301.291  1.32%
/mnt/   256   4096    64  8.39616   82.4%  22.5777   34.1%  299.902  1.67%
/mnt/   256   4096   128  6.77856   103.%  20.8036   30.6%  276.423  16.7%
/mnt/   256   4096   256  6.14939   115.%  19.0964   27.6%  266.183  35.5%

This shows the quite interesting result that (for
reads) RAID-5 starts off with 1 thread out-performing RAID-0+1 (68 vs 47), drops to the same level with 2 threads (60 vs 60), and rapidly decreases thereafter, e.g. at 64 threads it's 8 vs 42.

> Of course, because of that slight hiccup, RAID-0+1 arrays will fail
> (recoverably, but still bringing the machine down) with one faulty disk.
> So we had to go with RAID-5 anyway...

Heh, I'm using it because it provides redundancy; the added speed from Mika's RAID 1 read-balancing patch is just a perk...

HTH,

Yeah, maybe I was being slightly unrealistic. But the performance is still mighty nice...

Regards,
Corin

/+------------------------+------------------------------\
| Corin Hartland-Swann   | Direct: +44 (0) 20 7544 4676 |
| Commerce Internet Ltd  | Mobile: +44 (0) 79 5854 0027 |
| 22 Cavendish Buildings |    Tel: +44 (0) 20 7491 2000 |
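The degradation Corin describes can be pulled straight out of tiobench-style output with a little awk. A sketch, using a two-row excerpt of the RAID-5 numbers above as embedded input (the field positions match the reconstructed table: dir, size, block size, thread count, then read MB/s):

```shell
# Compare read throughput at 1 thread vs 64 threads from tiobench-style rows.
ratio=$(awk '
  $4 == 1  { first = $5 }   # read MB/s with a single thread
  $4 == 64 { last  = $5 }   # read MB/s with 64 threads
  END      { printf "%.2f", first / last }
' <<'EOF'
/mnt/ 256 4096 1  67.5911 38.8% 24.3309 34.9% 167.331 0.41%
/mnt/ 256 4096 64 8.39616 82.4% 22.5777 34.1% 299.902 1.67%
EOF
)
echo "RAID-5 read throughput dropped ${ratio}x from 1 to 64 threads"
```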
RE: Raid
Hi Dhinesh,

In RAID 1, whatever disk you give as "raid-disk 0" is used as the primary from which the second disk, "raid-disk 1", is constructed. So what you should do is remove the failed disk (say /dev/sda1), make the surviving disk "raid-disk 0", add the new disk as "raid-disk 1", and run raidstart /dev/md0:

    device      /dev/sdb1
    raid-disk   0
    device      /dev/sdc1
    raid-disk   1

Reconstruction will begin and /dev/sdc1 will be added into your RAID 1.

Regards, Abhishek

-----Original Message-----
From: Selvarajan, Dhinesh [SMTP:[EMAIL PROTECTED]]
Sent: Monday, July 24, 2000 12:41 PM
To: '[EMAIL PROTECTED]'
Subject: Raid

Hi, we are using Red Hat Linux 6.2 on Intel-based machines. We have two 18 GB hard drives and set up RAID 1 (mirroring) during installation. If I remove one of the disks, the machine boots without any problem. My question is: if one of the hard disks fails, how do I add the new hard disk to the RAID 1 array and reconstruct it? Which configuration files do I have to edit, and which commands do I run, to reconstruct the array onto the new hard disk? The following is the /etc/raidtab file:

raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    chunk-size              64k
    persistent-superblock   1
    # nr-spare-disks        0
    device                  /dev/sda1
    raid-disk               0
    device                  /dev/sdb1
    raid-disk               1

raiddev /dev/md1
    raid-level              1
    nr-raid-disks           2
    chunk-size              64k
    persistent-superblock   1
    # nr-spare-disks        0
    device                  /dev/sda6
    raid-disk               0
    device                  /dev/sdb6
    raid-disk               1

raiddev /dev/md2
    raid-level              1
    nr-raid-disks           2
    chunk-size              64k
    persistent-superblock   1
    # nr-spare-disks        0
    device                  /dev/sda7
    raid-disk               0
    device                  /dev/sdb7
    raid-disk               1

raiddev /dev/md3
    raid-level              1
    nr-raid-disks           2
    chunk-size              64k
    persistent-superblock   1
    # nr-spare-disks        0
    device                  /dev/sda8
    raid-disk               0
    device                  /dev/sdb8
    raid-disk               1

I created the same partitions on the new hard disk, and I used the dd command to copy the files from the root and boot partitions.
dd if=/dev/sda6 of=/dev/sdb6

in order to copy data from the old hard disk to the new hard disk partition /dev/sdb6. After that I executed the command:

mkraid raidtab -f --only-superblock

but when I try to run the command there is no superblock option, and the system is not booting from the new hard disk. I used raidtools-0.90-6. Can you give me a solution for this problem? If you have any query please contact me, my ph: 510-670-1710 ext. 1252.

Thanks,
Dhinesh
Alladvantage.com
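With raidtools-0.90 and persistent superblocks (as in the raidtab above), the reordered stanza Abhishek describes would look something like the sketch below; /dev/sdc1 here stands in for the hypothetical replacement disk, so substitute your real device names:

```
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    chunk-size              64k
    persistent-superblock   1
    device                  /dev/sdb1
    raid-disk               0
    device                  /dev/sdc1
    raid-disk               1
```

Note also that rather than dd-ing partitions by hand, raidtools-0.90 can resync the mirror itself: raidhotadd /dev/md0 /dev/sdc1 adds the new member and starts reconstruction from the surviving disk (watch /proc/mdstat for progress). If your mkraid does not recognize --only-superblock, raidhotadd is the supported route.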
Installing drivers automatically
Hi! Can anyone tell me how to automatically install my driver at boot time? Thanks Regards, Abhishek Khaitan
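One common approach on 2.2-era systems is a module alias in /etc/conf.modules, so kmod/kerneld loads the driver the first time its device is opened. A sketch, where "mydriver" and the major number 42 are made-up placeholders for illustration:

```
# /etc/conf.modules (named /etc/modules.conf on some distributions)
# Load mydriver automatically on first open of block-device major 42.
alias block-major-42 mydriver
# Optional module parameters:
options mydriver debug=0
```

The module must be installed under /lib/modules/<version>/ where modprobe can find it. Alternatively, an unconditional /sbin/modprobe mydriver line in a boot script such as /etc/rc.d/rc.local loads it at every boot.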
RE: help: read-ahead not set: what is it???
Can you send your RAID configuration file(s)? Maybe I will be able to help then...

-----Original Message-----
From: Sandro Dentella [SMTP:[EMAIL PROTECTED]]
Sent: Thursday, July 06, 2000 2:23 AM
To: [EMAIL PROTECTED]
Subject: help: read-ahead not set: what is it???

Hi, I'm trying to configure RAID 1 with the two partitions /dev/hda7 and /dev/hda8, but I get:

mkraid version 0.36.4
parsing configuration file
mkraid: aborted

cat /proc/mdstat:

personalities : [1 linear] [2 raid0] [3 raid1]
read_ahead not set
md0 : inactive

What does read_ahead mean? (I modprobed raid1; is there something else I'm disregarding?) Sorry, but I need a hint very soon (if possible ;-)

sandro *:-)
--
Sandro Dentella *:-)
e-mail: [EMAIL PROTECTED] [EMAIL PROTECTED]
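For reference, a minimal two-partition RAID-1 configuration in the 0.90-style /etc/raidtab format looks like the sketch below (the chunk size is an arbitrary example; since the mkraid above reports version 0.36.4, the exact format your tools expect may differ, so check the documentation shipped with your raidtools):

```
# /etc/raidtab - RAID 1 across two partitions (sketch)
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    chunk-size              64k
    persistent-superblock   1
    device                  /dev/hda7
    raid-disk               0
    device                  /dev/hda8
    raid-disk               1
```

Two caveats: "read_ahead not set" in /proc/mdstat is the normal idle state before any array has been started, not necessarily an error in itself; and mirroring two partitions of the same physical disk gives no protection against disk failure, so two separate disks are the usual configuration.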
unresolved symbols during insmod
Hi! I am writing a block driver module for Linux. While compiling the module, do I have to use any switches with "cc"? I am getting "unresolved symbols" errors when I try to insmod my driver. The symbols (functions) are defined as extern in asm/bitops.h...

Thanks,
Abhishek
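A frequent cause of exactly this symptom on 2.2-era kernels is compiling the module without optimization: the bit operations in asm/bitops.h are declared `extern inline`, so unless the compiler actually inlines them (which requires -O), the object file is left with unresolved external references that insmod cannot satisfy. A minimal build sketch, where "mydriver" and the kernel source path are placeholders:

```makefile
# Typical flags for building a 2.2.x driver module out of tree.
# -O2 makes the extern inline functions in asm/*.h get inlined;
# -D__KERNEL__ and -DMODULE enable the kernel/module declarations.
CC     = cc
KSRC   = /usr/src/linux
CFLAGS = -D__KERNEL__ -DMODULE -O2 -Wall -I$(KSRC)/include

mydriver.o: mydriver.c
	$(CC) $(CFLAGS) -c mydriver.c
```

Without -D__KERNEL__ and -DMODULE, many header declarations are compiled out, which produces similar unresolved-symbol errors.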