Re: Unable to mount the SD card formatted using the DIGITAL CAMERA on Linux box
What camera is this?

On Friday 29 July 2005 13:23, Srinivas G. wrote:
> Dear All,
>
> We have developed a block device driver to handle flash media devices in the Linux 2.6.x kernel. It is working fine. We are able to mount SD cards that are formatted on Windows systems, but we are unable to mount cards that are formatted using the DIGITAL CAMERA.
>
> We have found one thing: Windows and the digital camera both format the SD cards as FAT12 only. So why are we not able to mount the SD cards on the Linux box when they are formatted using the digital camera?
>
> Could anyone explain the problem? It would be a great help to us. Thanks in advance; we are looking forward to a POSITIVE reply.
>
> Thanks and Regards,
> Srinivas G
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
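One plausible cause for the question above (a guess, not something established in the thread): many cameras format a card as an unpartitioned "superfloppy" with the FAT boot sector at sector 0, while PC tools usually write an MBR partition table and put the filesystem at a partition offset. A driver or mount invocation that assumes one layout fails on the other. The sketch below (function name and classification labels are invented for illustration) distinguishes the two layouts from the first 512 bytes of the device:

```python
import struct

def classify_first_sector(sector: bytes) -> str:
    """Rough classification of a card's first 512-byte sector.

    "fat-superfloppy": a FAT boot sector sits at offset 0.
    "mbr-partition-table": an MBR with at least one partition entry.
    """
    if len(sector) != 512 or sector[510:512] != b"\x55\xaa":
        return "unknown"
    # A FAT boot sector carries a plausible BIOS Parameter Block:
    # bytes-per-sector at offset 11 (little-endian) and
    # sectors-per-cluster at offset 13.
    bytes_per_sector = struct.unpack_from("<H", sector, 11)[0]
    sectors_per_cluster = sector[13]
    if bytes_per_sector in (512, 1024, 2048, 4096) and \
       sectors_per_cluster in (1, 2, 4, 8, 16, 32, 64, 128):
        return "fat-superfloppy"
    # Otherwise check the four 16-byte MBR partition entries at offset 446
    # for a non-zero partition type byte.
    for i in range(4):
        entry = sector[446 + 16 * i: 446 + 16 * (i + 1)]
        if entry[4] != 0:
            return "mbr-partition-table"
    return "unknown"
```

If the card turns out to hold an MBR, the filesystem usually has to be mounted at the partition's start offset rather than at offset 0 (on an image file, for example, with mount's `offset=` option).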
Re: Sandisk Compact Flash
So then you will have to reboot sometimes. BTW, IDE can be hot-swapped if you are careful: unmount the device, unplug it, plug in the same device (same model), remount. A bit risky for your hardware, but I have used this method for hard disks several times when a system had to keep running. Never used it for Compact Flash though. Another option: buy a USB card reader or a PCMCIA controller for your PC. Costs about $10.

On Saturday 16 July 2005 18:31, you wrote:
> On Sat, Jul 16, 2005 at 03:04:54AM -0400, Michael Krufky wrote:
> > I recommend picking up a CF-to-IDE adapter, such as this:
> >
> > http://www.acscontrol.com/Index_ACS.asp?Page=/Pages/Products/CompactFlash/IDE_To_CF_Adapter.htm
>
> That's fine if you have a spare IDE port, but unlikely if you're using a laptop. Also these do not support hot swapping.
>
> -- Dave
Megaraid + reiser problem
<6>subfs 0.9
Kernel logging (ksyslog) stopped.
Kernel log daemon terminating.
Activating swap-devices in /etc/fstab... done
Checking root file system...
fsck 1.34 (25-Jul-2003)
Reiserfs super block in block 16 on 0x802 of format 3.6 with standard journal
Blocks (total/free): 4194960/2841624 by 4096 bytes
Filesystem is NOT clean
Filesystem seems mounted read-only. Skipping journal replay.
Checking internal tree..finished
done
exit status of (boot.rootfsck) is (0)
run boot scripts (boot.md boot.device-mapper)
Activating device mapper...
Creating /dev/mapper/control character device with major:10 minor:63.
done
exit status of (boot.md boot.device-mapper) is (0 0)
run boot scripts (boot.localfs)
Checking file systems...
fsck 1.34 (25-Jul-2003)
done
Setting up done
Mounting local file systems...
proc on /proc type proc (rw)
tmpfs on /dev/shm type tmpfs (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/hda on /media/cdrecorder type subfs (ro,nosuid,nodev,fs=cdfss,procuid,iocharset=utf8)

dmesg output with /dev/sda3 mounted at boot time:

<6>Adding 1052216k swap on /dev/sda1. Priority:42 extents:1
<5>ReiserFS: sda2: Removing [209 121367 0x0 SD]..done
<5>ReiserFS: sda2: Removing [209 121358 0x0 SD]..done
<5>ReiserFS: sda2: Removing [209 121158 0x0 SD]..done
<5>ReiserFS: sda2: There were 3 uncompleted unlinks/truncates. Completed
<6>md: Autodetecting RAID arrays.
<6>md: autorun ...
<6>md: ... autorun DONE.
<6>device-mapper: 4.3.0-ioctl (2004-09-30) initialised: [EMAIL PROTECTED]
<5>ReiserFS: sda3: found reiserfs format "3.6" with standard journal
<5>ReiserFS: sda3: using ordered data mode
<4>reiserfs: using flush barriers
<4>ReiserFS: sda3: warning: sh-461: journal_init: wrong transaction max size (4294967295). Changed to 1024
<5>ReiserFS: sda3: journal params: device sda3, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 2, max trans age 30
<5>ReiserFS: sda3: checking transaction log (sda3)
<5>ReiserFS: sda3: replayed 8 transactions in 1 seconds
<4>reiserfs: disabling flush barriers on sda3
<5>ReiserFS: sda3: Using r5 hash to sort names
<6>subfs 0.9

I cannot find any more info without risking a crash of /dev/sda2 too, and since the backup machine is lying around waiting to be upgraded, I cannot risk too much fault detection.

Best regards,
Norbert van Nobelen
www.edusupport.nl (with a bit of luck, if the server stays alive)
--
EduSupport: Linux Desktop for schools and small to medium business in The Netherlands and Belgium - http://www.edusupport.nl
Proposal for new (meta) filesystem
Dear fellow linux-kernel mailing list users,

I want to present a new (meta) filesystem proposal for you to consider. The filesystem is meant to be optimized for business file servers, mail servers, and other filesystems with an above-average amount of repetitive data.

Features:
- Depending on the implementation at block level or at file level (as a meta system): copy on change of the block or the file.
- A meta database with file attributes such as name, permissions, and a checksum per block.
- The files themselves will not be visible to users, and will not have an owner (not even nobody). The files will only be accessible through references to them.
- Changes are recorded as changes against the previous file, a la LVM snapshots, but per user or group. This does not mean there is an old version that can be accessed: for the user accessing the file, there is just one version.
- Changes are kept in a separate part of the filesystem, a reserved percentage of the disk space, which can be stretched by the filesystem if there are more changes than expected. The changes are kept in a database-like system.
- There is only one change record per file per user/group combination. So when an already-changed file is copied, the changed file is made permanent, and the copy gets a reference to this new permanent file.
- Deletion of the source file is virtual as long as references to it remain.

Files with different names, dates, etc., but with the same binary content will be stored only once this way. From the outside nobody will see that this happens; backup programs will make a normal backup. du should report the virtual used space, while df should report the virtual used space, the optimized use, and the real free blocks.

Any more ideas/comments?

Norbert
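To make the single-instance-storage idea in the proposal concrete, here is a toy user-space model (all names and the API are invented for illustration, not part of the proposal): identical content is stored once, each filename is just a counted reference to a blob, and the two accounting views match the du/df distinction the proposal asks for.

```python
import hashlib

class DedupStore:
    """Toy model of single-instance storage with reference counting."""

    def __init__(self):
        self._blobs = {}      # checksum -> content
        self._refcount = {}   # checksum -> number of names referencing it
        self._names = {}      # filename -> checksum

    def write(self, name, data: bytes):
        digest = hashlib.sha256(data).hexdigest()
        if name in self._names:          # overwrite: drop the old reference
            self.delete(name)
        if digest not in self._blobs:    # store each distinct content once
            self._blobs[digest] = data
        self._refcount[digest] = self._refcount.get(digest, 0) + 1
        self._names[name] = digest

    def delete(self, name):
        digest = self._names.pop(name)
        self._refcount[digest] -= 1
        if self._refcount[digest] == 0:  # last reference gone: reclaim blob
            del self._refcount[digest]
            del self._blobs[digest]

    def virtual_used(self):
        """What du would report: every name counted at full size."""
        return sum(len(self._blobs[d]) for d in self._names.values())

    def real_used(self):
        """What the disk actually holds: each blob counted once."""
        return sum(len(b) for b in self._blobs.values())
```

A real block-level implementation would of course checksum per block rather than per file, and would need to handle hash collisions; the model only shows the reference-counting bookkeeping.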
Re: [SOLVED] Linux 2.6.10-rc3-bk15 hanged under high load (i386)
I have the same broken box here: graphics freezes, sometimes the whole box. It needs to warm up. Once warmed up, it will keep running stable forever (-: (OK, the "forever" claim cannot be verified.)

On Sunday 20 February 2005 11:39, Jose Luis Domingo Lopez wrote:
> Hi:
>
> This is more an attempt to get this indexed by web search engines than a request for help, although maybe someone can draw some conclusion from the following and it may be of some use to someone. Although the subject refers to a specific kernel version, what is reported in this mail is also valid for (at least) any kernel version up to 2.6.11-rc3.
>
> Back in the beginning of January I sent a (desperate) email to the list reporting frequent lockups with (then) recent 2.6.x kernel versions. The Message-ID of the original post is [EMAIL PROTECTED], and the subject "Linux 2.6.10-rc3-bk15 hanged under high load (i386)".
>
> Well, I have been suffering from the same problem since then, but soon I noticed some kind of pattern: I was only getting one (and only one) kernel hang each day. I power on the PC, start downloading tons of mail (mostly spam), at some point spamassassin gets killed and the logs fill up with very nasty messages like the ones reported back in January ("Unable to handle kernel paging request" and stack dumps). Finally the box freezes.
>
> After the above I reboot the box (a simple RESET, not a power OFF / power ON cycle), start downloading mail again... and no more paging request failures, stack dumps, killed processes, or hung boxes, _never_.
>
> So it seemed like the hardware is flawed and somehow needed to be rebooted once to be put in a "stable" configuration. From the day I noticed the pattern I booted the PC, then immediately did a "reboot", and then started using the box as usual: no more errors or problems of any kind since then.
>
> It's been two weeks since then, and no problems.
> Maybe it was just a coincidence, so today I booted the box and started using it without an "initializing reboot" like the previous two weeks. And some minutes after I started downloading mail, the box froze hard with the messages inlined at the end of this email.
>
> So it seems clear to me that something is very broken in the hardware, but somehow it gets fixed after a reboot. I have no knowledge to investigate this further, but at least someone with the same problem will search through Google and hopefully find this message.
>
> Greetings.
>
> Feb 20 11:16:30 dardhal kernel: Unable to handle kernel paging request at virtual address 0016a51c
> Feb 20 11:16:30 dardhal kernel: printing eip:
> Feb 20 11:16:30 dardhal kernel: c01320b7
> Feb 20 11:16:30 dardhal kernel: *pde =
> Feb 20 11:16:30 dardhal kernel: Oops: 0002 [#1]
> Feb 20 11:16:30 dardhal kernel: Modules linked in: sch_htb cls_u32 sch_ingress ipt_LOG ipt_limit ipt_state iptable_filter ipt_MASQUERADE iptable_nat ip_conntrack ip_tables ppp_deflate bsd_comp ppp_async crc_ccitt ppp_generic slhc deflate zlib_deflate zlib_inflate twofish serpent aes_i586 blowfish des sha256 sha1 crypto_null af_key md5 ipv6 snd_via82xx uhci_hcd usbcore i2c_viapro tuner tvaudio bttv video_buf v4l2_common btcx_risc tveeprom videodev snd_ymfpci snd_ac97_codec snd_pcm_oss snd_mixer_oss snd_pcm snd_opl3_lib snd_timer snd_hwdep snd_page_alloc snd_mpu401_uart snd_rawmidi snd_seq_device snd soundcore skystar2 dvb_core mt352 stv0299 nxt2002 firmware_class mt312 8139too 8139cp mii via_agp agpgart reiserfs xfs exportfs dm_mod it87 i2c_sensor i2c_isa rtc
> Feb 20 11:16:30 dardhal kernel: CPU:0
> Feb 20 11:16:30 dardhal kernel: EIP:0060:[c01320b7]Not tainted VLI
> Feb 20 11:16:30 dardhal kernel: EFLAGS: 00010006 (2.6.11-rc3)
> Feb 20 11:16:30 dardhal kernel: EIP is at buffered_rmqueue+0x57/0x1a0
> Feb 20 11:16:30 dardhal kernel: eax: c10ef5f8 ebx: c0309224 ecx: c0309250 edx: 0016a518
> Feb 20 11:16:30 dardhal kernel: esi: 0246 edi: c0309240 ebp: c0309224 esp: c0b69df4
> Feb 20 11:16:30 dardhal kernel: ds: 007b es: 007b ss: 0068
> Feb 20 11:16:30 dardhal kernel: Process spamassassin (pid: 5554, threadinfo=c0b68000 task=cdd065d0)
> Feb 20 11:16:30 dardhal kernel: Stack: c77ae000 0010 c0309250 c10ef5e0 c0309224 cd5a07dc
> Feb 20 11:16:30 dardhal kernel: ce900b1c c01326c3 c0309224 80d2 0001
> Feb 20 11:16:30 dardhal kernel: 0001 cdd065d0 0010 c030948c 80d2 c0b69e74
> Feb 20 11:16:30 dardhal kernel: Call Trace:
> Feb 20 11:16:30 dardhal kernel: [c01326c3] __alloc_pages+0x423/0x450
> Feb 20 11:16:30 dardhal kernel: [c013c3e1] do_anonymous_page+0x71/0x130
> Feb 20 11:16:30 dardhal kernel: [c013c503] do_no_page+0x63/0x2b0
> Feb 20 11:16:30 dardhal kernel: [c013c92e] handle_mm_fault+0xde/0x150
> Feb 20 11:16:30 dardhal kernel: [c010fd8c] do_page_fault+0x18c/0x599
> Feb 20 11:16:30 dardhal kernel: [c0218516] i8042_interrupt+0x116/0x2f0
> Feb 20 11:16:30 dardhal kernel: [c013ef2f] unmap_vma_list+0x1f/0x30
Re: Yenta TI: ... no PCI interrupts. Fish. Please report.
Have you tried getting it to work without ACPI in the kernel at all, and starting from there?

Best regards,

Norbert

On Saturday 19 February 2005 22:54, you wrote:
> Hi everyone,
>
> I've been banging my head on this one for a couple of days with no luck.
>
> I have an IBM Thinkpad G41 with a Pentium 4M with Hyperthreading. I can't get the PCMCIA working at all. I've tried turning off hyperthreading, I've tried with and without preempt, I've even added pci=noacpi. I've added Len's ACPI patches, but nothing works.
>
> Here's lspci -vvv:
>
> :00:00.0 Host bridge: Intel Corp. 82852/855GM Host Bridge (rev 02)
>         Subsystem: IBM: Unknown device 0579
>         Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B-
>         Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ >SERR- <PERR-
>         Latency: 0
>         Region 0: Memory at d000 (32-bit, prefetchable) [size=256M]
>         Capabilities: <available only to root>
>
> :00:00.1 System peripheral: Intel Corp. 855GM/GME GMCH Memory I/O Control Registers (rev 02)
>         Subsystem: IBM: Unknown device 057a
>         Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
>         Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR-
>         Latency: 0
>
> :00:00.3 System peripheral: Intel Corp. 855GM/GME GMCH Configuration Process Registers (rev 02)
>         Subsystem: IBM: Unknown device 057b
>         Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
>         Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR-
>         Latency: 0
>
> :00:01.0 PCI bridge: Intel Corp. 855GME GMCH Host-to-AGP Bridge (Virtual PCI-to-PCI) (rev 02) (prog-if 00 [Normal decode])
>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B-
>         Status: Cap- 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR-
>         Latency: 96
>         Bus: primary=00, secondary=01, subordinate=01, sec-latency=64
>         I/O behind bridge: f000-0fff
>         Memory behind bridge: c100-c1ff
>         Prefetchable memory behind bridge: e000-efff
>         BridgeCtl: Parity- SERR- NoISA+ VGA+ MAbort- >Reset- FastB2B-
>
> [ ... USB controllers snipped out ... ]
>
> :00:1e.0 PCI bridge: Intel Corp. 82801 PCI Bridge (rev 81) (prog-if 00 [Normal decode])
>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B-
>         Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR+
>         Latency: 0
>         Bus: primary=00, secondary=02, subordinate=08, sec-latency=168
>         I/O behind bridge: 3000-6fff
>         Memory behind bridge: c200-cfff
>         Prefetchable memory behind bridge: f000-f7ff
>         BridgeCtl: Parity- SERR- NoISA+ VGA- MAbort- >Reset- FastB2B-
>
> :00:1f.0 ISA bridge: Intel Corp. 82801DBM LPC Interface Controller (rev 01)
>         Control: I/O+ Mem+ BusMaster+ SpecCycle+ MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
>         Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
>         Latency: 0
>
> [ ... snipped out IDE Bridge controllers too ... ]
>
> :00:1f.3 SMBus: Intel Corp. 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) SMBus Controller (rev 01)
>         Subsystem: IBM: Unknown device 0547
>         Control: I/O+ Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
>         Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
>         Interrupt: pin B routed to IRQ 11
>         Region 4: I/O ports at 1880 [size=32]
>
> [ ... snipped audio VGA NVidia and Ethernet controllers ... ]
>
> :02:01.0 CardBus bridge: Texas Instruments PCI1520 PC card Cardbus Controller (rev 01)
>         Subsystem: IBM ThinkPad T30/T40
>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
>         Latency: 168, Cache Line Size: 0x20 (128 bytes)
>         Interrupt: pin A routed to IRQ 177
>         Region 0: Memory at 3fefb000 (32-bit, non-prefetchable) [size=4K]
>         Bus: primary=02, secondary=03, subordinate=06, sec-latency=176
>         Memory window 0: 4000-403ff000 (prefetchable)
>         Memory window 1: 4040-407ff000
>         I/O window 0: 4000-40ff
>         I/O window 1: 4400-44ff
>         BridgeCtl: Parity- SERR- ISA- VGA- MAbort- >Reset+ 16bInt+ PostWrite+
>         16-bit legacy interface ports at 0001
>
> :02:01.1 CardBus bridge: Texas Instruments PCI1520 PC card Cardbus Controller (rev 01)
>         Subsystem: IBM ThinkPad T30/T40
>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
>         Interrupt: pin B routed to IRQ 185
>         Region 0: Memory at 3fefc000 (32-bit, non-prefetchable) [size=4K]
>         Bus: primary=02, secondary=07,
Re: negative diskspace usage
I think the 101% usage is the interesting point here: you are using more diskspace than you have available. I missed the first mail though, so what filesystem is this and which kernel version?

On Saturday 22 January 2005 11:09, Wichert Akkerman wrote:
> Previously [EMAIL PROTECTED] wrote:
> > Wichert Akkerman wrote:
> > > After cleaning up a bit df suddenly showed interesting results:
> > >
> > > Filesystem    Size  Used Avail Use% Mounted on
> > > /dev/md4     1019M  -64Z  1.1G 101% /tmp
> > >
> > > Filesystem  1K-blocks                  Used  Available Use% Mounted on
> > > /dev/md4      1043168 -73786976294838127736    1068904 101% /tmp
> >
> > It looks like Windows 95's FDISK command created the partitions.
>
> There is no way you can see that from the output I gave, and it is also incorrect.
>
> > The partition boundaries still remain where Windows 95 put them, and you have overlapping partitions.
>
> fdisk does not create overlapping partitions.
>
> Wichert.

--
EduSupport: Linux Desktop for schools and small to medium business in The Netherlands and Belgium (http://www.edusupport.nl)

- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
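One plausible mechanism behind a huge negative "Used" column (a sketch only, not the actual df or kernel source; the numbers and the off-by-a-few-blocks bug are illustrative): block counts are unsigned, so if an accounting bug makes the free count exceed the total count, the unsigned subtraction total - free wraps around 2**64, and printing the result as a signed 64-bit integer yields an enormous negative value.

```python
# Illustrative sketch: how an unsigned block-count underflow can show up
# as a huge negative "Used" figure when printed as a signed 64-bit int.

def to_signed64(u):
    """Reinterpret an unsigned 64-bit value as two's-complement signed."""
    return u - 2**64 if u >= 2**63 else u

total_blocks = 1043168           # 1K-blocks on /dev/md4, from the df output
free_blocks = total_blocks + 5   # hypothetical bug: free exceeds total

used = (total_blocks - free_blocks) % 2**64   # C-style unsigned wraparound
print(used)              # a value just below 2**64
print(to_signed64(used)) # -5: a tiny accounting error turns hugely negative
```

A small discrepancy of a few blocks is enough; the magnitude of the printed negative number says nothing about the size of the underlying error.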
Re: LVM2
With RAID5 you can choose how many hot-standby/failover disks you want to use. The number (or ratio) of disks reserved for failover determines the chance that a single disk failure turns into complete failure of the RAID. It is still pretty safe just to have 7 active disks and 1 hot-standby disk, thus getting you 3500 manufacturer's GB (3.34TB).

About LVM on a RAID: with hardware RAID it will work for sure. With software RAID: make the RAID partitions and build your RAID; on that RAID, set the partition type to LVM, and it should work. On LVM you can then use ext3 (your preferred fs).

On Thursday 20 January 2005 23:17, you wrote:
> It is for a group. For the most part it is data access/retention. Writes and such would be more similar to a desktop. I would use SATA if they were (nearly) equally priced and there were awesome 1394-to-SATA bridge chips that worked well with Linux. So, right now, I am looking at ATA to 1394.
>
> So, to get 2TB of RAID5 you have 6 500 GB disks, right? So, will this work within one LV? Or is it 2TB of diskspace total? So, are volume groups pretty fault tolerant if you have a bunch of RAID5 LVs below them? This is my one worry about this.
>
> Second, you mentioned file systems. We were talking about ext3. I have never used any others in Linux (barring ext2, minixfs, and fat). I had heard XFS from IBM was pretty good. I would rather not use reiserfs.
>
> Any recommendations?
>
> Trever
>
> P.S. Why won't an LV support over 2TB?
>
> S.P.S. I am not really worried about the boot and programs drive. They will be spun down most of the time, I am sure.
>
> On Thu, 2005-01-20 at 22:40 +0100, Norbert van Nobelen wrote:
> > A logical volume in LVM will not handle more than 2TB. You can tie together the LVs in a volume group, thus going over the 2TB limit. Choose your filesystem well though; some have a 2TB limit too.
> >
> > Disk size: what are you doing with it? 500GB disks are ATA (maybe SATA). ATA is good for low-end servers or near-line storage; SATA can be used equally to SCSI (I am going to suffer for this remark).
> >
> > RAID5 in software works pretty well (survived a failed disk, and recovered another failing RAID in 1 month). Hardware is better since you don't have a boot partition left over, which is usually just present on one disk (you can mirror that yourself of course).
> >
> > Regards,
> >
> > Norbert van Nobelen
> >
> > On Thursday 20 January 2005 20:51, you wrote:
> > > I recently saw Alan Cox say on this list that LVM won't handle more than 2 terabytes. Is this LVM2 or LVM? What is the maximum amount of disk space LVM2 (or any other RAID/MIRROR-capable technology that is in Linus's kernel) can handle? I am talking with various people and we are looking at Samba on Linux to do several different namespaces (obviously one tree), most averaging about 3 terabytes, but one would have in excess of 20 terabytes. We are looking at using 320 to 500 gigabyte drives in these arrays. (How? IEEE-1394. Which brings a question I will ask in a second email.)
> > >
> > > Is RAID 5 all that bad using this software method? Is RAID 5 available?
> > >
> > > Trever Adams
> > > --
> > > "They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -- Benjamin Franklin, 1759
>
> --
> "Assassination is the extreme form of censorship." -- George Bernard Shaw (1856-1950)

--
EduSupport: Linux Desktop for schools and small to medium business in The Netherlands and Belgium (http://www.edusupport.nl)
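The capacity figures being traded above can be sanity-checked with back-of-the-envelope arithmetic (a sketch; the 8 x 500 GB layout with one hot spare is the configuration discussed, and note that RAID5 parity consumes one active disk's worth of space, which the raw 3500 GB figure does not account for):

```python
# RAID5 capacity arithmetic for the setup discussed: 8 x 500 GB disks,
# one kept as hot spare, so 7 active in the array. RAID5 distributes
# one disk's worth of parity, so usable space is (active - 1) * disk.

DISK_GB = 500          # manufacturer's GB, i.e. 10**9 bytes
ACTIVE, SPARES = 7, 1  # disks in the array vs. hot standby

raw_gb = ACTIVE * DISK_GB               # 3500 GB of raw active capacity
usable_gb = (ACTIVE - 1) * DISK_GB      # 3000 GB after RAID5 parity
usable_tib = usable_gb * 10**9 / 2**40  # the same figure in binary TiB

print(raw_gb, usable_gb, round(usable_tib, 2))
```

The decimal-GB vs. binary-TiB distinction is where most "missing space" confusion in threads like this comes from: 3000 manufacturer's GB is only about 2.73 TiB as a filesystem will report it.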
Re: LVM2
Even as an LVM user, guess what I used before answering (-:

On Thursday 20 January 2005 23:34, Alasdair G Kergon wrote:
> On Thu, Jan 20, 2005 at 03:22:14PM -0700, Trever L. Adams wrote:
> > PV = the device
> > VG = groups of them (the RAID5 array?)
> > LV = what? the file system?
>
> http://www.tldp.org/HOWTO/LVM-HOWTO/anatomy.html
> http://www.novell.com/products/linuxenterpriseserver8/whitepapers/LVM.pdf
> [Out-of-date now, but descriptions of concepts still useful.]
>
> LVM mailing list for LVM questions:
> https://www.redhat.com/mailman/listinfo/linux-lvm
>
> Alasdair
Re: LVM2
A logical volume in LVM will not handle more than 2TB. You can tie together the LVs in a volume group, thus going over the 2TB limit. Choose your filesystem well though; some have a 2TB limit too.

Disk size: what are you doing with it? 500GB disks are ATA (maybe SATA). ATA is good for low-end servers or near-line storage; SATA can be used equally to SCSI (I am going to suffer for this remark).

RAID5 in software works pretty well (survived a failed disk, and recovered another failing RAID in 1 month). Hardware is better since you don't have a boot partition left over, which is usually just present on one disk (you can mirror that yourself of course).

Regards,

Norbert van Nobelen

On Thursday 20 January 2005 20:51, you wrote:
> I recently saw Alan Cox say on this list that LVM won't handle more than 2 terabytes. Is this LVM2 or LVM? What is the maximum amount of disk space LVM2 (or any other RAID/MIRROR-capable technology that is in Linus's kernel) can handle? I am talking with various people and we are looking at Samba on Linux to do several different namespaces (obviously one tree), most averaging about 3 terabytes, but one would have in excess of 20 terabytes. We are looking at using 320 to 500 gigabyte drives in these arrays. (How? IEEE-1394. Which brings a question I will ask in a second email.)
>
> Is RAID 5 all that bad using this software method? Is RAID 5 available?
>
> Trever Adams
> --
> "They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -- Benjamin Franklin, 1759
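As context for where a 2TB ceiling like the one discussed typically comes from (a sketch of the common cause, not a statement about LVM's exact internals): block devices of that era were frequently addressed with a 32-bit sector counter over 512-byte sectors, which caps a single device or volume at 2 TiB.

```python
# Why "2TB" keeps showing up as a limit: a 32-bit counter of 512-byte
# sectors can address at most 2**32 * 512 bytes.

SECTOR_BYTES = 512
max_bytes = 2**32 * SECTOR_BYTES

print(max_bytes)           # 2199023255552 bytes
print(max_bytes / 2**40)   # exactly 2.0 TiB
```

Anything that widens the counter (or, as suggested above, aggregates multiple sub-2TB pieces at a higher layer) sidesteps the limit.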
Re: Mysterious Lag With SATA Maxtor 250GB 7200RPM 16MB Cache Under Linux using NFSv3 UDP
Only with NFS? I have a RAID array of the same discs, and the system just sometimes seems to hang completely (for a second or less) and then goes on again at normal speed (110MB/s). I am running a SuSE 9.1 stock kernel (2.6.5-7.111-smp) on that machine.

On Monday 17 January 2005 21:06, you wrote:
> When writing to or from the drive via NFS, after 1GB or 2GB, it "feels" like the system slows to a crawl; the mouse gets very slow, almost like one is burning a CD at 52X under PIO mode. I originally had this disk in my main system with an Intel ICH5 chipset (ABIT IC7-G mobo) and a Pentium 4 2.6GHz w/HT, and I could not believe the hit the system took when writing over NFS.
>
> I put the drive on its own controller in another box to see if I could replicate the problem. The drive is on a Promise 150TX2 Plus controller. When writing locally on the disk, there is no problem. When writing to or from the drive over NFS, the system becomes _very_ slow, df -h lags, the system slows almost to a halt. I checked /proc/interrupts and /var/log/messages but no errors result.
>
> There is some type of system bottleneck when using NFS with this drive.
>
> All the file systems are XFS on all systems.
> Can anyone offer any suggestions?
> I cannot explain or find the cause of this.
>
> All systems are running 2.6.10.
> I have an excerpt of both using dd if=/dev/zero of=file.img (locally) and from a remote computer (on a full-duplex gigabit network).
>
> Locally: dd if=/dev/zero of=out.img
>
> vmstat output:
>
> procs ---memory-- ---swap-- -io --system-- cpu
> r b swpd free buff cache si so bi bo in cs us sy id wa
> 1 0412 4928 0 66261200 0 0 1065 285 16 84 0 0
> 1 0412 4608 0 66255600 0 78932 1148 250 14 86 0 0
> 1 0412 4672 0 66286800 013 1068 274 16 84 0 0
> 1 0412 5120 0 66164400 0 78928 1116 245 13 87 0 0
> 1 0412 4460 0 66302000 0 0 1098 254 15 85 0 0
> 1 0412 5376 0 66093600 0 78932 1073 279 14 86 0 0
> 1 0412 5376 0 66212400 017 1147 267 18 82 0 0
> 2 0412 5376 0 6615800044 31901 1052 286 15 85 0 0
> 1 0412 5440 0 6617400048 47048 1168 265 15 85 0 0
> 1 0412 5312 0 66215200 414 1048 260 16 84 0 0
> 1 0412 4664 0 66219600 4 78941 1143 282 15 85 0 0
> 2 0412 4600 0 66293200 0 0 1104 251 14 86 0 0
> 1 0412 5048 0 66144800 0 78954 269 16 84 0 0
> 1 0412 5368 0 66212000 0 0 1140 250 17 83 0 0
> 1 0412 5432 0 66064800 0 78928 1075 260 16 84 0 0
> 1 0412 5176 0 66222400 4 0 1217 266 14 86 0 0
> 1 0412 5432 0 66210400 0 0 1078 274 16 84 0 0
> 1 0412 4984 0 66208400 0 78932 1189 245 12 88 0 0
> 1 0412 5440 0 66201200 013 1080 271 17 83 0 0
> 1 0412 5440 0 66116800 0 78928 1109 242 14 86 0 0
> 1 0412 5248 0 66223200 0 0 1104 270 13 87 0 0
>
> Remotely (writing over NFS, using NFSv3, not using direct I/O)
>
> vmstat output:
>
> procs ---memory-- ---swap-- -io --system-- cpu
> r b swpd free buff cache si so bi bo in cs us sy id wa
> 1 0596 4624 0 66246000 0 448 4037 1099 1 99 0 0
> 1 0596 4796 0 66156400 0 96067 2877 788 1 75 0 24
> 0 0596 5496 0 66169200 0 160 2297 517 0 44 56 0
> 1 0596 4824 0 6628400024 633 3896 1147 1 98 1 0
> 1 0596 4696 0 66253600 0 416 3987 1075 1 99 0 0
> 1 0596 4892 0 66234400 0 448 3746 1142 0 99 1 0
> 0 0596 5868 0 66046800 0 96064 3550 973 1 97 1 1
> 2 0596 4600 0 66332800 0 318 3390 823 1 71 28 0
> 1 0596 5432 0 66233600 0 448 3872 1066 0 100 0 0
> 1 0596 5360 0 66188800 0 453 3951 1144 1 99 0 0
> 1 0596 4936 0 66156800 0 94225 2942 726 0 75 7 18
> 1 0596 4528 0 66294400 0 259 2984 738 1 64 35 0
> 1 0596 5368 0 66186400 8 639 4234 1116 1 99 0 0
> 2 0596 5368 0 66214000 0 448 3919 1086 0 100 0 0
> 0 0596 5512 0 66286000 4 256 2747 650 0 56 43 1
> 0 0596 5520 0 662860