> 1. On boot up, the screen goes completely black until the xserver is
> started.
KMS provides its own framebuffer console driver -- disable any other
framebuffer drivers such as (u)vesafb and enable
FRAMEBUFFER_CONSOLE_DETECT_PRIMARY under Device Drivers -> Graphics
support -> Console display driver support.
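A .config sketch of the relevant options (the i915 driver is just an
example; pick the KMS driver that matches your GPU):

```
CONFIG_DRM=y
CONFIG_DRM_I915=y                            # example: Intel GPU
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
# competing framebuffer drivers disabled:
# CONFIG_FB_VESA is not set
# CONFIG_FB_UVESA is not set
```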
> Does anyone have experience with gparted?
I have no experience with Parted Magic, but I have used the GParted
live CD (http://gparted.sourceforge.net/livecd.php) a lot. No idea how
the two compare.
As for gparted (which is a lot more than a GUI for parted), I have used
it on ext4 a couple of times.
>> 1- if the root partition is [part of] what you're copying, you
>> *must* mount it read-only (mount -o ro /dev/sdc /work)
>
> Not from my experience; I simply mount, exec, and go - Works fine
Let's say you are 50% done copying a partition when something writes to
it. If the write lands entirely in the part you have already copied,
your image is merely out of date; but writes that touch both the copied
and the not-yet-copied part leave you with an image that corresponds to
no state the filesystem was ever actually in.
> 1. boot up knoppix
> 2. create a mount point: mkdir /work
> 3. mount the root partition on /work: mount /dev/sdc /work
> 4. cd /work/usr/bin
> 5. run dcfldd: ./dcfldd
This is fine, provided that
1- if the root partition is [part of] what you're copying, you *must*
mount it read-only (mount -o ro /dev/sdc /work)
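A sketch of the session with the read-only mount applied. It needs root
on the real machine, so it is shown only as an illustration; the source
disk /dev/sda and the destination path /mnt/target are assumptions:

```
mkdir /work
mount -o ro /dev/sdc /work      # read-only: the source cannot change mid-copy
cd /work/usr/bin
./dcfldd if=/dev/sda of=/mnt/target/sda.img hash=md5 hashlog=/mnt/target/sda.md5
```

The hash/hashlog options make dcfldd checksum the data as it copies, so
you get a verification value for free.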
> Is there any faster, reliable way to checksum
> whole partitions (not on a "per file" basis)?
It depends on where your bottleneck is...
If you're cpu-bound you can try with a faster hash: md5sum or even
md4sum would be a good choice (collision resistance is irrelevant in
this application).
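If you're I/O-bound instead, nothing beats a single sequential read:
md5sum can read a block device directly. A sketch, demonstrated on
ordinary files since the device paths are examples:

```shell
# On real partitions you would run, as root:
#   md5sum /dev/sda1 /dev/sdb1
# The same idea on ordinary files:
dd if=/dev/urandom of=src.img bs=1M count=4 2>/dev/null
cp src.img dst.img
md5sum src.img dst.img        # equal sums => byte-identical contents
```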
Hi,
> I tried these kernels (all vanilla):
> 2.6.32.13
> 2.6.33.5
> 2.6.34.0
So it's not a known problem that has been fixed.
Just a wild guess... can you try recompiling the kernel *without*
pata_via? Some people have reported problems with SATA drives on VIA
controllers when pata_via is loaded.
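If the driver is built in, that means rebuilding with the option off
(blacklisting has the same effect when it is a module). A sketch of the
.config change:

```
# CONFIG_PATA_VIA is not set
CONFIG_SATA_VIA=y          # keep the SATA driver itself
```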
> ata1.00: failed command: READ DMA
> ata1.00: cmd c8/00:80:00:3f:c1/00:00:00:00:00/e0 tag 0 dma 65536 in
> res 51/84:4f:00:3f:c1/00:00:00:00:00/e0 Emask 0x10 (ATA bus error)
> ata1.00: status: { DRDY ERR }
> ata1.00: error: { ICRC ABRT }
> ata1: soft resetting link
> ata1.00: configured f
> I was more thinking of a tool, which test the whole disc surface
> and reports every bad sector.
badblocks -wvs (a destructive write-mode test which takes forever, but
in my experience is quite good at making failing disks actually fail ;)
During the test you can monitor the SMART attributes (smartctl -A, esp.
the reallocated sector count).
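A small helper for watching that attribute; the smartctl invocation is
the standard one, but the parsing assumes smartctl's usual -A table
layout:

```shell
# print the raw Reallocated_Sector_Ct value from `smartctl -A` output on stdin
realloc_count() {
    awk '$2 == "Reallocated_Sector_Ct" { print $10 }'
}
# usage on a real disk (device name is an example):
#   smartctl -A /dev/sdb | realloc_count
```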
> Are you sure ext[234] is compiled statically into the kernel in this
> .config?
> Also the drivers for the EIDE / SATA controller.
Missing FS and/or controller drivers will result in a regular kernel
boot with a panic at the end, when it's time to mount root and load init.
In this case grub's setup is not the problem: grub has already done its
job by the time the panic hits.
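The classic symptom is a "VFS: Unable to mount root fs" panic. The cure
is building the root filesystem and the disk controller driver in (=y,
not =m); for example, ext4 on an AHCI SATA controller (the controller
here is an assumption, check lspci for yours):

```
CONFIG_EXT4_FS=y
CONFIG_ATA=y
CONFIG_SATA_AHCI=y
```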
> 1. Are there reliability issues surrounding this technology in Gentoo?
My only experience is with a Gentoo-based iSCSI target (i.e. "server");
my clients are Windows-based. The system is a low-end Core 2 Duo running
the latest stable kernel and iSCSI Enterprise Target; I have been
running this setup.
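For reference, an IET target definition is just a few lines in
/etc/ietd.conf; the IQN and backing path below are made-up examples:

```
Target iqn.2010-06.org.example:storage.disk1
    Lun 0 Path=/dev/vg0/iscsi_lv,Type=blockio
```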
Hi,
> The budget is miniscule - and the performance demands
> (bandwidth and latency) are completely non-challenging.
This IMHO pretty much rules out any kind of server-class hardware, which
tends to be both costly and power-hungry. If you're thinking about
buying used stuff, be sure to factor in the power bill.
> Many years ago I wrote an OS/2 program to handle all of this. Perhaps I
> should blow the dust off it, convert it to use POSIX functions and
> publish it as FOSS.
Why reinvent the wheel? Just use 'sfdisk -d'.
andrea
> The RAID superblock is at the end of the filesystem, to avoid any
> conflicts with the filesystem superblock.
It can be either at the start, at the end or even 4K into the device,
depending on which format (metadata revision) is used. In this case I
suppose it's 0.90, which is stored at the end of the device.
> Agreed, however Iain also said that he tried to mount individual
> partitions and this failed. This should work with RAID1
Only if you force the filesystem type (i.e. mount -t xxx, or use
mount.xxx directly).
However, while I know this works with ext2/ext3/ext4, I have no idea
whether xfs tolerates it as well.
> md: bind
> md: bind
> md: bind
> raid1: raid set md100 active with 3 out of 4 mirrors
> md: bind
> md: bind
> md: bind
> raid1: raid set md101 active with 3 out of 4 mirrors
AFAICT this is all you need to know -- you definitely have two software
(mdraid) RAID 1 volumes:
md100 with hda2, hde2 and a third partition, and md101 likewise (each
running with 3 of its 4 mirrors present).