Can I help with solving this?
I have this problem now: I would like to create a fakeraid RAID10 4TB array (dual boot).
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/599255
Title:
dmraid fails to read promise RAID sector count larger than 32-bits
Ghah! After pressing "Post Comment" also found this firmer confirmation
of (part of) the algorithm:
"Solution: From 0-2 TB the sector size is 512. From 2-4 TB the sector
size is 1024. Then from 4+ it changes the sector size to 2048; that's
why the information is displayed as unallocated. Foll
Due to this issue being brought to IRC #ubuntu I did some background
research to try to confirm Danny's theory about sector-size.
So far the best resource I've found in the Promise Knowledge base
(kb.promise.com) is:
https://kb.promise.com/thread/how-do-i-create-an-array-larger-than-2tb-for-wind
You should bear in mind that fakeraid puts your data at risk. In the
event of a crash or power failure, some data can be written to one disk
and not the other. When the system comes back up, a proper raid system
will copy everything from the primary to the secondary disk, or at least
the parts of
It's been 2 years, 8 months, 20 days since Danny Wood last posted in
this thread. Just quickly: I really appreciate your efforts attempting
to fix this problem without even having the hardware. That's dedication.
I've just set up a 2x4TB RAID1 mirror in Windows, which of course leads
me to this th
Hi Vertago1,
Yes the patch appeared to work, we merged it to the Ubuntu dev packages and it
worked for some people.
The sector size was still an issue in some setups, as Windows appeared to use
both 512 and 1024 byte sector sizes.
However, once we hit the release, quite a few people then repor
I was able to build the dmraid packages with Danny's patch:
https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/599255/+attachment/3428773/+files/26_pdc-large-array-support.patch
After installing them I am able to see my ntfs volumes. I mounted the
largest read only and I was able to read the f
Well I figure it might be useful to start collecting samples of metadata from
different arrays using the pdc part of dmraid. I have two machines with
different chipsets one has a 1.7TB striped volume the other a 3.7TB striped
volume.
I created these dumps by running:
sudo dmraid -rD /dev/sda
cd
** Attachment added: "M5A99X_EVO_R20_3.7tb.hex"
https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/599255/+attachment/4103824/+files/M5A99X_EVO_R20_3.7tb.hex
I'm not sure why you can't build it, but the part of the source of most
interest is pdc.c. The problem is that promise has never provided
specifications for the format, so it was reverse engineered. The other
problem is that it looks like the Windows driver pretends the disk has a
larger sector s
I have set up a build environment for dmraid and will start looking
through it to get an idea of whether or not I could contribute a patch.
Any advice on where to start or on what documentation would be useful
would be appreciated.
I am trying to set up a build environment to troubleshoot the bug, but
the typical package build process is failing:
I ran:
sudo apt-get build-dep dmraid
apt-get source dmraid
cd dmraid-1.0.0.rc16
dpkg-buildpackage -b -uc 1> log.txt 2>&1
It fails with:
Now at patch 27_ignore-too-small-devices.patc
If you have a pdc volume that is over 2TiB, then yes.
I believe I am affected by this bug, but I wanted to check to see if I
am having the same issue.
I have an amd 990X chipset which uses SB950, according to
http://www.redhat.com/archives/ataraid-list/2012-March/msg1.html it
is probably a Promise controller.
I have two 2TB disks in RAID0 which
Bug #1089096 may be a duplicate of this bug.
Linux understands GPT just fine, but ldm *is* "dynamic disks", so if you
tried to use that to glue them back together, then linux would not
understand it.
This bug is ancient, and perhaps nobody cares anymore, but I've figured
out a bit more about where we are left with respect to this.
dmraid userland always assumes that the sector size is 512. It is a
hard-coded constant value.
Meanwhile, in kernel land, dm devices always map their sector sizes,
Oh yes, of course... I thought it was a given that this is pdc specific
behavior.
Sorry Phillip if I wasn't clear; what I meant to say was that with
virtual drives in both VirtualBox and QEMU, Windows 7 created a GPT with
512 bytes per sector no matter the drive size.
So I concluded that it must be the promise raid driver itself that
creates the larger sector size which w
You contradicted yourself there Danny. If they always have a sector
size of 512 bytes then we wouldn't have anything to fix. You must have
meant that the larger arrays have larger sector size.
And yea, I can't see where you set the sector size, so I posted a
question to the ataraid mailing list
I can't see where dmraid advertises its sector size!
Phillip do you have any idea?
I did find a thread where someone described the same symptoms of large arrays
on the promise raid controller and the sector counts:
http://ubuntuforums.org/showthread.php?t=1768724
(Phillip you commented on this th
Ok,
After some testing I think I can confirm that the sector size is coming from
the pdc driver and not windows.
All the drives I created of various sizes with windows and gparted show up in
both operating systems and always have a sector size of 512.
So we need to change the sector size adver
64-bit windows 7, yes.
That is interesting.
I have been doing various searches online and can't find any other references
to windows doing this.
Are you using 64-bit windows?
I am just setting up a virtual machine with a rather large virtual drive
to see if I can replicate.
I created a 3TB array, and it does indeed use a sector size of 1024
bytes. I also tried a 4TB and a 5TB array to verify your theory, and it
seems to be correct. The 4TB array is still using a sector size of 1024
bytes, while the 5TB array used 2048.
That confirms that the metadata is not at the start of the disk. It
looks like the problem is just the sector size. Could you try
recreating the array such that the total size is around 3 TB and see if
that gives a sector size of 1k?
I dumped the first 6 sectors of each individual disk, both with windows
formatting and dmraid formatting. I can't make much out of the data, but
hopefully it's helpful...
** Attachment added: "diskdumps2.tar.gz"
https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/599255/+attachment/3431478/+
That is really strange. I did not think Windows could handle non 512
byte sector devices. There does not appear to be any known field in the
pdc header that specifies the sector size. It could be that it just
uses 2k for anything over 2TB. Actually, I wonder if it uses whatever
sector size woul
Does the gparted version work in Ubuntu?
It doesn't appear to have a protective MBR as in the GPT spec but this may not
be an issue.
It appears that Windows believes the LBA size of the drive is 2048
(0x800) bytes whereas Ubuntu thinks it is 512 bytes (0x200), as the GPT
header is located at LBA 1.
I
Sorry for the late response, I haven't had access to my computer over
the weekend.
I dumped the first 17kB of the array with the formatting from windows,
and after formatting it with gparted. It would seem the partition table
from windows is offset further into the disk than the one created by
gpa
According to the .offset files in your metadata it was found at offset
0, or the start of the disk. Are you sure this is not where it is at?
If you have created a correct GPT then kpartx should find them.
Does dmraid detect the correct RAID layout?
I.e. stride size, count, etc.
You need to investigate the partitioning on the disk, and make sure
your data is backed up, as you are likely to lose the partitioning
here.
Dump the curren
I tried to look into calculating the offset, but if I understand the
metadata detection code correctly, it seems that is not the problem I am
having. The metadata for my array is found within the first loop in
pdc_read_metadata, as an offset of end_sectors, so I assume it is at the
end of the disk.
Looking back I think this was the issue Nishihama Kenkowo had with the
original patch.
Sorry if you are already working on this offset issue but I thought I
would add some thoughts.
Looking through the dmraid code I cannot see where it would add an offset.
Would the offset simply be the metadata
It appears that on smaller arrays, the pdc metadata is in a sector near
the end of the drive, but on the larger ones it is at the beginning.
Since the metadata is at the start of the drive, that should require
adding some offset before the first raid stripe, which dmraid does not
seem to have done.
# dmsetup table
pdc_bdfcfaebcj: 0 1562436 striped 4 256 8:0 0 8:16 0 8:32 0 8:80 0
What does dmsetup table show?
Sorry about the sector counts, I did the calculations again, and it
seems that the sector count in the metadata is probably correct. I got
the disk size in megabytes from windows disk manager, and calculated the
sector count from that, but since the disk size is rounded to megabytes
and the sector
How did you determine the disk size?
I have been doing some testing with Danny's patch, and it seems
something is still missing... The patch works fine, but the sector
counts in the metadata don't quite add up, and I still cannot get the
array to work.
I did some calculations based on the disk size, and it seems with the
8TB array th
Hi Phillip,
Attached is a patch that should fix the issue based on the ubuntu 12.10 version
of dmraid.
It compiles but is untested, are you able to test this for me?
Do you need me to create a debdiff or is it easy for you to do?
I haven't had my build environment set up at home since I first at
Good eye! I was comparing those two sets of metadata trying to find a
location that appeared to have the correct value in both cases but
missed that.
Excellent, thank you for doing that.
I will cook up a patch later, similar to my old one, that uses this new
offset.
I created the arrays you asked for, and it seems 0x2E8 is indeed the
correct location. The values I got are 0x00, 0x01, 0x02 and 0x03 as
assumed.
** Attachment added: "metatadata.tar.gz"
https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/599255/+attachment/3427586/+files/metatadata.tar.gz
Henry, if you manage to back up your data you could confirm this if you
create several different sized arrays.
2TB will create 0x0000 at 0x2E8
3TB will create 0x0001 at 0x2E8
6TB will create 0x0002 at 0x2E8
8TB will create 0x0003 at 0x2E8
After each array creation please dump the metadata and post it.
Metadata from here also seems to agree:
https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/770600/+attachment/2094374/+files/dmraid.pdc.tar.gz
His has high bits of 0x0000 at 0x2E8 for a 2TB array
Hi Phillip and Henry,
I have taken a quick look at this and compared the latest metadata with
Nish's from before, and it looks like the offset for the high bits might
actually be at 0x2E8 (in filler2).
Basically we have 3 metadata sets in this bug report.
Nish's is in metadata.tar.gz, the fir
I agree, the high bits are either not stored at all, or they are stored
in the area dmraid reads as filler2, which seems unlikely (I assume the
high byte for my array should be 0x03). The problem with calculating the
size by using the sector counts of each disk is that the resulting size
seems to
Would it be possible for you to rebuild the array using only 3 drives,
and capture that metadata?
Looking over that first set of metadata, I am starting to think that the
higher order bits simply are not stored at all, and the total size
simply must be computed using the size of each disk and the
# fdisk -lu (all 4 disks have exactly the same sizes)
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096
Also could you boot into windows and find out what it thinks the exact
sector count of the array is?
Can you post the output of fdisk -lu or otherwise list the exact sector
count of the drives?
** Changed in: dmraid (Ubuntu)
Status: Fix Released => Triaged
Can this bug be reopened, since the original fix was reverted, and the
problem still exists? I have an 8TB pdc raid set that I have run into
this issue with, and have been trying to fix it... I'd be happy to help
if anyone more familiar with dmraid wants to try to fix this as well.
I have attached a m
Hi Phillip Susi
I have the same problem as this.
kim@kim-desktop:~$ sudo dmraid -s
[sudo] password for kim:
*** Active Set
name : pdc_bbjaiahci
size : 3518828800
stride : 128
type : stripe
status : ok
subsets: 0
devs : 2
spares : 0
kim@kim-desktop:~$ dmraid -V
dm
@ Phillip Susi
> What program is this? It appears to be buggy so you should file a bug
> against that package.
Ok. Will do. Using the Disk Utility that comes with Ubuntu 10.10+
> This error is unrelated to this bug report though. Also your array is
> so large that it must use GPT instead of the M
On 6/22/2011 5:08 AM, mercury80 wrote:
> Error creating partition table: helper exited with exit code 1: In
> part_create_partition_table: device_file=/dev/dm-0, scheme=0
> got it
> got disk
> committed to disk
> BLKRRPART ioctl failed for /dev/dm-0: Invalid argument
What program is this? It app
I am running dmraid - 1.0.0.rc16-4.1ubuntu3.
When I try to format a striped 2x2TB raid, this is the result:
Error creating partition table: helper exited with exit code 1: In
part_create_partition_table: device_file=/dev/dm-0, scheme=0
got it
got disk
committed to disk
BLKRRPART ioctl failed for
This bug was fixed in the package dmraid - 1.0.0.rc16-4.1ubuntu2
---
dmraid (1.0.0.rc16-4.1ubuntu2) natty; urgency=low
* Added 21_fix_testing.patch: Testing with dm devices was failing
on Ubuntu because /dev/dm-X is the actual device node, but the
code wanted it to be a syml
No problem, I just wanted to make sure you didn't have reason to think
the field shouldn't be reduced to 8 bits.
The documentation is unavailable, so it was found through
experimentation; the only bits in the metadata that were free and
happened to have the correct values were these ones.
That's why I made my comments about testing in post 87.
The upper 8 could be used for anything, I guess they just happened t
Danny, how did you discover this upper 16 bits of size? Was it from
experimentation or from some documentation? I ask because I have been
working with a sample of pdc metadata from another bug and found that
this patch identified a value of total_secs_h of 256 when it should be
0. This makes me thi
I must have been drunk by the time I posted that last night. I got the
same wrong results as Nishihama. I've cleaned up the patch today and
added my own and now I get correct results:
dmraid -s
*** Set
name : pdc_cdfjcjhfhe
size : 3906249984
stride : 128
type : stripe
status : ok
subsets:
I don't mind at all Phillip.
Do what you like!
No, the windows fakeraid drivers load and bind only to the specific
fakeraid hardware they were designed for. I suppose if you can
configure the virtual machine to use the correct PCI ID of the fakeraid
instead of the usual generic AHCI ID then it should work.
I think I'm going to clean this patc
Hi Phillip,
Sorry for the late response, I don't get much time for launchpad these
days.
The jmicron name fixing patch is because I have jmicron raid on my testing
machine and it's running 10.04.
Interestingly I tried 10.10 the other day and that patch had been dropped. I
think my jmicron patch
There seem to be some unrelated changes that should be discarded:
1) You add 21_fix_jmicron_naming.patch to debian/patches/series
2) autoconf/config.sub and config.guess were touched, probably from autoreconf
I just wanted to make sure that these weren't intentional.
Looks like it works:
*** Set
name : pdc_cdfjcjhfhe
size : 3906249984
stride : 128
type : stripe
status : ok
subsets: 0
devs : 3
spares : 0
*** Set
name : pdc_cdgjdcefic
size : 585891840
stride : 128
type : stripe
status : ok
subsets: 0
devs : 3
spares : 0
Nevermind, I actually read the script and figured it out. Maybe you
should forward the patch upstream for review?
Ok, I think I am starting to see now. This PDC format is just really
bad. Instead of having a single record that can define more than one
array, and specifies the region of interest on each component disk like
some of the more sane formats, it just defines additional complete
records with all of
I am not sure what you mean by second raid set. The pdc format only
defines a single raid set with up to 8 disks. In the original size I
see 00a5 d4e8, and at offset 232 where your patch defines to be the
upper 16 bits I see 00 00.
I was wondering if that was possible.
metadata.tar.gz is a full dump.
You should see a second raid set which is 300 GB or so, this is supposed to be
2.5TB but has the top bits truncated.
With my patch it detects the raid set correctly but windows was using a larger
sector size and so Ubuntu and
The metadata in dmraid-pdc.tar is for an array that is smaller than 2TB,
so does not overflow the 32bit sector count. Do we not have a sample of
the metadata from a raid suffering from this problem?
Danny, you don't need a 2tb drive to debug this. You can either use a
virtual machine or the loopback driver. I didn't notice that there was
a metadata sample attached to this bug report. I might take a look at
it.
If the 'normal' drive doesn't have any raid metadata, i.e. has not been used
in a fakeraid before, then you shouldn't suffer from this bug.
This bug is primarily to do with the metadata not being read properly by dmraid
and so the device isn't exposed properly to the rest of the system.
In particular
Btw. if my bug is really related to this then I doubt that it is a
dmraid bug but more likely a kernel driver issue..?
Is my issue related to this bug?
I am using the SB750 in RAID mode as I have one RAID1 array. All other drives
are "normal" drives.
I now tried to add a "normal" 3TB drive and although I am able to create a GPT
and a partition in Windows or Linux, the GPT is not visible in the other
OS, e.g.
Unfortunately no, I didn't. I don't have the actual promise hardware, so
debugging this issue was very hard.
Nishihama Kenkowo helped me a lot but I never completed the work. Debugging
hardware is much easier when it is sitting in front of you.
I think I was close, but I decided to give up as I coul
** Also affects: baltix
Importance: Undecided
Status: New
Danny, it sounds like you found and fixed the problem from your
comments. Can you post the patch so we can put this one to bed?
** Tags removed: 2tb dmraid patch
** Changed in: dmraid (Ubuntu)
Status: New => Triaged
** Changed in: dmraid (Ubuntu)
Importance: Undecided => Medium
I have seen the SB7x0 development guide; it was difficult for me to understand.
http://support.amd.com/us/Embedded_TechDocs/43366_sb7xx_bdg_pub_1.00.pdf
By the way, these bugs are in all Linux distros (CentOS, Red Hat 6.0
beta & current, Fedora), and likewise Acronis products.
If no one would do the bug fix, C
I swapped the motherboard again and restored.
A correction to the RAID BIOS versions:
M4A78-EM/1394 AMD790GX + SB750 raidbios 3.0.1540.39
M3A78-T AMD780G + SB710 raidbios 3.0.1540.39, 3.0.1540.59
Additional information:
I swapped the motherboard from an Asus M4A78-EM/1394 to an Asus M3A78-T.
They have nearly compatible SB7xx raid chips.
Asus M4A78-EM/1394: AMD790GX + SB750, raidbios 3.0.1540.59 and 3.0.1540.39
(I tested both).
Asus M3A78-T: AMD780G + SB710, raidbios I cannot see right now.
I get sa
"The MBR is correct" is just my opinion.
sudo dd if=/dev/mapper/pdc_befgjjibfc of=linmbr2.img bs=512 count=1
In this environment, a strange phenomenon has been occurring since around
the 1st of July: the command (sudo grub-install /dev/mapper/myraid1) is
sometimes viable, sometimes not.
The operation is
Well obviously something isn't reading the MBR correctly.
The MBR is read by the dmraid code so this could be why.
Could you dump the MBR again and post it up?
sudo dd if=/dev/mapper/pdc_cdgjdcefic of=linmbr2.img bs=512 count=1
--
dmraid fails to read promise RAID sector count larger than 32-bit
On Ubuntu, gparted:
http://dl.dropbox.com/u/6626165/Screenshot--dev-mapper-pdc_befgjjibfc%20-%20GParted.png
The numbers are just half of what should be presented. This is simply a
miscalculation. The MBR is correct.
Thanks.
>So you have 2 x 2TB and 1 x 500GB drives.
Yes, that is a good idea. However, I tried it last week.
I can create two arrays, but cannot create three.
It is a limitation of this raid BIOS: only two arrays (on this Asus
motherboard).
Maybe I saw hope.
I initialized 2.09TiB as MBR and create
** Tags added: patch
For completeness I am attaching the debdiff for my attempt at enabling the
extended LBA.
I think the offset is wrong but it is documented in the header file.
I am sorry we could not fix this.
** Patch added: "not_complete.debdiff"
http://launchpadlibrarian.net/51584787/not_complete.debdiff
Hmmm.
I think there is an issue with the offset in that case.
And earlier I was fooling myself by reading the same MBR back twice.
It's hard to reverse engineer over a long distance. If I could find a
cheap promise controller I would buy one to have a go at fixing this, but
unfortunately it looks l
Next, I booted Win7 x64.
In Computer Management, the 2nd raid array is there: basic, normal,
2142.97GB, RAW disk, not ntfs, so I cannot see the pictures I put there
from Ubuntu.
I have been watching the World Cup on TV too, but I should get back to
my core business next Monday.
Very grateful to Danny.
I booted WinXP 32-bit.
In Computer Management, the 2nd raid array is there: basic, normal,
4095.99GB. <--- Abnormal value.
And I initialized as Windows told me: MBR.
I created a 2TiB partition.
I decided to throw away the 0.09TiB fraction.
And rebooted, switched to Ubuntu.
# gparted
A strange phenomenon occurs.
There is a GPT partition.
http://dl.dropbox.com/u/6626165/Screenshot--dev-mapper-pdc_befgjjibfc%20-%20GParted.png
E
And rebooted, switched to Win7 x64.
Disk Manager asked me which type to initialize, GPT or MBR; there is no
partition table.
Same situation as #46. I tried again and again.
Going around in circles.
(Using kpartx, per your procedure.)
Ubuntu can create a partition on the GPT disk (2nd) with gparted.
Disk /dev/mapper/pdc_befgjjibfc: 2300 GB, 2300997404160 bytes
255 heads, 63 sectors/track, 279747 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start
>Anyway you do need to use gpt partitioning to use large volumes like
this.
As written previously:
Ubuntu cannot create a GUID partition table on my 2nd array, with
gparted or Disk Utility.
On the other array Ubuntu can make a GUID partition table.
(parted) p
>With the NTFS partition you created in Ubuntu is the same partition then
>visible in windows?
As written previously:
On raid array 1 it is visible; I can use/read/write the NTFS created by
Ubuntu in Windows. No problem; interoperability both ways between Ubuntu
and Windows XP/7.
On raid array 2, I can
I have done some further digging and it seems that kpartx can read the gpt
partition table from dmraid.
(sudo apt-get install kpartx)
Usage:
kpartx -a /dev/mapper/pdc_cdgjdcefic
Use that command once you have booted or created the gpt structure and
you should then have the /dev/mapper/X block de
Oh dear.
It seems this version of dmraid won't handle gpt!
So you may be a little stuck with using partitions of that size.
There is a thread here where someone has made a patch:
http://ubuntuforums.org/showthread.php?t=1369224
I will have a look at it later to see if I can incorporate it into m
Hmmm, that is interesting.
Both MBRs have the same structure, which means the offset is correct.
I can see one issue though.
In the windows MBR the sector count is listed as 0x7FFFF800 = 2147481600
sectors. The normal block size is 512 bytes, so 2147481600 x 512 =
1099510579200 ≈ 1TiB (This is what
I had a miscalculation:
2097149MiB vs 2097152MiB; the latter is the new setting.
I recreated the partition and recreated the MBR.
Disk /dev/mapper/pdc_cdgjdcefic: 2498 GB, 2498996344320 bytes
255 heads, 63 sectors/track, 303819 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes