As others have already said:

 

Don't use GPT or partition your disks if you are going to use LVM. LVM runs 
fine directly on top of the block devices your PERC 6/E presents the MD1000 
disks as.
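
For what it's worth, a minimal sketch of that approach, assuming the array 
shows up as /dev/sdb (the VG and LV names here are just placeholders):

    pvcreate /dev/sdb                   # whole device as a PV - no partition table at all
    vgcreate md1000vg /dev/sdb          # one VG on top of it
    lvcreate -n data -L 5T md1000vg     # leave some VG space free for later growth
    mkfs.xfs /dev/md1000vg/data         # XFS straight on the LV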

 

I have a 2950 (and a PERC 5/E) connected to an MD1000 that contains 9x 500GB 
disks (/dev/sdb) and 6x 1TB disks (/dev/sdc) in two RAID 5 arrays. 

 

I originally started out with just the 500GB disks configured in a RAID 50 
setup. That proved a dead end (no expansion possible!), so I added the six 1TB 
disks, placed them in the same VG as the existing 500GB disks, and did a single 
pvmove (which took quite a while) to empty /dev/sdb. I then reorganized the 
500GB disks into a RAID 5 and added that new RAID 5 back to the existing VG.
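
In LVM terms the migration looked roughly like this (a sketch from memory; 
same device names as above, VG name a placeholder):

    pvcreate /dev/sdc              # the new 1TB RAID 5 becomes a PV
    vgextend md1000vg /dev/sdc     # ...and joins the existing VG
    pvmove /dev/sdb                # migrate all extents off the old PV (slow!)
    vgreduce md1000vg /dev/sdb     # drop the emptied PV from the VG
    pvremove /dev/sdb              # wipe its PV label
    # ...reorganize the 500GB disks from RAID 50 to RAID 5 in OMSA...
    pvcreate /dev/sdb              # then hand the new RAID 5 back to LVM
    vgextend md1000vg /dev/sdb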

 

I don't know whether having partitioned my original /dev/sdb would have 
prevented any of that, but I certainly never needed partitions.

 

And yes - this was all done through SSH and an SSH tunnel to the OMSA 
web interface, where I reconfigured the 500GB disks. I never needed to put a 
hand on the hardware once the new 1TB disks were installed.

 

Oh, and without partitions I can easily grow any of the filesystems when 
needed, without worrying about existing partition sizes. At the moment I've 
got ~2TB of unoccupied space in the VG that any logical volume can grab a 
chunk of when needed (and without taking the filesystem offline).
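
Growing online is a two-liner; for example, for a hypothetical LV named 
"data" mounted on /mnt/data (XFS must be grown while mounted):

    lvextend -L +500G /dev/md1000vg/data   # grab 500GB of the free VG space
    xfs_growfs /mnt/data                   # grow XFS online - takes the mount point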

 

Regards,

Jens Dueholm Christensen 
Business Process and Improvement, Rambøll Survey IT



________________________________

From: [email protected] [mailto:[email protected]] On Behalf Of Collins, Kevin L.
Sent: 10 August 2009 17:53
To: Dell Poweredge list server
Subject: Large File Systems, GPT, MD1000

 

I'm trying to deploy a system comprising a PE1950 + PERC 6/E + MD1000 with 
14x 1TB drives.  I'm installing Ubuntu Hardy (8.04 LTS) server on the machine.  
Internally, the PE1950 has a PERC 6/i hooked to two 750GB drives in a RAID 1; 
that's where the OS will live.  The MD1000 is simply going to be used for 
storage - with LVM and XFS.  Everything is recognized and working.

 

My problem comes in when I go to access the space on the MD1000.  I created a 
single RAID 10 array with a single hot spare on the unit, which equates to 
about 6TB of usable space.  But using fdisk, I can only create a _single_ 2TB 
partition.  I'd obviously like to access all 6TB.

 

So, I've done a bit of research and found that to get past this limit I need 
to use parted and GPT-labeled drives, which seems to be the proper way to get 
this done.  But I've never used GPT-labeled drives, and I'm concerned about 
issues with failures, recoveries and expansions.
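
From what I've read, the parted route would look something like this (a 
sketch only; assuming the array shows up as /dev/sdb):

    parted /dev/sdb mklabel gpt              # GPT label instead of MS-DOS
    parted /dev/sdb mkpart primary 0% 100%   # one partition spanning the array
    pvcreate /dev/sdb1                       # LVM PV on the GPT partition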

 

Therefore, I'm calling on those with more experience than me to provide some 
guidance.  I know that several people on this list maintain servers that access 
MUCH more space than this 6TB.  Any advice is welcome.

 

Questions:

1). Am I going down the proper road?

2). Is there a better way?

3). What hurdles will I need to overcome?

 

Oh, one other little detail about this server: when it's deployed, it will be 
in an offsite location with only SSH access, so making things easy is the goal 
here.

 

Thanks in advance.

 

--

Kevin L. Collins, MCSE
Systems Manager

 

nesbitt engineering, inc.
227 North Upper Street
Lexington, KY 40507

 

Direct Line: 859.685.4524
Cell: 859.317.1501

 

_______________________________________________
Linux-PowerEdge mailing list
[email protected]
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq