Re: Using GTP and glabel for ZFS arrays

2010-07-24 Thread Pawel Tyll
 Easiest way to create a sparse file, e.g. 20 GB, assuming test.img doesn't
 exist already
No no no. Easiest way to do what you want to do:
mdconfig -a -t malloc -s 3t -u 0
mdconfig -a -t malloc -s 3t -u 1

Just make sure to offline and delete the mds ASAP, unless you have 6TB
of RAM waiting to be filled ;) - note that with RAIDZ2 you have no
redundancy once the two fake disks are gone, and with RAIDZ1 this won't
work at all. I can't figure out a safe way (keeping data redundancy the
whole time) of doing this with only 2 free disks and 3.5TB of data - a
third disk would make things easier, a fourth would make them trivial;
note that temporary disks 3 and 4 don't have to be 2TB, 1.5TB will do.
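A lower-risk variant of the same trick, sketched here as a suggestion (file
paths and md unit numbers are made up): back the md devices with sparse files
via -t vnode instead of -t malloc, so they consume neither RAM nor real disk
space until ZFS actually writes to them.

```shell
# Sketch only -- paths and md unit numbers are illustrative.
# A sparse file has a large logical size but allocates no blocks up front.
truncate -s 2T /var/tmp/fake0.img
truncate -s 2T /var/tmp/fake1.img
ls -ls /var/tmp/fake?.img            # logical size 2T, ~0 blocks used

if command -v mdconfig >/dev/null 2>&1; then   # FreeBSD-only section
    # -t vnode uses the file as backing store instead of wired RAM.
    mdconfig -a -t vnode -f /var/tmp/fake0.img -u 10
    mdconfig -a -t vnode -f /var/tmp/fake1.img -u 11
    # ...build the pool with /dev/md10 and /dev/md11, then detach promptly:
    # mdconfig -d -u 10 ; mdconfig -d -u 11
fi
```

The same caveat applies: once the fake disks are faulted, a RAIDZ2 pool is
running with no redundancy at all.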

I've done this a dozen times, had no problems, no gray hair, and not
a bit of data lost ;)


___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: install touching mbr

2010-07-24 Thread S Roberts
Hello,

On Mon, 5 Apr 2010 22:28:25 -0700
Randi Harper ra...@freebsd.org wrote:

 On Sat, Apr 3, 2010 at 8:05 PM, Bruce Cran br...@cran.org.uk wrote:
  On Saturday 03 April 2010 21:58:56 Jeremy Chadwick wrote:
  On Sat, Apr 03, 2010 at 05:48:12PM -0300, Nenhum_de_Nos wrote:
   I just installed an 8.0R amd64 from a memstick. When asked, I chose
   to leave the MBR untouched. When I rebooted, the FreeBSD
   bootloader was in control. Either this option doesn't do what I
   think it should, or is there really an issue here?
 
  I can confirm this behaviour.  Someone may have broken something
  when tinkering around in that part of sysinstall (since the
  Standard vs. BootMgr options were moved around compared to
  previous releases).
 
  I have a patch at http://reviews.freebsdish.org/r/15/ waiting to be
  committed. I believe the None option won't change the bootcode
  itself but will still mark the FreeBSD partition as active.
 
  --
  Bruce Cran
 
 I disagree with some of the wording. Specifically, lines 100-102 of
 usr.sbin/sade/menus.c
 
 If you will only have FreeBSD on the machine the boot manager is not
 needed and it slows down the boot while offering you the choice of
 which operating system to boot.
 
 ^^ not 100% true, as the boot manager also provides the option of PXE
 booting. This statement seems excessively wordy and unnecessary.
 
 Also, should this be broken up into two patches? One for the change in
 sade, the other for sysinstall? I'm not picky about this, but you are
 fixing two issues in two separate programs.

Any chance that this patch review was completed, approved and made it
into 8.1 Release?

Thanks.

Regards,

S Roberts

 
 -- randi



Re: Using GTP and glabel for ZFS arrays

2010-07-24 Thread Dan Langille

On 7/24/2010 7:56 AM, Pawel Tyll wrote:

Easiest way to create a sparse file, e.g. 20 GB, assuming test.img doesn't
exist already


You trim posts too much... there is no way to compare without opening 
another email.


Adam wrote:


truncate -s 20g test.img
ls -sk test.img
1 test.img




No no no. Easiest way to do what you want to do:
mdconfig -a -t malloc -s 3t -u 0
mdconfig -a -t malloc -s 3t -u 1


In what way is that easier?  Now I have /dev/md0 and /dev/md1 as opposed 
to two sparse files.



Just make sure to offline and delete mds ASAP, unless you have 6TB of
RAM waiting to be filled ;) - note that with RAIDZ2 you have no
redundancy with two fake disks gone, and if going with RAIDZ1 this
won't work at all. I can't figure out a safe way (data redundancy all
the way) of doing things with only 2 free disks and 3.5TB data - third
disk would make things easier, fourth would make them trivial; note
that temporary disks 3 and 4 don't have to be 2TB, 1.5TB will do.


The lack of redundancy is noted and accepted.  Thanks.  :)

--
Dan Langille - http://langille.org/


Re: Using GTP and glabel for ZFS arrays

2010-07-24 Thread Dan Langille

On 7/22/2010 4:11 AM, Dan Langille wrote:

On 7/22/2010 4:03 AM, Charles Sprickman wrote:

On Thu, 22 Jul 2010, Dan Langille wrote:


On 7/22/2010 3:30 AM, Charles Sprickman wrote:

On Thu, 22 Jul 2010, Dan Langille wrote:


On 7/22/2010 2:59 AM, Andrey V. Elsukov wrote:

On 22.07.2010 10:32, Dan Langille wrote:

I'm not sure of the criteria, but this is what I'm running:

atapci0: <SiI 3124 SATA300 controller> port 0xdc00-0xdc0f mem
0xfbeffc00-0xfbeffc7f,0xfbef-0xfbef7fff irq 17 at device 4.0 on
pci7

atapci1: <SiI 3124 SATA300 controller> port 0xac00-0xac0f mem
0xfbbffc00-0xfbbffc7f,0xfbbf-0xfbbf7fff irq 19 at device 4.0 on
pci3

I added ahci_load="YES" to loader.conf and rebooted. Now I see:


You can add siis_load="YES" to loader.conf for SiI 3124.


Ahh, thank you.

I'm afraid to do that now, before I label my ZFS drives for fear that
the ZFS array will be messed up. But I do plan to do that for the
system after my plan is implemented. Thank you. :)


You may even get hotplug support if you're lucky. :)

I just built a box and gave it a spin with the old ata stuff and then
with the new (AHCI) stuff. It does perform a bit better and my BIOS
claims it supports hotplug with ahci enabled as well... Still have to
test that.


Well, I don't have anything to support hotplug. All my stuff is
internal.

http://sphotos.ak.fbcdn.net/hphotos-ak-ash1/hs430.ash1/23778_106837706002537_10289239443_171753_3508473_n.jpg




The frankenbox I'm testing on is a retrofitted 1U (it had a SCSI
backplane, now has none).

I am not certain, but I think with 8.1 (which it's running) and all the
CAM integration stuff, hotplug is possible. Is a special backplane
required? I seriously don't know... I'm going to give it a shot though.

Oh, you also might get NCQ. Try:

[r...@h21 /tmp]# camcontrol tags ada0
(pass0:ahcich0:0:0:0): device openings: 32


# camcontrol tags ada0
(pass0:siisch2:0:0:0): device openings: 31

resending with this:

ada{0..4} give the above.

# camcontrol tags ada5
(pass5:ahcich0:0:0:0): device openings: 32

That's part of the gmirror array for the OS, along with ad6 which has
similar output.

And again with this output from one of the ZFS drives:

# camcontrol identify ada0
pass0: Hitachi HDS722020ALA330 JKAOA28A ATA-8 SATA 2.x device
pass0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)

protocol ATA/ATAPI-8 SATA 2.x
device model Hitachi HDS722020ALA330
firmware revision JKAOA28A
serial number JK1130YAH531ST
WWN 5000cca221d068d5
cylinders 16383
heads 16
sectors/track 63
sector size logical 512, physical 512, offset 0
LBA supported 268435455 sectors
LBA48 supported 3907029168 sectors
PIO supported PIO4
DMA supported WDMA2 UDMA6
media RPM 7200

Feature Support Enable Value Vendor
read ahead yes yes
write cache yes yes
flush cache yes yes
overlap no
Tagged Command Queuing (TCQ) no no
Native Command Queuing (NCQ) yes 32 tags
SMART yes yes
microcode download yes yes
security yes no
power management yes yes
advanced power management yes no 0/0x00
automatic acoustic management yes no 254/0xFE 128/0x80
media status notification no no
power-up in Standby yes no
write-read-verify no no 0/0x0
unload no no
free-fall no no
data set management (TRIM) no


Does this support NCQ?

--
Dan Langille - http://langille.org/


Re: Using GTP and glabel for ZFS arrays

2010-07-24 Thread Jeremy Chadwick
On Sat, Jul 24, 2010 at 12:12:54PM -0400, Dan Langille wrote:
 On 7/22/2010 4:11 AM, Dan Langille wrote:
 [ ... nested quoting and most of the camcontrol identify output
 trimmed; the full listing is in the parent message ... ]

  # camcontrol identify ada0
  pass0: Hitachi HDS722020ALA330 JKAOA28A ATA-8 SATA 2.x device
  [ ... ]
  Native Command Queuing (NCQ) yes 32 tags
 
 Does this support NCQ?

Does *what* support NCQ?  The output above, despite having lost its
whitespace formatting, indicates the drive does support NCQ and due to
using CAM (via ahci.ko or siis.ko) has NCQ in use:

 Native Command Queuing (NCQ) yes 32 tags

A binary verification (does it/does it not) is also visible in your
kernel log, ex:

ada2: Command Queueing enabled

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: Using GTP and glabel for ZFS arrays

2010-07-24 Thread Dan Langille

On 7/23/2010 7:42 AM, John Hawkes-Reed wrote:

Dan Langille wrote:

Thank you all for the helpful discussion. It's been very educational.
Based on the advice and suggestions, I'm going to adjust my original
plan as follows.


[ ... ]

Since I still have the medium-sized ZFS array on the bench, testing this
GPT setup seemed like a good idea.

The hardware's a Supermicro X8DTL-iF m/b + 12GB memory, 2x 5502 Xeons,
3x Supermicro USASLP-L8I 3G SAS controllers and 24x Hitachi 2TB drives.

Partitioning the drives with the command-line:
gpart add -s 1800G -t freebsd-zfs -l disk00 da0[1] gave the following
results with bonnie-64: (Bonnie -r -s 5000|2|5)[2]


What test is this?  I just installed benchmarks/bonnie and I see no -r 
option.  Right now, I'm trying this: bonnie -s 5



--
Dan Langille - http://langille.org/


Re: Using GTP and glabel for ZFS arrays

2010-07-24 Thread John Hawkes-Reed

On 24/07/2010 21:35, Dan Langille wrote:

On 7/23/2010 7:42 AM, John Hawkes-Reed wrote:

[ ... quoted benchmark setup trimmed; details are in the parent
message ... ]


What test is this? I just installed benchmarks/bonnie and I see no -r
option. Right now, I'm trying this: bonnie -s 5


http://code.google.com/p/bonnie-64/


--
JH-R


gpart -b 34 versus gpart -b 1024

2010-07-24 Thread Dan Langille
You may have seen my cunning plan: 
http://docs.freebsd.org/cgi/getmsg.cgi?fetch=883310+0+current/freebsd-stable


I've been doing some testing today.  The first of my tests, comparing 
partitions aligned on a 4KB boundary, is in.  I created a 5x2TB zpool, 
where each drive was set up like this:


gpart add -b 1024 -s 3906824301 -t freebsd-zfs -l disk01 ada1
or
gpart add -b   34 -s 3906824301 -t freebsd-zfs -l disk01 ada1

Repeat for all 5 HDDs.  And then:

zpool create storage raidz2 gpt/disk01 gpt/disk02 gpt/disk03 gpt/disk04 
gpt/disk05
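Whether a given -b value lands on a 4 KiB boundary is simple arithmetic on the
512-byte starting sector; this quick check (my addition, not part of the
original test run) shows why -b 34 is the suspect setting:

```shell
# A start LBA (in 512-byte sectors) is 4 KiB-aligned iff (lba * 512) % 4096 == 0.
for b in 34 1024 2048; do
    if [ $(( b * 512 % 4096 )) -eq 0 ]; then
        echo "-b $b: 4K-aligned"
    else
        echo "-b $b: NOT 4K-aligned"   # 34 * 512 = 17408 = 4*4096 + 1024
    fi
done
```

-b 34 starts 1024 bytes past a 4 KiB boundary, while -b 1024 and -b 2048 are
both aligned.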


Two Bonnie-64 tests:

First, with -b 34:

# ~dan/bonnie-64-read-only/Bonnie -s 5000
File './Bonnie.12315', size: 524288
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...
   ---Sequential Output ---Sequential Input-- --Random--
   -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
 5 110.6 80.5 115.3 15.1  60.9  8.5  68.8 46.2 326.7 15.3   469  1.4




And then with -b 1024

# ~dan/bonnie-64-read-only/Bonnie -s 5000
File './Bonnie.21095', size: 524288
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...
   ---Sequential Output ---Sequential Input-- --Random--
   -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
 5 130.9 94.2 118.3 15.6  61.1  8.5  70.1 46.8 241.2 12.7   473  1.4


My reading of this: all M/sec rates are faster except sequential input.
Comments?


I'll run -s 2 and -s 5 tests overnight and will post them in the 
morning.


Sunday, I'll try creating a 7x2TB array consisting of 5 HDDs and two 
sparse files and see how that goes. Here's hoping.
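As a sketch of that Sunday experiment (not the exact commands; the paths and
labels are placeholders), ZFS can take files directly as vdevs, so the two
stand-in "disks" can simply be sparse files:

```shell
# Sketch only: 5 real GPT-labeled disks plus two sparse-file stand-ins.
truncate -s 1862G /tmp/fake1.img      # ~2TB-class; 1.5T would also do
truncate -s 1862G /tmp/fake2.img

if command -v zpool >/dev/null 2>&1; then      # FreeBSD/ZFS-only section
    zpool create storage raidz2 \
        gpt/disk01 gpt/disk02 gpt/disk03 gpt/disk04 gpt/disk05 \
        /tmp/fake1.img /tmp/fake2.img
    # Take the fakes out of service before any real data lands on them:
    zpool offline storage /tmp/fake1.img
    zpool offline storage /tmp/fake2.img
fi
```

With both file vdevs offline, the RAIDZ2 pool keeps working but has zero
remaining redundancy, as noted earlier in the thread.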


Full logs here, including a number of panics:

  http://beta.freebsddiary.org/zfs-with-gpart.php

--
Dan Langille - http://langille.org/


Re: gpart -b 34 versus gpart -b 1024

2010-07-24 Thread Dan Langille

On 7/24/2010 10:44 PM, Dan Langille wrote:


I'll run -s 2 and -s 5 tests overnight and will post them in the
morning.


The -s 2 results are in:

-b 34:

   ---Sequential Output ---Sequential Input-- --Random--
   -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
20 114.1 82.7 110.9 14.1  62.5  8.9  73.1 48.8 153.6  9.9   195  0.9

-b 1024:

   ---Sequential Output ---Sequential Input-- --Random--
   -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
20 111.0 81.2 114.7 15.1  62.6  8.9  71.9 47.9 135.3  8.7   180  1.1


Hmmm, seems like the first test was better...

--
Dan Langille - http://langille.org/


Re: gpart -b 34 versus gpart -b 1024

2010-07-24 Thread Dan Langille

On 7/24/2010 10:44 PM, Dan Langille wrote:

You may have seen my cunning plan:
http://docs.freebsd.org/cgi/getmsg.cgi?fetch=883310+0+current/freebsd-stable

[ ... -b 34 vs -b 1024 setup and the -s 5000 results trimmed; see the
parent message ... ]

I'll run -s 2 and -s 5 tests overnight and will post them in the
morning.


Well, it seems I'm not sleeping yet, so:

-b 34

   ---Sequential Output ---Sequential Input-- --Random--
   -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
50 113.1 82.4 114.6 15.2  63.4  8.9  72.7 48.2 142.2  9.5   126  0.7


-b 1024
   ---Sequential Output ---Sequential Input-- --Random--
   -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
50 110.5 81.0 112.8 15.0  62.8  9.0  72.9 48.5 139.7  9.5   144  0.9

Here, the results aren't much better either...  Am I not aligning this 
partition correctly?  Am I missing something else?  Or... are they both 
4K-aligned already?


--
Dan Langille - http://langille.org/


Re: gpart -b 34 versus gpart -b 1024

2010-07-24 Thread Adam Vande More
On Sat, Jul 24, 2010 at 10:58 PM, Dan Langille d...@langille.org wrote:

   ---Sequential Output ---Sequential Input-- --Random--
   -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
50 110.5 81.0 112.8 15.0  62.8  9.0  72.9 48.5 139.7  9.5   144  0.9

 Here, the results aren't much better either...  am I not aligning this
 partition correctly?  Missing something else?  Or... are they both 4K block
 aligned?


The alignment issue doesn't apply to all drives, just the 4K-sector WDs and
some SSDs.

If they were misaligned, you would see a large difference in the tests.  A
few points one way or the other here is largely meaningless.

That being said, if I were you I would set -b 2048 (1 MiB) as the default;
the amount of space wasted is trivial and your partition will always be
aligned.  People following your tutorials may have a variety of different
drives, and that setting is safe for all of them.
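A minimal sketch of that recommendation (device ada1 and label disk01 are
placeholders, and the commands assume a disk you are free to repartition):

```shell
# Sketch only: 1 MiB-aligned ZFS partition per the advice above.
if command -v gpart >/dev/null 2>&1; then      # FreeBSD only
    gpart create -s gpt ada1                   # errors harmlessly if a GPT exists
    gpart add -b 2048 -t freebsd-zfs -l disk01 ada1
fi
echo "offset: $(( 2048 * 512 )) bytes"         # 1048576 bytes = 1 MiB
```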

Windows defaults to this offset for the same reason:

DISKPART> list partition

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Primary           1116 GB  1024 KB



-- 
Adam Vande More