On Thu, Nov 26, 2015 at 07:43:50PM +0700, Robert Elz wrote:
>
> | LVM scans for devices and has a filter regex configured in
> | /etc/lvm/lvm.conf.
>
> OK, I think that wasn't the problem ... I tried everything I could think
> of as the filter list in there, with and without /dev/ (and with
Date: Thu, 26 Nov 2015 23:34:37 +0100
From: Michael van Elst
Message-ID: <20151126223436.ga...@serpens.de>
| This list will scan only wedges for PVs.
| filter = [ "a|rdk[0-9]*|", "r|.*|" ]
Yes, as you may have seen from a later message,
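The quoted filter belongs in the devices section of /etc/lvm/lvm.conf. A minimal sketch of the surrounding context (the section wrapper is standard LVM config syntax; the pattern itself is the one quoted above):

```
devices {
    # Accept only raw wedge devices (rdk0, rdk1, ...) as PVs,
    # reject everything else.
    filter = [ "a|rdk[0-9]*|", "r|.*|" ]
}
```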
Date: Thu, 26 Nov 2015 23:58:17 +0100
From: Michael van Elst
Message-ID: <20151126225816.gb...@serpens.de>
| On Thu, Nov 26, 2015 at 06:29:30PM +0700, Robert Elz wrote:
|
| > Just try making a ccd by combining a 512 byte sector drive and a 4K
Date: Thu, 26 Nov 2015 23:59:32 +0000 (UTC)
From: mlel...@serpens.de (Michael van Elst)
Message-ID:
| That's the responsibility of the upper layers. FFS will only do
| fragment size I/O (+ 8k I/O for the superblock).
But how does the
k...@munnari.oz.au (Robert Elz) writes:
> | But you can fake the value with pvcreate --setphysicalvolumesize.
>Will that really work? That is, if it is done that way, won't the
>"device" (the pv or whatever) appear to be a 512 byte sector "drive" ?
>If it does, what prevents i/o in 512 byte
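The flag under discussion can be sketched like this (the device name and size here are hypothetical examples; --setphysicalvolumesize is a standard pvcreate option):

```
# Initialize the wedge as a PV, overriding the detected size.
# Note: faking the size does not change the sector size the PV reports.
pvcreate --setphysicalvolumesize 931G /dev/rdk3
```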
k...@munnari.oz.au (Robert Elz) writes:
>But how does the upper layer know what it is supposed to do if the
>information has been buried?
The newfs command queries the sector size, calculates
the filesystem parameters and puts them into the superblock.
> FFS won't (or shouldn't) allow frag
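The newfs behaviour described above can also be forced by hand. A hedged sketch (the wedge name is hypothetical; -S, -b and -f are the standard NetBSD newfs options for sector, block and fragment size):

```
# 32 KiB blocks with 4 KiB frags (frag = block/8) suit a 4K-sector disk;
# newfs would normally derive these from the queried sector size.
newfs -S 4096 -b 32768 -f 4096 /dev/rdk0
```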
On Thu, Nov 26, 2015 at 08:57:24AM +0000, Michael van Elst wrote:
> w...@hiwaay.net ("William A. Mahaffey III") writes:
>
> >H I thought that the RAID5 would write 1 parity byte & 4 data
> >bytes in parallel, i.e. no '1 drive bottleneck'.
>
> That only happens when the "4 data bytes"
On Thu, Nov 26, 2015 at 06:29:30PM +0700, Robert Elz wrote:
> Just try making a ccd by combining a 512 byte sector drive and a 4K
> sector drive, and watch what happens...
CCD is a very old device that isn't even configured "correctly" and
I would be very surprised if it could concat drives of
Date: Fri, 27 Nov 2015 01:15:15 +0000 (UTC)
From: mlel...@serpens.de (Michael van Elst)
Message-ID:
| The newfs command [...]
I'm going to reply to this, but in the thread on tech-kern ...
This thread, when it started, was on a topic
w...@hiwaay.net ("William A. Mahaffey III") writes:
>H I thought that the RAID5 would write 1 parity byte & 4 data
>bytes in parallel, i.e. no '1 drive bottleneck'.
That only happens when the "4 data bytes" (actually the whole stripe)
gets written in one operation.
Unfortunately this
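The point above, that parity comes for free only when the whole stripe is written, can be sketched with a little arithmetic (a simplified model, not RAIDframe's actual code; the four data disks and 64 KiB chunk size are assumptions for illustration):

```python
def needs_read_modify_write(write_bytes: int, data_disks: int = 4,
                            chunk_bytes: int = 64 * 1024) -> bool:
    """A write avoids the parity read-modify-write penalty only when it
    covers whole stripes, so parity can be computed from the data being
    written without reading old data or old parity back first."""
    stripe_bytes = data_disks * chunk_bytes
    return write_bytes % stripe_bytes != 0

# A full-stripe write: parity computed from the new data alone.
print(needs_read_modify_write(256 * 1024))   # False
# A partial write: old data and parity must be read back first.
print(needs_read_modify_write(64 * 1024))    # True
```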
g...@ir.bbn.com (Greg Troxel) writes:
>Is there a clear motivation for 2048 vs 64? Are there any known
>devices where that matters?
2048 is the Windows value. It's not unlikely that software
starts to depend on it (as it did with the previous magic
value 63).
--
k...@munnari.oz.au (Robert Elz) writes:
>work, it just said (something like, paraphrasing, that system isn't
>running any more) "invalid device or disabled by filtering"
LVM scans for devices and has a filter regex configured in
/etc/lvm/lvm.conf.
>The kernel does some stuff right for non 512
On Thu, Nov 26, 2015 at 06:45:04AM +0700, Robert Elz wrote:
> Date: Wed, 25 Nov 2015 15:59:29 -0600
> From: Greg Oster
> Message-ID: <20151125155929.2a5f2...@mickey.usask.ca>
>
> | time dd if=/dev/zero of=/home/testfile bs=64k count=32768
> |
Date: Thu, 26 Nov 2015 09:11:22 +0000 (UTC)
From: mlel...@serpens.de (Michael van Elst)
Message-ID:
| LVM scans for devices and has a filter regex configured in
| /etc/lvm/lvm.conf.
OK, thanks, I'll look ... the lvm doc looks like it is
Date: Thu, 26 Nov 2015 09:11:22 +0000 (UTC)
From: mlel...@serpens.de (Michael van Elst)
Message-ID:
| LVM scans for devices and has a filter regex configured in
| /etc/lvm/lvm.conf.
For me, /etc/lvm is empty, there is no lvm.conf. If I
Swift Griggs writes:
> On Wed, 25 Nov 2015, Greg Troxel wrote:
>> We're seeing smaller disks with 4K sectors or larger flash erase
>> blocks and 512B interfaces now.
>
> Those larger erase blocks (128k?!) would seem to be a big problem if
> you'd rather stick to a
Date: Thu, 26 Nov 2015 07:57:30 -0500
From: Greg Troxel
Message-ID:
| I think 4KB is not because it's the smallest that's workable efficiency-wise,
| but because there is a fragsize which is blocksize/8, and a
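The fragsize relationship quoted above is simple to state in code (a sketch of the FFS convention that the smallest fragment is block/8; the function name is ours, not an FFS API):

```python
def ffs_frag_size(block_size: int) -> int:
    """FFS allows up to 8 fragments per block, so the smallest
    fragment size is block_size / 8."""
    return block_size // 8

# With 32 KiB blocks the frag is 4 KiB, matching a 4K-sector disk.
print(ffs_frag_size(32 * 1024))  # 4096
```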
On Wed, 25 Nov 2015, Andreas Gustafsson wrote:
They don't have sectors so much as flash pages, and the page size varies
from device to device.
I'm curious about something, probably due to ignorance of the full
dynamics of the vfs(9) layer. Why is it that folks don't choose file
system block
On Tue, 24 Nov 2015 21:57:50 -0553.75
"William A. Mahaffey III" wrote:
> On 11/24/15 19:08, Robert Elz wrote:
> > Date: Mon, 23 Nov 2015 11:18:48 -0553.75
> > From: "William A. Mahaffey III"
> > Message-ID:
Swift Griggs writes:
> I'm curious about something, probably due to ignorance of the full
> dynamics of the vfs(9) layer. Why is it that folks don't choose file
> system block sizes and partition offsets that are least-common-factors
> that they share with the hardware
On Wed, 25 Nov 2015, Greg Troxel wrote:
So there are two issues: alignment and filesystem block/frag size, and
both have to be ok.
Ahh, a key point to be certain.
So that's ok, but alignment is messier.
It sure seems that way! :-)
We're seeing smaller disks with 4K sectors or larger
Date: Wed, 25 Nov 2015 15:59:29 -0600
From: Greg Oster
Message-ID: <20151125155929.2a5f2...@mickey.usask.ca>
| time dd if=/dev/zero of=/home/testfile bs=64k count=32768
| time dd if=/dev/zero of=/home/testfile bs=10240k count=32768
|
| so
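The two dd invocations quoted above write very different amounts; the arithmetic is worth spelling out (bs × count, with dd's "k" suffix meaning 1024):

```python
def dd_total_bytes(bs_kib: int, count: int) -> int:
    """Total bytes written by dd with bs=<bs_kib>k and the given count."""
    return bs_kib * 1024 * count

# bs=64k count=32768 writes 2 GiB ...
print(dd_total_bytes(64, 32768) // 2**30)     # 2
# ... while bs=10240k count=32768 writes 320 GiB.
print(dd_total_bytes(10240, 32768) // 2**30)  # 320
```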
On 11/25/15 16:05, Greg Oster wrote:
On Thu, 26 Nov 2015 04:41:02 +0700
Robert Elz wrote:
Date: Wed, 25 Nov 2015 14:57:02 -0553.75
From: "William A. Mahaffey III"
Message-ID: <56561f54.5040...@hiwaay.net>
| f:
Date: Wed, 25 Nov 2015 19:08:59 -0553.75
From: "William A. Mahaffey III"
Message-ID: <56565a61.7080...@hiwaay.net>
| The other command is still running, will write out 320 GB by my count,
| is that as intended, or a typo :-) ? If as wanted, I will
Date: Wed, 25 Nov 2015 12:29:15 -0500
From: Greg Troxel
Message-ID:
| And, there are also disks with native 4K sectors, where the interface to
| the computer transfers 4K chunks. That avoids the alignment issue,
Date: Wed, 25 Nov 2015 08:10:50 -0553.75
From: "William A. Mahaffey III"
Message-ID: <5655c020.5090...@hiwaay.net>
In addition to what I said in the previous message ...
| H I thought that the RAID5 would write 1 parity byte & 4 data
|
Date: Thu, 26 Nov 2015 01:41:00 +0700
From: Robert Elz
Message-ID: <23815.1448476...@andromeda.noi.kre.to>
| so I'd just add
|
| raidctl -a /dev/wd5f raid2
|
| in /etc/rc.local
Actually, a better way short term, is probably to put
Date: Wed, 25 Nov 2015 14:57:02 -0553.75
From: "William A. Mahaffey III"
Message-ID: <56561f54.5040...@hiwaay.net>
| f: 1886414256 67110912 RAID # (Cyl. 66578*-
| 1938020)
OK, 67110912 is a multiple of 2^11 (2048)
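That divisibility claim is easy to check (the offset is in 512-byte sectors, so 2048-sector alignment means 1 MiB alignment, which implies 4 KiB alignment too):

```python
offset_sectors = 67110912

# 2048 sectors of 512 bytes = 1 MiB; a multiple of 2048 sectors is
# therefore aligned to 1 MiB (and so also to 4 KiB).
print(offset_sectors % 2048 == 0)                  # True
print(offset_sectors * 512 % (1024 * 1024) == 0)   # True
```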
Date: Wed, 25 Nov 2015 13:20:14 -0700 (MST)
From: Swift Griggs
Message-ID:
| I wonder if the same is true for LVM?
No idea. I thought it should be easy enough to test, so I just
On Thu, 26 Nov 2015 04:41:02 +0700
Robert Elz wrote:
> Date: Wed, 25 Nov 2015 14:57:02 -0553.75
> From: "William A. Mahaffey III"
> Message-ID: <56561f54.5040...@hiwaay.net>
>
>
> | f: 1886414256 67110912 RAID
On 11/25/15 12:14, Robert Elz wrote:
Date: Wed, 25 Nov 2015 10:52:30 -0600
From: Greg Oster
Message-ID: <20151125105230.209c5...@mickey.usask.ca>
| Just to recap: You have a RAID set that is not 4K aligned with the
| underlying disks.
On 11/25/15 14:26, Swift Griggs wrote:
On Thu, 26 Nov 2015, Robert Elz wrote:
FFS is OK on NetBSD-7 (not sure about LFS or others, never tried
them). Raidframe might be (haven't looked) but both cgd and ccd are a
mess...
I wonder if the same is true for LVM? Since it's relatively new,
On Thu, 26 Nov 2015 01:08:21 +0700
Robert Elz wrote:
> Date: Wed, 25 Nov 2015 10:52:30 -0600
> From: Greg Oster
> Message-ID: <20151125105230.209c5...@mickey.usask.ca>
>
> | Just to recap: You have a RAID set that is not 4K
On Wed, 25 Nov 2015, William A. Mahaffey III wrote:
While LVM may have been designed by committee, I am pretty sure it was
originally an SGI committee, & seems pretty good to me as well.
As a guy who still supports ancient Unix platforms every day, I'll tell
you that IRIX categorically rocks
On 11/25/15 12:47, Robert Elz wrote:
The real reason I wanted to reply to this message is that last line.
wd5 is not being used as a spare. I kind of suspected that might be the case.
(Parts of it might be used for raid0 or raid1, that's a whole different
question and not material here).
On 11/25/15 19:36, Robert Elz wrote:
Date: Wed, 25 Nov 2015 19:08:59 -0553.75
From: "William A. Mahaffey III"
Message-ID: <56565a61.7080...@hiwaay.net>
| The other command is still running, will write out 320 GB by my count,
| is that as
Greg Troxel wrote:
> > I would go further than that. Alignment is not only an issue with 4K
> > sector disks, but also with SSDs, USB sticks, and SD cards, all of
> > which are being deployed in sizes smaller than 128 GB even today.
>
> I didn't realize that. Do these devices have 4K native
On 11/25/15 00:30, Robert Elz wrote:
Date: Tue, 24 Nov 2015 21:57:50 -0553.75
From: "William A. Mahaffey III"
Message-ID: <56553074.9060...@hiwaay.net>
| 4256EE1 # time dd if=/dev/zero of=/home/testfile bs=16k count=32768
| 32768+0 records
Robert Elz writes:
> Date: Mon, 23 Nov 2015 11:18:48 -0553.75
> From: "William A. Mahaffey III"
> Message-ID: <5653492e.1090...@hiwaay.net>
>
> Much of what you wanted to know has been answered already I think, but
> not
Greg Troxel wrote:
> The other thing would be to change the alignment threshold to 128G.
> Even that's big enough that 1M not used by default is not important.
> And of course people who care can do whatever they want anyway.
I would go further than that. Alignment is not only an issue with 4K
Andreas Gustafsson writes:
> Greg Troxel wrote:
>> The other thing would be to change the alignment threshold to 128G.
>> Even that's big enough that 1M not used by default is not important.
>> And of course people who care can do whatever they want anyway.
>
> I would go further
Date: Mon, 23 Nov 2015 11:18:48 -0553.75
From: "William A. Mahaffey III"
Message-ID: <5653492e.1090...@hiwaay.net>
Much of what you wanted to know has been answered already I think, but
not everything, so
(in a different order than they were in
On 11/24/15 19:08, Robert Elz wrote:
Date: Mon, 23 Nov 2015 11:18:48 -0553.75
From: "William A. Mahaffey III"
Message-ID: <5653492e.1090...@hiwaay.net>
Much of what you wanted to know has been answered already I think, but
not everything, so
Date: Tue, 24 Nov 2015 21:57:50 -0553.75
From: "William A. Mahaffey III"
Message-ID: <56553074.9060...@hiwaay.net>
| 4256EE1 # time dd if=/dev/zero of=/home/testfile bs=16k count=32768
| 32768+0 records in
| 32768+0 records out
| 536870912
Swift Griggs writes:
> On Tue, 24 Nov 2015, Felix Deichmann wrote:
>> You can probably save a lot of time, tries and headache by doing a
>> fresh installation of your server with correct alignment and block
>> sizes...
>
> Is there a procedure online somewhere for
On Tue, 24 Nov 2015, Felix Deichmann wrote:
You can probably save a lot of time, tries and headache by doing a fresh
installation of your server with correct alignment and block sizes...
Is there a procedure online somewhere for verifying that you have properly
aligned your file system on a
Am 24.11.2015 um 15:32 schrieb William A. Mahaffey III:
Physical sector size: 4096 bytes
As Manuel already stated, you have disks with 4K sectors.
Could you expand a bit further on those tools :-) ?
All I know is from reading about it, it's called "Acronis
On Tue, Nov 24, 2015 at 08:39:19AM -0553, William A. Mahaffey III wrote:
> [...]
> Physical sector size: 4096 bytes
So yes you should align your partitions to 4k (both on individual drives
and at the RAID level), and also use an ffs block size that is a multiple of 4k.
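The alignment rule above boils down to one check on the partition offset (a sketch; offsets here are in 512-byte sectors, so 4K alignment means the byte offset is a multiple of 4096):

```python
def is_4k_aligned(offset_sectors: int, sector_bytes: int = 512) -> bool:
    """True when the partition's byte offset falls on a 4096-byte boundary."""
    return (offset_sectors * sector_bytes) % 4096 == 0

# The traditional MBR offset of 63 sectors is misaligned;
# the modern default of 2048 sectors (1 MiB) is fine.
print(is_4k_aligned(63))    # False
print(is_4k_aligned(2048))  # True
```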
--
On 11/24/15 08:47, Manuel Bouyer wrote:
On Tue, Nov 24, 2015 at 08:39:19AM -0553, William A. Mahaffey III wrote:
[...]
Physical sector size: 4096 bytes
So yes you should align your partitions to 4k (both on individual drives
and at the RAID level), and also use
On 11/24/15 00:01, Felix Deichmann wrote:
2015-11-23 18:11 GMT+01:00 William A. Mahaffey III:
[his alignment]
Partitions aligned to 2048 sector boundaries, offset 2048 <-- *DING DING
DING*
[your alignment]
Partitions aligned to 16065 sector boundaries, offset 63
On 11/23/15 11:36, Stephen Borrill wrote:
On Mon, 23 Nov 2015, William A. Mahaffey III wrote:
I have a small server running NetBSD 6.1.5, set up last summer with a
combination of help onlist (*THANKS*) & an online tutorial I found
here:
2015-11-23 18:11 GMT+01:00 William A. Mahaffey III:
> [his alignment]
>
> Partitions aligned to 2048 sector boundaries, offset 2048 <-- *DING DING
> DING*
>
> [your alignment]
>
> Partitions aligned to 16065 sector boundaries, offset 63 <- *DING DING
> DING*
My
I have a small server running NetBSD 6.1.5, set up last summer with a
combination of help onlist (*THANKS*) & an online tutorial I found here:
http://abs0d.blogspot.com/2011/08/setting-up-8tb-netbsd-file-server.html
(watch for line wrap). He set up a 5-HDD box, I used 6 HDDs, he used 2TB
On Mon, 23 Nov 2015, William A. Mahaffey III wrote:
I have a small server running NetBSD 6.1.5, set up last summer with a
combination of help onlist (*THANKS*) & an online tutorial I found here:
http://abs0d.blogspot.com/2011/08/setting-up-8tb-netbsd-file-server.html
(watch for line wrap). He