Re: proper newfs options for SSD disk

2012-05-23 Thread Tim Kientzle

On May 22, 2012, at 7:40 AM, Warren Block wrote:

> On Tue, 22 May 2012, Matthias Apitz wrote:
> 
>> On Tuesday, May 22, 2012 at 07:42:18AM -0600, Warren Block wrote:
>> 
>>> On Tue, 22 May 2012, Matthias Apitz wrote:
>>> 
>>>> On Sunday, May 20, 2012 at 03:36:01AM +0900, rozhuk...@gmail.com
>>>> wrote:
>>>>> 
>>>>> Do not use MBR (or do all the alignment manually).
>>>>> 63 is not 4k-aligned.
>>>> 
>>>> To create the partition layout shown above I did not use gpart(8); I
>>>> just ran:
>>>> 
>>>>   # fdisk -I /dev/ada0
>>>>   # fdisk -B /dev/ada0
>>>> 
>>>> ...
>>>> What is wrong with this procedure?
>>> 
>>> The filesystem partitions end up at locations that aren't even multiples
>>> of 4K.  This can reduce performance.  How much probably depends on the
>>> SSD.
>> 
>> But then this is a bug in fdisk(8) rather than a PEBKAC, isn't it? :-)
> 
> A bug in the design of MBR.  Which probably can be forgiven, considering when 
> it was created and the other problems with it. :)
> 
> gpart's alignment option can be used with MBR slices and bsdlabel partitions.

gpart's alignment option doesn't work for MBR slices.
It rounds to the requested alignment, and then rounds again
to the track size, which defaults to 63 sectors.

I'm not convinced this is a bug in the design of MBR.  I don't
think anything in the MBR design requires that partitions
be track-aligned.
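
For what it's worth, one way around the double rounding (untested
here, so treat it as a sketch) is to give gpart an explicit start
sector that is already a multiple of both 63 and 8; lcm(63, 8) = 504
is the smallest such value, so neither rounding pass will move it:

  # gpart create -s MBR ada0
  # gpart add -t freebsd -b 504 ada0

504 sectors is 504 * 512 = 258048 bytes, i.e. exactly eight 63-sector
tracks and also a multiple of 4K (258048 / 4096 = 63).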

Tim



Re: Please help me diagnose this crazy VMWare/FreeBSD 8.x crash

2012-05-23 Thread Adrian Chadd
Hi,

Can you please, -please- file a PR? And put all of the above
information in it so we don't lose it?

If this is indeed the problem, then I really think we should root
cause why the driver and/or interrupt handling code is getting angry
with the shared interrupt.

I'd also appreciate it if you and the other people who can reproduce
this could work with the em/mpt driver people and root cause why this
is happening. I think having FreeBSD on VMware work stably out of the
box, without these kinds of tweaks, is the way to go - who knows what
else is lurking here...

I'm very, very glad you've persisted with this, and if I had them, I'd
send you a "FreeBSD persistent bug reporter!" t-shirt.

Thanks,


Adrian


Re: ARM + CACHE_LINE_SIZE + DMA

2012-05-23 Thread Warner Losh
Hi Svatopluk,

That looks very interesting.

You may be interested in the efforts of various people to bring up the armv6 
multi-core boards.

You can check out the source from http://svn.freebsd.org/base/projects/armv6 to 
see where we are in that effort.  I believe that many of these issues have been 
addressed.  Perhaps you could take a look and contribute to any areas that are 
incomplete rather than starting from scratch?
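
For example (a plain Subversion checkout; the target directory name is
up to you):

  % svn checkout http://svn.freebsd.org/base/projects/armv6 armv6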

Hope you are doing well!  We need more people who truly understand the ARM 
cache issues.

Warner


On May 23, 2012, at 7:13 AM, Svatopluk Kraus wrote:

> Hi,
> 
> With respect to your replies, and among other things, the following
> summary can be made:
> 
> There are three kinds of DMA buffers according to their origin:
> 
> 1. driver buffers
> As Alexander wrote, the buffers should be allocated by
> bus_dmamem_alloc(). The function should be implemented to allocate the
> buffers correctly aligned, with the help of the bus_dma_tag_t. For these
> buffers, we can avoid bouncing entirely just by correct driver
> implementation. For badly implemented drivers, the bouncing penalty is
> paid in the case of unaligned buffers. For BUS_DMA_COHERENT allocations,
> as Mark wrote, an allocation pool of coherent pages is a good
> optimization.
> 
> 2. well-known system buffers
> Mbufs and vfs buffers. These buffers should be aligned to
> CACHE_LINE_SIZE (both start and size).
> That should be enough for vfs buffers, as they carry data only and
> only whole buffers should be accessed by DMA. The mbuf is a structure,
> and data can be carried in three possible locations. The first one,
> the external buffer, should be aligned to CACHE_LINE_SIZE. The other
> two locations, which are parts of the mbuf structure itself, can be
> unaligned in general. If we assume that no one else writes to any
> part of the mbuf during DMA access, we can set a BUS_DMA_UNALIGNED_SAFE
> flag in the mbuf load functions, i.e., we don't bounce unaligned buffers
> if the flag is set in the dmamap. A tunable can be implemented to suppress
> the flag for debugging purposes.
> 
> 3. other buffers
> As we know nothing about these buffers, we must always bounce the unaligned ones.
> 
> Just two more notes. A DMA buffer should not be accessed by anyone
> (except the DMA itself) after PRESYNC and before POSTSYNC. For DMA
> descriptors (for example), using bus_dmamem_alloc() with the
> BUS_DMA_COHERENT flag may be unavoidable.
> 
> As I'm implementing bus dma for the ARM11 MPCore, I'm doing it with the
> following assumptions:
> 1. ARMv6k and higher
> 2. PIPT data cache
> 3. SMP ready
> 
> Svata



Re: ARM + CACHE_LINE_SIZE + DMA

2012-05-23 Thread Svatopluk Kraus
Hi,

With respect to your replies, and among other things, the following
summary can be made:

There are three kinds of DMA buffers according to their origin:

1. driver buffers
As Alexander wrote, the buffers should be allocated by
bus_dmamem_alloc(). The function should be implemented to allocate the
buffers correctly aligned, with the help of the bus_dma_tag_t. For these
buffers, we can avoid bouncing entirely just by correct driver
implementation. For badly implemented drivers, the bouncing penalty is
paid in the case of unaligned buffers. For BUS_DMA_COHERENT allocations,
as Mark wrote, an allocation pool of coherent pages is a good
optimization.

2. well-known system buffers
Mbufs and vfs buffers. These buffers should be aligned to
CACHE_LINE_SIZE (both start and size).
That should be enough for vfs buffers, as they carry data only and
only whole buffers should be accessed by DMA. The mbuf is a structure,
and data can be carried in three possible locations. The first one,
the external buffer, should be aligned to CACHE_LINE_SIZE. The other
two locations, which are parts of the mbuf structure itself, can be
unaligned in general. If we assume that no one else writes to any
part of the mbuf during DMA access, we can set a BUS_DMA_UNALIGNED_SAFE
flag in the mbuf load functions, i.e., we don't bounce unaligned buffers
if the flag is set in the dmamap. A tunable can be implemented to suppress
the flag for debugging purposes.

3. other buffers
As we know nothing about these buffers, we must always bounce the unaligned ones.
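
To make the intended check concrete, here is a minimal sketch of the
bounce decision (the BUS_DMA_UNALIGNED_SAFE flag is only proposed in
this mail, and "map->flags" assumes one particular machine-dependent
bus_dmamap layout):

static int
must_bounce(bus_dmamap_t map, bus_addr_t addr, bus_size_t len)
{

	/* Start and size both cache-line aligned: never bounce. */
	if (((addr | len) & (CACHE_LINE_SIZE - 1)) == 0)
		return (0);
	/* Unaligned, but the owner promised no concurrent writes. */
	if (map->flags & BUS_DMA_UNALIGNED_SAFE)
		return (0);
	/* An unknown unaligned buffer: bounce it. */
	return (1);
}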

Just two more notes. A DMA buffer should not be accessed by anyone
(except the DMA itself) after PRESYNC and before POSTSYNC. For DMA
descriptors (for example), using bus_dmamem_alloc() with the
BUS_DMA_COHERENT flag may be unavoidable.
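
For the driver-buffer case, a minimal sketch of what a correctly
written driver would do under these assumptions (standard busdma
calls; error unwinding omitted for brevity):

#include <sys/param.h>
#include <sys/bus.h>
#include <machine/bus.h>

struct desc_ring {
	bus_dma_tag_t	tag;
	bus_dmamap_t	map;
	void		*vaddr;		/* KVA of the descriptors */
	bus_addr_t	paddr;		/* bus address for the device */
};

/* bus_dmamap_load() hands the bus address to a callback. */
static void
ring_load_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
{

	if (error == 0)
		*(bus_addr_t *)arg = segs[0].ds_addr;
}

static int
ring_create(device_t dev, struct desc_ring *r, bus_size_t size)
{
	int error;

	/* Alignment is requested through the tag, as noted above. */
	error = bus_dma_tag_create(bus_get_dma_tag(dev),
	    CACHE_LINE_SIZE, 0,			/* alignment, boundary */
	    BUS_SPACE_MAXADDR, BUS_SPACE_MAXADDR,
	    NULL, NULL,				/* filter, filterarg */
	    size, 1, size,			/* maxsize, nsegs, maxsegsz */
	    0, NULL, NULL, &r->tag);
	if (error != 0)
		return (error);
	/* Coherent memory: no bouncing, no cache maintenance on sync. */
	error = bus_dmamem_alloc(r->tag, &r->vaddr,
	    BUS_DMA_COHERENT | BUS_DMA_ZERO, &r->map);
	if (error != 0)
		return (error);
	return (bus_dmamap_load(r->tag, r->map, r->vaddr, size,
	    ring_load_cb, &r->paddr, BUS_DMA_NOWAIT));
}

Even with a coherent mapping, the PREWRITE/POSTREAD bus_dmamap_sync()
calls should still bracket each transaction; the implementation can
then reduce them to no-ops or a write-buffer drain.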

As I'm implementing bus dma for the ARM11 MPCore, I'm doing it with the following assumptions:
1. ARMv6k and higher
2. PIPT data cache
3. SMP ready

Svata


Re: ARM + CACHE_LINE_SIZE + DMA

2012-05-23 Thread Svatopluk Kraus
On Mon, May 21, 2012 at 6:20 PM, Ian Lepore wrote:
>> ...
>> Some more notes.
>>
>> SMP makes things worse, and the ARM11 MPCore is about SMP too. For example,
>> another thread could be opened about how to flush caches (exclusive
>> L1 caches) in the SMP case.
>>
>> I'm not sure how to correctly change memory attributes on a page which
>> is in use. Making a new temporary mapping with different attributes is
>> wrong and does not help at all. It's a question of how to do TLB and cache
>> flushes on two or more processors and be sure that everything is OK.
>> It could be slow, and maybe changing memory attributes on the fly is
>> not a good idea at all.
>>
>
> My suggestion of making a temporary writable mapping was the answer to
> how to correctly change memory attributes on a page which is in use, at
> least in the existing code, which is for a single processor.
>
> You don't need, and won't even use, the temporary mapping.  You would
> make it just because doing so invokes logic in arm/arm/pmap.c which will
> find all existing virtual mappings of the given physical pages, and
> disable caching in each of those existing mappings.  In effect, it makes
> all existing mappings of the physical pages DMA_COHERENT.  When you
> later free the temporary mapping, all other existing mappings are
> changed back to being cacheable (as long as no more than one of the
> mappings that remain is writable).
>
> I don't know that making a temporary mapping just for its side effect of
> changing other existing mappings is a good idea; it's just a quick and
> easy thing to do if you want to try changing all existing mappings to
> non-cacheable.  It could be that a better way would be to have the
> busdma_machdep code call directly into lower-level routines in pmap.c to
> change existing mappings without making a new temporary mapping in the
> kernel pmap.  The actual changes to the existing mappings are made by
> pmap_fix_cache(), but that routine isn't directly callable right now.
>

Thanks for the explanation. In fact, I know only a little about the current
ARM pmap implementation in the FreeBSD tree. I took the i386 pmap
implementation and modified it for the ARM11 MPCore.
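
For concreteness, my reading of the trick is roughly the following (an
untested sketch; the kmem_alloc_nofault()/pmap_enter() names and
signatures are assumptions taken from the current VM code, and whether
this managed-mapping path really reaches pmap_fix_cache() is exactly
what would need checking):

/* Untested sketch: add a second writable mapping of page "m" so the
 * VIVT pmap disables caching on all existing mappings of it; removing
 * the mapping should restore cacheability. */
static vm_offset_t
page_uncache(vm_page_t m)
{
	vm_offset_t kva;

	kva = kmem_alloc_nofault(kernel_map, PAGE_SIZE);
	pmap_enter(kernel_pmap, kva, VM_PROT_WRITE, m,
	    VM_PROT_READ | VM_PROT_WRITE, FALSE);
	return (kva);
}

static void
page_recache(vm_offset_t kva)
{

	pmap_remove(kernel_pmap, kva, kva + PAGE_SIZE);
	kmem_free(kernel_map, kva, PAGE_SIZE);
}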

> Also, as far as I know, all of this automatic disabling of cache for
> multiple writable mappings applies only to VIVT cache architectures.
> I'm not sure how the pmap code is going to change to support VIPT and
> PIPT caches, but it may no longer be true that making a second writable
> mapping of a page will lead to changing all existing mappings to
> non-cacheable.
>
> -- Ian


Svata