Re: Thunderbolt(/USB4) followup & I'm happy to donate some hardware Re: Feature request: Use the PCIe devices on Thunderbolt (aka PCIe hotplug?)

2020-11-20 Thread Joseph Mayer
Kind bump on this thread.

For my part, I'd like to attach nvme(4), and maybe Ethernet and amdgpu(4),
to the Thunderbolt-as-PCIe bridge.

Have a good wknd! Joseph

‐‐‐ Original Message ‐‐‐
On Monday, 26 October 2020 13:02, Joseph Mayer  
wrote:

> (If this one belongs on misc@ please say.)
>
> Hi tech@,
>
> If anyone is interested in implementing Thunderbolt support for
> OpenBSD, I'd like to donate some PCIe expansion Thunderbolt 3 enclosure
> and M.2 NVMe SSD Thunderbolt 3 enclosure as appropriate, if so please
> let me know.
>
> There is a BSDCan 2020 presentation by Scott Long on FreeBSD Thunderbolt
> support here: https://youtu.be/VbAJf2PBE-M?t=802
> (https://www.bsdcan.org/events/bsdcan_2020/schedule/session/27-thunderbolt-on-freebsd/).
> He mentions there that the sources are in
> "rc/sys/dev/thunderbolt", but they appear not to have been merged yet.
>
> Thunderbolt is in essence a hotplugged PCIe v3 x4 interface, useful when
> a machine, especially a laptop, lacks other ways to plug in an SSD, NIC or
> AMD GPU. I am not sure how clean the licensing situation is or how bloated
> it is. (Note that USB4 and Thunderbolt 4 are Thunderbolt 3 with the PCIe
> data bandwidth increased from 22 Gbps to 32 Gbps.)
>
> Apparently Thunderbolt is incorporated into the USB4 spec, and this way it
> will become more ubiquitous and come to more architectures; see
> https://www.phoronix.com/scan.php?page=news_item=Arm-Thunderbolt-Works and
> https://lwn.net/Articles/802961/ .
>
> Within Linux there are seemingly unending amounts of patches and more:
> https://github.com/torvalds/linux/tree/master/drivers/thunderbolt ,
> Intel devs being unhelpful at https://lore.kernel.org/patchwork/patch/983864/ ,
> https://lwn.net/Search/DoSearch?words=thunderbolt , and a search for "thunderbolt
> site:lkml.iu.edu/hypermail/linux/kernel/".
>
> Joseph
>
> ‐‐‐ Original Message ‐‐‐
> On Tuesday, 24 March 2020 01:45, John-Mark Gurney j...@funkthat.com wrote:
>
> > Joseph Mayer wrote this message on Sat, Mar 21, 2020 at 02:57 +:
> >
> > > Thunderbolt support would be awesome. In particular, it would allow the use
> > > of additional M.2 NVMe SSDs on a laptop at full performance.
> > > Thunderbolt support would also allow the use of an AMD GPU via a PCIe
> > > chassis, as well as enable the use of 10 Gbps Ethernet on laptops [1].
> > > While I would like to use Thunderbolt for these pragmatic reasons, Intel
> > > also apparently promises licensing and related generosity to computer
> > > makers, which certainly does not hurt. [2]
> > > FreeBSD has Thunderbolt support. It appears to me that they call it
> > > "PCIe Hot plug". [3]
> >
> > From my understanding, Thunderbolt is different from PCIe Hot Plug...
> > The PCIe spec itself has hot plug capabilities, and this is what is
> > used for laptops w/ ExpressCards and some servers...
> > Thunderbolt, from my understanding, is more complicated due to
> > display routing and other related features, and FreeBSD does NOT
> > yet have support for it.
> >
> > > It was implemented in 2015 by John-Mark Gurney j...@freebsd.org.
> >
> > John Baldwin, j...@freebsd.org, ended up implementing it differently
> > and not using the code I had written, so he is probably a better
> > person to ask about the current state of the code..
> > This was done via:
> > https://reviews.freebsd.org/D6136?id=15683
> > I have heard that there may be proper Thunderbolt support coming
> > to FreeBSD in the near future, but not sure exactly when...
> >
> > > Not sure if a TB device must be attached at boot and cannot be
> > > detached; anyhow, if that is the case it is still totally fine.
> >
> > The devctl command can detach a device. This allows ejecting
> > devices for removal without crashing the system, or detaching a
> > device and passing it through to a bhyve vm, etc. Not all drivers are
> > written to allow detaching...
> >
> > > NetBSD appears to have support as well, but I cannot find details.
> > > Security-wise, Thunderbolt without an IOMMU is associated with physical
> > > break-in attack vectors; anyhow, that is commonly fine. [4]
> >
> > From my understanding, all PCIe switches have a built-in IOMMU, so
> > this shouldn't be a major security issue. I have not done in-depth
> > analysis to verify this though, and this also depends upon the
> > PCIe switch not having bugs...
> > There is a relatively inexpensive USB3 to PCIe bridge that lets you
> > issue arbitrary PCIe commands, which could be used to verify the security
> > of implementations...
> >
> > > One Thunderbolt 3 controller provides 22gbps of PCIe data bandwi

Re: Thunderbolt(/USB4) followup & I'm happy to donate some hardware Re: Feature request: Use the PCIe devices on Thunderbolt (aka PCIe hotplug?)

2020-10-26 Thread Joseph Mayer
Hi Tom,

I share your understanding that Thunderbolt has a lower security
profile: TB devices get direct access to memory addresses while USB devices
do not. I presume the engineering idea is that the IOMMU (when enabled and
properly configured) should uphold memory safety.

Remember, though, that you already have this risk with internal devices:
a PCIe device's firmware might try to mess with you whether the device sits
in a PCIe slot inside your computer or in an external TB3 enclosure.

What I wanted to do with this thread was to request TB3 support and to say
that I'm happy to donate some enclosures.

Best regards,
Joseph

‐‐‐ Original Message ‐‐‐
On Monday, 26 October 2020 15:36, Tom Smyth  
wrote:

> Hi Joseph, all,
>
> There are some PCI-E attack surfaces that might need to be considered...
> Perhaps the availability of more devices with Thunderbolt connections makes
> PCI-E / DMA attacks more viable and hence more prevalent.
>
> https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-934.pdf
> I did come across Intel SGX when configuring the BIOS / firmware
> on my Lenovo laptop, which mentioned Thunderbolt / PCI-E attacks.
>
> But mitigating this risk could yield security benefits for people who
> use PCI-E passthrough / SR-IOV in virtualized environments.
>
> I hope this helps,
>
> Tom Smyth
>
> On Mon, 26 Oct 2020 at 12:06, Joseph Mayer joseph.ma...@protonmail.com wrote:
>
> > (If this one belongs on misc@ please say.)
> > Hi tech@,
> > If anyone is interested in implementing Thunderbolt support for
> > OpenBSD, I'd like to donate some PCIe expansion Thunderbolt 3 enclosure
> > and M.2 NVMe SSD Thunderbolt 3 enclosure as appropriate, if so please
> > let me know.
> > There is a BSDCan 2020 presentation by Scott Long on FreeBSD Thunderbolt
> > support here: https://youtu.be/VbAJf2PBE-M?t=802
> > (https://www.bsdcan.org/events/bsdcan_2020/schedule/session/27-thunderbolt-on-freebsd/).
> > He mentions there that the sources are in
> > "rc/sys/dev/thunderbolt", but they appear not to have been merged yet.
> > Thunderbolt is in essence a hotplugged PCIe v3 x4 interface, useful when
> > a machine, especially a laptop, lacks other ways to plug in an SSD, NIC or
> > AMD GPU. I am not sure how clean the licensing situation is or how bloated
> > it is. (Note that USB4 and Thunderbolt 4 are Thunderbolt 3 with the PCIe
> > data bandwidth increased from 22 Gbps to 32 Gbps.)
> > Apparently Thunderbolt is incorporated into the USB4 spec, and this way it
> > will become more ubiquitous and come to more architectures; see
> > https://www.phoronix.com/scan.php?page=news_item=Arm-Thunderbolt-Works and
> > https://lwn.net/Articles/802961/ .
> > Within Linux there are seemingly unending amounts of patches and more:
> > https://github.com/torvalds/linux/tree/master/drivers/thunderbolt ,
> > Intel devs being unhelpful at https://lore.kernel.org/patchwork/patch/983864/ ,
> > https://lwn.net/Search/DoSearch?words=thunderbolt , and a search for "thunderbolt
> > site:lkml.iu.edu/hypermail/linux/kernel/".
> > Joseph
> > ‐‐‐ Original Message ‐‐‐
> > On Tuesday, 24 March 2020 01:45, John-Mark Gurney j...@funkthat.com wrote:
> >
> > > Joseph Mayer wrote this message on Sat, Mar 21, 2020 at 02:57 +:
> > >
> > > > Thunderbolt support would be awesome. In particular, it would allow the use
> > > > of additional M.2 NVMe SSDs on a laptop at full performance.
> > > > Thunderbolt support would also allow the use of an AMD GPU via a PCIe
> > > > chassis, as well as enable the use of 10 Gbps Ethernet on laptops [1].
> > > > While I would like to use Thunderbolt for these pragmatic reasons, Intel
> > > > also apparently promises licensing and related generosity to computer
> > > > makers, which certainly does not hurt. [2]
> > > > FreeBSD has Thunderbolt support. It appears to me that they call it
> > > > "PCIe Hot plug". [3]
> > >
> > > From my understanding, Thunderbolt is different from PCIe Hot Plug...
> > > The PCIe spec itself has hot plug capabilities, and this is what is
> > > used for laptops w/ ExpressCards and some servers...
> > > Thunderbolt, from my understanding, is more complicated due to
> > > display routing and other related features, and FreeBSD does NOT
> > > yet have support for it.
> > >
> > > > It was implemented in 2015 by John-Mark Gurney j...@freebsd.org.
> > >
> > > John Baldwin, j...@freebsd.org, ended up implementing it differently
> > > and not using the code I had written, so he is probably a better
> > > person to ask about the current state of the code..
> > > This was done via:
> > > http

Thunderbolt(/USB4) followup & I'm happy to donate some hardware Re: Feature request: Use the PCIe devices on Thunderbolt (aka PCIe hotplug?)

2020-10-26 Thread Joseph Mayer
(If this one belongs on misc@ please say.)

Hi tech@,

If anyone is interested in implementing Thunderbolt support for
OpenBSD, I'd like to donate some PCIe expansion Thunderbolt 3 enclosure
and M.2 NVMe SSD Thunderbolt 3 enclosure as appropriate, if so please
let me know.

There is a BSDCan 2020 presentation by Scott Long on FreeBSD Thunderbolt
support here: https://youtu.be/VbAJf2PBE-M?t=802
(https://www.bsdcan.org/events/bsdcan_2020/schedule/session/27-thunderbolt-on-freebsd/).
He mentions there that the sources are in
"rc/sys/dev/thunderbolt", but they appear not to have been merged yet.

Thunderbolt is in essence a hotplugged PCIe v3 x4 interface, useful when
a machine, especially a laptop, lacks other ways to plug in an SSD, NIC or
AMD GPU. I am not sure how clean the licensing situation is or how bloated
it is. (Note that USB4 and Thunderbolt 4 are Thunderbolt 3 with the PCIe
data bandwidth increased from 22 Gbps to 32 Gbps.)

Apparently Thunderbolt is incorporated into the USB4 spec, and this way it
will become more ubiquitous and come to more architectures; see
https://www.phoronix.com/scan.php?page=news_item=Arm-Thunderbolt-Works and
https://lwn.net/Articles/802961/ .

Within Linux there are seemingly unending amounts of patches and more:
https://github.com/torvalds/linux/tree/master/drivers/thunderbolt ,
Intel devs being unhelpful at https://lore.kernel.org/patchwork/patch/983864/ ,
https://lwn.net/Search/DoSearch?words=thunderbolt , and a search for "thunderbolt
site:lkml.iu.edu/hypermail/linux/kernel/".

Joseph

‐‐‐ Original Message ‐‐‐
On Tuesday, 24 March 2020 01:45, John-Mark Gurney  wrote:

> Joseph Mayer wrote this message on Sat, Mar 21, 2020 at 02:57 +:
>
> > Thunderbolt support would be awesome. In particular, it would allow the use
> > of additional M.2 NVMe SSDs on a laptop at full performance.
> > Thunderbolt support would also allow the use of an AMD GPU via a PCIe
> > chassis, as well as enable the use of 10 Gbps Ethernet on laptops [1].
> > While I would like to use Thunderbolt for these pragmatic reasons, Intel
> > also apparently promises licensing and related generosity to computer
> > makers, which certainly does not hurt. [2]
> > FreeBSD has Thunderbolt support. It appears to me that they call it
> > "PCIe Hot plug". [3]
>
> From my understanding, Thunderbolt is different from PCIe Hot Plug...
>
> The PCIe spec itself has hot plug capabilities, and this is what is
> used for laptops w/ ExpressCards and some servers...
>
> Thunderbolt, from my understanding, is more complicated due to
> display routing and other related features, and FreeBSD does NOT
> yet have support for it.
>
> > It was implemented in 2015 by John-Mark Gurney j...@freebsd.org.
>
> John Baldwin, j...@freebsd.org, ended up implementing it differently
> and not using the code I had written, so he is probably a better
> person to ask about the current state of the code..
>
> This was done via:
> https://reviews.freebsd.org/D6136?id=15683
>
> I have heard that there may be proper Thunderbolt support coming
> to FreeBSD in the near future, but not sure exactly when...
>
> > Not sure if a TB device must be attached at boot and cannot be
> > detached; anyhow, if that is the case it is still totally fine.
>
> The devctl command can detach a device. This allows ejecting
> devices for removal without crashing the system, or detaching a
> device and passing it through to a bhyve vm, etc. Not all drivers are
> written to allow detaching...
>
> > NetBSD appears to have support as well, but I cannot find details.
> > Security-wise, Thunderbolt without an IOMMU is associated with physical
> > break-in attack vectors; anyhow, that is commonly fine. [4]
>
> From my understanding, all PCIe switches have a built-in IOMMU, so
> this shouldn't be a major security issue. I have not done in-depth
> analysis to verify this though, and this also depends upon the
> PCIe switch not having bugs...
>
> There is a relatively inexpensive USB3 to PCIe bridge that lets you
> issue arbitrary PCIe commands, which could be used to verify the security
> of implementations...
>
> > One Thunderbolt 3 controller provides 22 Gbps of PCIe data bandwidth,
> > shared across the one or two Thunderbolt ports it exports, which is fine. [5]
> > Many Thunderbolt devices allow daisy chaining. An "eGFX"-certified [6]
> > Thunderbolt PCIe chassis (such as [7]) has absolutely no performance
> > advantage over a normal Thunderbolt PCIe chassis (such as [8]),
> > including for eGPU (e.g. AMD GPU) use.
>
> Good luck!
>
> > [1] The lowest-cost and most common 10 Gbps Ethernet Thunderbolt chip
> > is the Aquantia AQC107S. There are also some adapters based on a normal
> > PCIe 10 Gbps chip and a separate Thunderbolt-to-PCIe controller.
> > [2] https://www.theregister

Feature request: Use the PCIe devices on Thunderbolt (aka PCIe hotplug?)

2020-03-20 Thread Joseph Mayer
(Maybe to be moved to misc@)

Dear OpenBSD tech@,

Thunderbolt support would be awesome. In particular, it would allow the use
of additional M.2 NVMe SSDs on a laptop at full performance.

Thunderbolt support would also allow the use of an AMD GPU via a PCIe
chassis, as well as enable the use of 10 Gbps Ethernet on laptops [1].


While I would like to use Thunderbolt for these pragmatic reasons, Intel
also apparently promises licensing and related generosity to computer
makers, which certainly does not hurt. [2]


FreeBSD has Thunderbolt support. It appears to me that they call it
"PCIe Hot plug". [3]

It was implemented in 2015 by John-Mark Gurney.

Not sure if a TB device must be attached at boot and cannot be
detached; anyhow, if that is the case it is still totally fine.

NetBSD appears to have support as well, but I cannot find details.


Security-wise, Thunderbolt without an IOMMU is associated with physical
break-in attack vectors; anyhow, that is commonly fine. [4]

One Thunderbolt 3 controller provides 22 Gbps of PCIe data bandwidth,
shared across the one or two Thunderbolt ports it exports, which is fine. [5]
Many Thunderbolt devices allow daisy chaining. An "eGFX"-certified [6]
Thunderbolt PCIe chassis (such as [7]) has absolutely no performance
advantage over a normal Thunderbolt PCIe chassis (such as [8]),
including for eGPU (e.g. AMD GPU) use.

Joseph

[1] The lowest-cost and most common 10 Gbps Ethernet Thunderbolt chip
is the Aquantia AQC107S. There are also some adapters based on a normal
PCIe 10 Gbps chip and a separate Thunderbolt-to-PCIe controller.

[2] https://www.theregister.co.uk/2017/05/24/intel_thunderbolt_3forall/

[3] 
https://www.freebsd.org/news/status/report-2015-01-2015-03.html#Adding-PCIe-Hot-plug-Support
https://www.freebsd.org/news/status/report-2015-07-2015-09.html#Adding-PCIe-Hot-plug-Support

[4] 
https://www.osnews.com/story/129501/thunderbolt-enables-severe-security-threats/

[5] And not 40 Gbps, as common marketing makes it sound.

[6] https://thunderbolttechnology.net/egfx
https://thunderbolttechnology.net/blog/the-difference-between-egfx-and-egpu
= marketing mumbo jumbo.

[7] https://www.asus.com/Graphics-Cards-Accessories/XG-STATION-PRO/

[8] https://www.akitio.com/expansion/node-pro



Re: OpenBSD on IBM Power.

2019-11-11 Thread Joseph Mayer
Hi Ben,

To the best of my awareness, POWER9 support is underway. No idea about
dates; maybe usable in 4-12 months?

Joseph

On Wednesday, 6 November 2019 12:49, Ben Crowhurst 
 wrote:

> I've seen a few threads discussing OpenBSD on IBM Power Systems.
> http://openbsd-archive.7691.n7.nabble.com/What-about-the-IBM-POWER7-and-POWER8-platforms-did-anyone-ever-think-about-porting-these-to-OpenBSD-td290583.html
>
> Does anyone have an update/status report on the progress?
>
> Regards,
> Ben Crowhurst | Corvusoft
> www.corvusoft.co.uk




Re: Thermal zone support for arm64

2019-06-30 Thread Joseph Mayer
On Saturday, 29 June 2019 18:08, Mark Kettenis  wrote:
> Many of the cheap arm64 (and armv7) boards will overheat if you run
> the CPU cores at full throttle for a while. Adding a heatsink may
> help a little bit, but not enough. Some boards have a microcontroller
> that monitors the temperature and throttles the CPUs if necessary.
> Other boards don't and will eventually hit a critical temperature
> where it will either do an emergency powerdown or will start to become
> unreliable.

Hi Mark,

Great.

With this diff, SoC performance and temperature are governed by the logic
that the highest priority is to stay below 70C, and the second priority,
subject to the first being satisfied, is to operate at the fullest possible
performance, right?

> the temperature gets too high. There are device tree bindings for
> so-called thermal zones that link together temperature sensors and
> cooling devices and define trip points that define the temperatures at
> which we have to start cooling. Most boards use passive cooling

Is the default trip-point configuration stored in the hardware?

> +  * If the current tenperature is above the trip temperature:
> +  * If the current temperature is below the trip tenmperature:
>+   *  - decreate the cooling level if the temperature is falling

Small typos: te*ure -> temperature, and decreate -> decrease.
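
If I read the diff right, the intended behaviour is roughly the sketch
below (my reading expressed in C with hypothetical helper names, not the
actual code; the "raise cooling while rising" half is my assumption, since
that bullet is cut off in the quote):

#include <stdint.h>

struct zone {
    int32_t trip_temp;      /* trip point from the thermal-zone binding */
    int32_t last_temp;      /* temperature seen at the previous poll */
    int     cooling_level;  /* 0 = no cooling, cooling_max = maximum */
    int     cooling_max;
};

int32_t get_temperature(struct zone *);        /* hypothetical sensor hook */
void    set_cooling_level(struct zone *, int); /* hypothetical cooling hook */

void
thermal_poll(struct zone *z)
{
    int32_t temp = get_temperature(z);

    if (temp >= z->trip_temp) {
        /* Above the trip point: raise cooling while the temperature rises. */
        if (temp > z->last_temp && z->cooling_level < z->cooling_max)
            set_cooling_level(z, ++z->cooling_level);
    } else {
        /* Below the trip point: lower cooling while the temperature falls. */
        if (temp < z->last_temp && z->cooling_level > 0)
            set_cooling_level(z, --z->cooling_level);
    }
    z->last_temp = temp;
}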

Thanks!
Joseph



Man page/doc/slides/books/specs re OpenBSD memory model or memory order?

2019-01-31 Thread Joseph Mayer
Hi,

This is not to suggest that the following is necessarily relevant or needed:

Is there any man page, or any documents, slides, books or standards
specifications, regarding OpenBSD's memory model or memory-ordering
considerations?

A marc.info misc@/tech@ or Google search for the query gives no results.

C and C++ got memory models as of C11 and C++11, ref.
https://en.wikipedia.org/wiki/Memory_model_(programming),
http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1548.pdf section 7.17.
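
For concreteness, the kind of ordering guarantee I mean, in plain C11
(nothing OpenBSD-specific; just a standard acquire/release pairing):

#include <stdatomic.h>
#include <stdbool.h>

static int data;
static atomic_bool ready;

/* The release store on "ready" orders the plain write to "data" before
 * it; a reader that observes ready == true through an acquire load is
 * then guaranteed to also observe data == 42. */
void
producer(void)
{
    data = 42;
    atomic_store_explicit(&ready, true, memory_order_release);
}

int
consumer(void)
{
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;               /* spin until the producer publishes */
    return data;        /* reads 42 */
}

The question is whether OpenBSD documents its own equivalents of these
rules for kernel and userland code.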

For another system there is a discussion at
https://www.kernel.org/doc/Documentation/memory-barriers.txt .

Thanks,
Joseph

(Feel free to move to misc@ )



AMD64 buffer cache 4GB cap anything new, multiqueueing plans? ("64bit DMA on amd64" cont)

2018-11-06 Thread Joseph Mayer
Hi,

Previously there was a years-long thread about a 4GB (32-bit) buffer
cache constraint on AMD64; see
https://marc.info/?t=14682443664=1=2 .

What I gather is,

 * The problem is that on AMD64, DMA is limited to 32-bit
   addressing. I guess this is because, unlike AMD64 CPUs, which all
   support 64-bit DMA, popular PCI devices and supporting hardware out
   there, like bridges, have DMA functionality limited to 32-bit
   addressing.

   (Is this a property of lower-quality hardware, or of very old PCI
   devices, or is it systemic to the whole AMD64 ecosystem today?

   Could a system be configured to use 64-bit DMA on AMD64 and be
   expected to work, presuming recent or higher-quality / well-selected
   hardware?)

 * The OS asks the disk hardware to load disk data into given memory
   locations via DMA, and then userland fread() and mmap() are fed with
   that data - no further data moving or mapping is needed. These are
   the dynamics leading to the 4GB cap.

   And the 4GB cap is quite constraining for any computer with a lot of
   RAM and a lot of disk reading, as it means many reads that wouldn't
   need to hit the disk (since the data could be cached using all this
   free memory) aren't cached and go to disk anyway, which takes a lot
   of time, yes?

 * This was recognized a long time ago, and Bob wrote a solution in
   the form of a "buffer cache flipper" that would push buffer cache
   data out of the 32-bit area (to "high memory", i.e. above 32 bits),
   hence lifting the limit, via a "(generic) backpressure" mechanism
   that as a bonus used the DMA engine to do the memory moving. I guess
   this means the buffer cache would be pretty much zero-cost to the
   CPU - sounds incredibly neat!

   And then it didn't really work: it malfunctioned and irritated people
   (it was "busted" - for reasons unknown to me; why was that, actually?)
   and Theo wrote that it would be fixed in the future.


Has it been fixed since?


Also, once fixed, fread() and mmap() reads of data that is in the
buffer cache will be incredibly fast, right? In optimal conditions the
mmapped addresses will already be mapped to the buffer cache data, and
hence mmapped buffer cache reads will have the speed of any other
memory access, right?


(The ML thread also mentioned an undeadly.org post discussing this
topic; however, neither searching nor browsing turns it up. The closest
I find is five words here:
https://undeadly.org/cgi?action=article;sid=20170815171854 - do you
have a URL?)


Last, OpenBSD's biggest limitation as an OS seems to be that the
disk/file subsystem is sequential. A modern SSD can read at 2.8 GB/s,
but that requires parallelism; without multiqueueing, and with small
reads of e.g. 4 KB or less, speeds stay around 70-120 MB/s, roughly
3.5% of the hardware's potential performance. This would be a really
worthy goal to donate towards, for instance, in particular as OpenBSD
leads the way in many other areas.
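
As a back-of-the-envelope check of those numbers (the per-request
latencies below are my assumption, not measurements): at queue depth 1,
throughput is just the block size divided by the per-request latency.

#include <stdio.h>

int
main(void)
{
    double blk = 4096.0;                    /* bytes per request */
    double lat_us[] = { 35.0, 45.0, 60.0 }; /* assumed latency per request */
    int i;

    for (i = 0; i < 3; i++) {
        double mbps = blk / (lat_us[i] * 1e-6) / 1e6;
        printf("%2.0f us/request -> ~%3.0f MB/s at queue depth 1\n",
            lat_us[i], mbps);
    }
    return 0;
}

That lands in roughly the 70-120 MB/s range above; the 2.8 GB/s figure
is only reachable with many requests in flight.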

Are there any thoughts about implementing this in the future?

Thanks,
Joseph



On ARM64, does OpenBSD have a 32/4GB cap on the buffer cache like on AMD64?

2018-11-06 Thread Joseph Mayer
Hi,

On ARM64, does OpenBSD have a 4GB cap on the buffer cache like on
AMD64?

Thanks,
Joseph



Re: bypass support for iommu on sparc64

2018-10-19 Thread Joseph Mayer
On Saturday, October 20, 2018 10:14 AM, David Gwynne  wrote:

> > On 20 Oct 2018, at 11:56 am, Joseph Mayer joseph.ma...@protonmail.com wrote:
> > ‐‐‐ Original Message ‐‐‐
> > On Friday, October 19, 2018 5:15 PM, Mark Kettenis mark.kette...@xs4all.nl 
> > wrote:
> >
> > > > Date: Fri, 19 Oct 2018 10:22:30 +1000
> > > > From: David Gwynne da...@gwynne.id.au
> > > > On Wed, May 10, 2017 at 10:09:59PM +1000, David Gwynne wrote:
> > > >
> > > > > On Mon, May 08, 2017 at 11:03:58AM +1000, David Gwynne wrote:
> > > > >
> > > > > > on modern sparc64s (think fire or sparc enterprise Mx000 boxes),
> > > > > > setting up and tearing down the translation table entries (TTEs)
> > > > > > is very expensive. so expensive that the cost of doing it for disk
> > > > > > io has a noticable impact on compile times.
> > > > > > now that there's a BUS_DMA_64BIT flag, we can use that to decide
> > > > > > to bypass the iommu for devices that set that flag, therefore
> > > > > > avoiding the cost of handling the TTEs.
> >
> > Question for the unintroduced: what's the scope here? TTE is Sparc's
> > page table, and reconfiguring the entries at (process) context switch
> > is expensive; this suggestion removes the need for TTEs for hardware
> > device access, but those don't change at context switch?
>
> We're talking about an IOMMU here, not a traditional MMU providing virtual 
> addresses for programs. An IOMMU sits between physical memory and the devices 
> in a machine. It allows DMA addresses to mapped to different parts of 
> physical memory. Mapping physical memory to a DMA virtual address (or dva) is 
> how a device that only understands 32bit addresses can work in a 64bit 
> machine. Memory at high addresses gets mapped to a low dva.
>
> This is done at runtime on OpenBSD when DMA mappings are loaded or unloaded 
> by populating Translation Table Entries (TTEs). A TTE is effectively a table 
> or array mapping DVA pages to physical addresses. Generally device drivers 
> load and unload dma memory for every I/O or packet or so on.
>
> IOMMUs in sparc64s have some more features than this. Because they really are 
> between memory and the devices they can act as a gatekeeper for all memory 
> accesses. They also have a toggle that can allow a device to have direct or 
> passthru access to physical memory. If passthru is enabled, there's a special 
> address range that effectively maps all physical memory into a DVA range. 
> Devices can be pointed at it without having to manage TTEs. When passthru is 
> disabled, all accesses must go through TTEs.
>
> Currently OpenBSD disables passthru. The benefit is devices can't blindly 
> access sensitive memory unless it is explicitly shared. Note that this is how 
> it is on most architectures anyway. However, the consequence of managing the 
> TTEs is that it is expensive, and extremely so in some cases.
>
> dlg

Last iteration from me on this one.

Why is this not a problem on some other architectures?

I'd have thought that DMA and hardware being assigned transitory
addresses (from the memory allocator or another OS subsystem or driver)
is mostly a lower-level phenomenon, and that memcpy normally applies at
higher levels - isn't that so? For networking, for instance, mbufs take
over soon above the driver level. Does OpenBSD keep a pool of
to-be-mbufs and ask network drivers to write received Ethernet frames
directly into them, and similarly transmit Ethernet frames directly
from mbufs?
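
For reference, my mental model of the per-transfer cost being discussed
is the usual bus_dma(9) pattern, roughly as sketched below (illustrative
only, error handling trimmed, function name made up; in a real driver the
map is created once at attach time, while the load/unload - where the
IOMMU TTEs are written and torn down - happens for every transfer):

#include <sys/param.h>
#include <sys/systm.h>
#include <machine/bus.h>

int
example_one_read(bus_dma_tag_t dmat, void *buf, bus_size_t len)
{
    bus_dmamap_t map;
    int error;

    /* BUS_DMA_64BIT is the flag the proposed bypass keys off. */
    error = bus_dmamap_create(dmat, len, 1, len, 0,
        BUS_DMA_NOWAIT | BUS_DMA_64BIT, &map);
    if (error)
        return (error);

    /* Per I/O: loading the map is where the TTEs get set up... */
    error = bus_dmamap_load(dmat, map, buf, len, NULL, BUS_DMA_NOWAIT);
    if (error) {
        bus_dmamap_destroy(dmat, map);
        return (error);
    }
    bus_dmamap_sync(dmat, map, 0, len, BUS_DMASYNC_PREREAD);

    /* ...hand map->dm_segs[0].ds_addr to the device, wait for the
     * completion interrupt, then: */
    bus_dmamap_sync(dmat, map, 0, len, BUS_DMASYNC_POSTREAD);
    bus_dmamap_unload(dmat, map);   /* ...and here they are torn down. */

    bus_dmamap_destroy(dmat, map);
    return (0);
}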

What potentially or clearly sensitive memory would passthru expose:
driver-owned structures only, or all memory?



Re: bypass support for iommu on sparc64

2018-10-19 Thread Joseph Mayer
‐‐‐ Original Message ‐‐‐
On Friday, October 19, 2018 5:15 PM, Mark Kettenis  
wrote:

> > Date: Fri, 19 Oct 2018 10:22:30 +1000
> > From: David Gwynne da...@gwynne.id.au
> > On Wed, May 10, 2017 at 10:09:59PM +1000, David Gwynne wrote:
> >
> > > On Mon, May 08, 2017 at 11:03:58AM +1000, David Gwynne wrote:
> > >
> > > > on modern sparc64s (think fire or sparc enterprise Mx000 boxes),
> > > > setting up and tearing down the translation table entries (TTEs)
> > > > is very expensive. so expensive that the cost of doing it for disk
> > > > io has a noticable impact on compile times.
> > > > now that there's a BUS_DMA_64BIT flag, we can use that to decide
> > > > to bypass the iommu for devices that set that flag, therefore
> > > > avoiding the cost of handling the TTEs.

Question for the unintroduced: what's the scope here? TTE is Sparc's
page table, and reconfiguring the entries at (process) context switch
is expensive; this suggestion removes the need for TTEs for hardware
device access, but those don't change at context switch?



Re: Linux DRM

2018-09-03 Thread Joseph Mayer
Thomas,

On September 4, 2018 10:55 AM, Thomas de Grivel  wrote:

> Le lun. 3 sept. 2018 à 23:33, Philip Guenther guent...@gmail.com a écrit :
>
> > On Mon, Sep 3, 2018 at 11:46 AM Thomas de Grivel billi...@gmail.com wrote:
> >
> > > I was browsing the DRM code ported from Linux and it's a terrible
> > > mess, is there any ongoing project to clean up that codebase or
> > > rewrite it entirely ?

For those of us who have not reviewed the code, can you quantify and
illustrate approximately how bad it is?

> > No. OpenBSD doesn't have the resources to reimplement the DRM subsystem or 
> > maintain a non-trivial fork of the Linux version. We don't want to get 
> > stuck with a code base that doesn't support close-to-current hardware, so 
> > the porting work has concentrated on minimizing the changes necessary to 
> > make the upstream code base work in OpenBSD.
> > It's clear that the hardware support in the upstream has large 
> > contributions from developers with inside access at the hardware vendors; 
> > without such access it's doubtful that all the hardware bugs^Wlimitations 
> > can be worked around with non-infinite resource.
> > Improvements in the DRM code itself should be done in the upstream, not 
> > just to minimize OpenBSD costs in this area, but so that all OSes that draw 
> > from that base can benefit.
>
> You probably do not care, and actually neither do I, but the current
> state of graphics hardware support code is crazy in my opinion.
> Graphics cards have to be the single most successful hardware in the
> history of computer hardware, or even hardware in general, and yet
> their drivers are a complete mess.

I agree this is unacceptable.

> It makes no sense to me. It all
> appears like a hideous obscurity-based false sense of security, where
> you really cannot ensure the minimality of any driver or its
> features.

Common.

I guess any OS would benefit from a clean, open-source, audited DRM
stack. Would that make sense as a separate code project?

What is the quality of the exported interfaces? Are they satisfactory
for a higher-quality implementation to build on?



Re: acpi(4): GenericSerialBus OperationRegion support

2018-05-17 Thread Joseph Mayer
Hi Mike,

About the GPD Pocket laptop specifically, to the best of my awareness
its battery status reporting has been reported to be quirky, and it
might work with some BIOS versions and not with others.

The GPD Pocket's BIOS comes in two flavors: "unlocked" with more options
and "locked" with fewer.

To the best of my awareness, the best "unlocked" one is dated 2017-06-28 [1],
and the best "locked" one is dated 2017-08-07 [2].

I think GPD's skill is hardware rather than software.

Below are some related references [3]. Take care with BIOS options, as
they could brick the laptop.

Joseph

[1]
"P7-20170628-Ubuntu-BIOS.zip"

http://www.gpd.hk/news.asp?id=1519=002002

http://forum.gpd.hk/t135-as-of-21-august-2017-what-is-the-latest-unlocked-firmware-for-the-gpd-pocket

[2]
"P7 BIOS-20170807.zip"

[3]
http://forum.gpd.hk/t167-how-enable-battery-status-with-the-unlocked-bios-2017-06-28

http://forum.gpd.hk/t175-gpd-pocket-win-10-pro-and-charging-issues

https://www.reddit.com/r/GPDPocket/comments/6s7zck/my_unlocked_bios_working_settings_dptf_limit/

https://boards.dingoonity.org/gpd-windows-devices/gpd-win-either-not-charging-properly-or-battery-display-broken/

https://github.com/stockmind/gpd-pocket-ubuntu-respin , "It also let you boot 
on zero battery charge (previous versions require at least 15-20% of battery 
charge to boot)."

"Other than that your BIOS is known to have something different related to 
enumerating devices and management of battery that gave some trouble on past 
kernels" https://github.com/stockmind/gpd-pocket-ubuntu-respin/issues/63

2018-05-17 15:09 GMT+08:00 Mike Larkin :
> Just to follow up, this did not fix the battery issue (still "absent" and
> 0%) on the GPD.