Re: [Mesa-dev] Allocator Nouveau driver, Mesa EXT_external_objects, and DRM metadata import interfaces
(Adding dri-devel back, and trying to respond to some comments from the different forks)

James Jones wrote:
> Your worst case analysis above isn't far off from our HW, give or take
> some bits and axes here and there. We've started an internal discussion
> about how to lay out all the bits we need. It's hard to even enumerate
> them all without having a complete understanding of what capability sets
> are going to include, a fully-optimized implementation of the mechanism
> on our HW, and lots of test scenarios though.

(thanks James for most of the info below)

To elaborate a bit: if we want to share an allocation across GPUs for 3D
rendering, we would need 12 bits to express our swizzling/tiling memory
layouts for Fermi+. In addition to that, Maxwell uses 3 more bits for this,
and we need an extra bit to identify pre-Fermi representations. We also need
one bit to differentiate between Tegra and desktop, and another to indicate
whether the layout is otherwise linear. Then there are things like whether
compression is used (one more bit), and we can probably get by with 3 bits
for the type of compression if we are creative. However, it would be far
easier to just track arch + page kind, which would be around 32 bits on its
own. Whether Z-culling and/or zero-bandwidth clears are used may be another
3 bits. If device-local properties are included, we might need a couple more
bits for caching. We may also need to express locality information, which
may take at least another 2 or 3 bits.

If we want to share array textures too, you also need to pass the array
pitch. Is that supposed to be encoded in a modifier as well? That's 64 bits
on its own.

So yes, as James mentioned, with some effort we could technically fit our
current allocation parameters in a modifier, but I'm still not convinced
this is as future-proof as it could be as our hardware grows in
capabilities.
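To make the bit-budget arithmetic above concrete, here is a minimal sketch of packing a few of those fields into a 64-bit modifier value. The field positions and widths are entirely hypothetical, chosen only to match the counts discussed above; this is not a real NVIDIA or DRM layout:

```c
#include <stdint.h>

/* Pack `val` into `bits` bits starting at bit `shift` (hypothetical layout). */
#define MOD_FIELD(val, shift, bits) \
    (((uint64_t)(val) & ((1ULL << (bits)) - 1)) << (shift))

/* Hypothetical packing of some of the layout parameters discussed above. */
static uint64_t pack_modifier(unsigned tiling,     /* 12 bits: Fermi+ swizzling/tiling */
                              unsigned maxwell,    /* 3 extra Maxwell bits */
                              unsigned pre_fermi,  /* 1 bit: pre-Fermi representation */
                              unsigned tegra,      /* 1 bit: Tegra vs desktop */
                              unsigned linear,     /* 1 bit: otherwise linear */
                              unsigned compressed, /* 1 bit: compression used */
                              unsigned comp_type)  /* 3 bits: compression type */
{
    return MOD_FIELD(tiling,     0,  12) |
           MOD_FIELD(maxwell,    12, 3)  |
           MOD_FIELD(pre_fermi,  15, 1)  |
           MOD_FIELD(tegra,      16, 1)  |
           MOD_FIELD(linear,     17, 1)  |
           MOD_FIELD(compressed, 18, 1)  |
           MOD_FIELD(comp_type,  19, 3);
}

/* Recover the tiling field from a packed modifier. */
static unsigned modifier_tiling(uint64_t mod)
{
    return (unsigned)(mod & 0xfff);
}
```

Even this toy layout already consumes 22 bits; adding arch + page kind (~32 bits) or an array pitch (64 bits by itself) clearly overflows a single 64-bit value, which is the concern being raised.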
Daniel Stone wrote:
> So I reflexively get a bit itchy when I see the kernel being used to
> transit magic blobs of data which are supplied by userspace, and only
> interpreted by different userspace. Having tiling formats hidden away
> means that we've had real-world bugs in AMD hardware, where we end up
> displaying garbage because we cannot generically reason about the buffer
> attributes.

I'm a bit confused. Can't modifiers be specified by vendors and only
interpreted by drivers? My understanding was that modifiers could actually
be treated as opaque 64-bit data, in which case they would qualify as
"magic blobs of data". Otherwise, it seems this wouldn't be scalable. What
am I missing?

Daniel Vetter wrote:
> I think in the interim figuring out how to expose kms capabilities
> better (and necessarily standardizing at least some of them which
> matter at the compositor level, like size limits of framebuffers)
> feels like the place to push the ecosystem forward. In some way
> Miguel's proposal looks a bit backwards, since it adds the pitch
> capabilities to addfb, but at addfb time you've allocated everything
> already, so way too late to fix things up. With modifiers we've added
> a very simple per-plane property to list which modifiers can be
> combined with which pixel formats. Tiny start, but obviously very far
> from all that we'll need.

I may be misunderstanding your statement, but one of the allocator's main
features is the negotiation of nearly optimal allocation parameters for a
set of uses across different devices/engines, via the capability merge
operation. A client should query what every device/engine is capable of
for the given uses, find the optimal set of capabilities, and use it to
allocate a buffer. By the time these parameters are given to KMS, they are
expected to be good. If they aren't, the client didn't do things right.
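To illustrate what that merge operation looks like, here is a minimal sketch under assumed, simplified types (the real liballocator capability structures and merge semantics are considerably richer): keep only the capabilities supported by both devices, preserving the first list's preference order.

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for an allocator capability descriptor; the real
 * liballocator types carry vendor payloads and constraint data. */
struct capability {
    uint32_t vendor; /* vendor that defines this capability */
    uint32_t id;     /* vendor-specific capability ID */
};

/* Merge two capability lists by keeping only entries present in both,
 * preserving the (preference) order of list `a`. Returns the number of
 * entries written to `out`, which must hold at least `na` entries. */
static size_t merge_caps(const struct capability *a, size_t na,
                         const struct capability *b, size_t nb,
                         struct capability *out)
{
    size_t n = 0;
    for (size_t i = 0; i < na; i++)
        for (size_t j = 0; j < nb; j++)
            if (a[i].vendor == b[j].vendor && a[i].id == b[j].id) {
                out[n++] = a[i];
                break;
            }
    return n;
}
```

The client would run this merge across every device/engine involved in the intended uses and allocate with the surviving set; only then do the resulting parameters ever reach KMS.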
Rob Clark wrote:
> It does seem like, if possible, starting out with modifiers for now at
> the kernel interface would make life easier, vs trying to reinvent
> both kernel and userspace APIs at the same time. Userspace APIs are
> easier to change or throw away. Presumably by the time we get to the
> point of changing kernel uabi, we are already using, and pretty happy
> with, serialized liballoc data over the wire in userspace so it is
> only a matter of changing the kernel interface.

I guess we can indeed start with modifiers for now, if that's what it
takes to get the allocator mechanisms rolling. However, it seems to me
that we won't always be able to encode with modifiers the same kind of
information that capability sets can carry. For instance, if we end up
encoding usage transition information in capability sets, how would that
translate to modifiers?

I assume display doesn't really care about much of the data capability
sets may encode, but is it correct to think of modifiers as things only
display needs? If we are to treat modifiers as first-class citizens, I
would expect to use them beyond that.

Kristian Kristensen wrote:
> I agree and let me
Re: [Mesa-dev] Allocator Nouveau driver, Mesa EXT_external_objects, and DRM metadata import interfaces
Inline.

On Wed, 20 Dec 2017 11:54:10 -0800 Kristian Høgsberg <hoegsb...@gmail.com> wrote:
> On Wed, Dec 20, 2017 at 11:51 AM, Daniel Vetter <dan...@ffwll.ch> wrote:
> > Since this also involves the kernel let's add dri-devel ...

Yeah, I forgot. Thanks Daniel!

> > On Wed, Dec 20, 2017 at 5:51 PM, Miguel Angel Vico <mvicom...@nvidia.com> wrote:
> >> Hi all,
> >>
> >> As many of you already know, I've been working with James Jones on the
> >> Generic Device Allocator project lately. He started a discussion thread
> >> some weeks ago seeking feedback on the current prototype of the library
> >> and advice on how to move all this forward, from a prototype stage to
> >> production. For further reference, see:
> >>
> >> https://lists.freedesktop.org/archives/mesa-dev/2017-November/177632.html
> >>
> >> From the thread above, we came up with very interesting high level
> >> design ideas for one of the currently missing parts in the library:
> >> usage transitions. That's something I'll personally work on during the
> >> following weeks.
> >>
> >> In the meantime, I've been working on putting together an open source
> >> implementation of the allocator mechanisms using the Nouveau driver for
> >> all to be able to play with.
> >>
> >> Below I'm seeking feedback on a bunch of changes I had to make to
> >> different components of the graphics stack:
> >>
> >> ** Allocator **
> >>
> >> An allocator driver implementation on top of Nouveau. The current
> >> implementation only handles pitch linear layouts, but that's enough
> >> to have the kmscube port working using the allocator and Nouveau
> >> drivers.
> >>
> >> You can pull these changes from:
> >>
> >> https://github.com/mvicomoya/allocator/tree/wip/mvicomoya/nouveau-driver
> >>
> >> ** Mesa **
> >>
> >> James's kmscube port to use the allocator relies on the
> >> EXT_external_objects extension to import allocator allocations to
> >> OpenGL as a texture object. However, the Nouveau implementation of
> >> these mechanisms was missing in Mesa, so I went ahead and added it.
> >>
> >> You can pull these changes from:
> >>
> >> https://github.com/mvicomoya/mesa/tree/wip/mvicomoya/EXT_external_objects-nouveau
> >>
> >> Also, James's kmscube port uses the NVX_unix_allocator_import
> >> extension to attach allocator metadata to texture objects so the
> >> driver knows how to deal with the imported memory.
> >>
> >> Note that there isn't a formal spec for this extension yet. For now,
> >> it just serves as an experimental mechanism to import allocator
> >> memory in OpenGL, and attach metadata to texture objects.
> >>
> >> You can pull these changes (written on top of the above) from:
> >>
> >> https://github.com/mvicomoya/mesa/tree/wip/mvicomoya/NVX_unix_allocator_import
> >>
> >> ** kmscube **
> >>
> >> Mostly minor fixes and improvements on top of James's port to use the
> >> allocator. The main thing is that the allocator initialization path
> >> will use EGL_MESA_platform_surfaceless if the EGLDevice platform isn't
> >> supported by the underlying EGL implementation.
> >>
> >> You can pull these changes from:
> >>
> >> https://github.com/mvicomoya/kmscube/tree/wip/mvicomoya/allocator-nouveau
> >>
> >> With all the above you should be able to get kmscube working using the
> >> allocator on top of the Nouveau driver.
> >>
> >> Another of the missing pieces before we can move this to production is
> >> importing allocations to DRM FB objects. This is probably one of the
> >> most sensitive parts of the project, as it requires modification or
> >> addition of kernel driver interfaces.
> >>
> >> At XDC 2017, James had hallway conversations with several people about
> >> this, all having different opinions. I'd like to take this opportunity
> >> to also start a discussion about the best way to create a path to get
> >> allocator allocations added as DRM FB objects.
> >>
> >> These are the few options we've considered to start with:
> >>
> >> A) Have vendor-private ioctls to set properties on GEM
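For reference, the modifier-based path that exists today describes layout at AddFB2 time through per-plane format modifiers. Below is a sketch using a simplified local mirror of `struct drm_mode_fb_cmd2` (trimmed to the relevant fields; the real definition lives in `include/uapi/drm/drm_mode.h`), just to show where any allocator metadata would currently have to fit:

```c
#include <stdint.h>
#include <string.h>

/* Simplified local mirror of struct drm_mode_fb_cmd2, trimmed to the
 * fields relevant here so the sketch is self-contained. */
struct fb_cmd2 {
    uint32_t width, height;
    uint32_t pixel_format;  /* fourcc code, e.g. DRM_FORMAT_XRGB8888 */
    uint32_t handles[4];    /* per-plane GEM handles */
    uint32_t pitches[4];    /* per-plane pitch in bytes */
    uint32_t offsets[4];    /* per-plane offset in bytes */
    uint64_t modifier[4];   /* per-plane format modifier */
};

#define FOURCC(a, b, c, d) ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
                            ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

/* Describe a single-plane, pitch-linear XRGB8888 framebuffer, which is
 * all the current Nouveau allocator driver supports anyway. */
static void fill_linear_fb(struct fb_cmd2 *cmd, uint32_t gem_handle,
                           uint32_t width, uint32_t height)
{
    memset(cmd, 0, sizeof(*cmd));
    cmd->width = width;
    cmd->height = height;
    cmd->pixel_format = FOURCC('X', 'R', '2', '4'); /* DRM_FORMAT_XRGB8888 */
    cmd->handles[0] = gem_handle;
    cmd->pitches[0] = width * 4;                    /* 4 bytes per pixel */
    cmd->modifier[0] = 0;                           /* DRM_FORMAT_MOD_LINEAR */
}
```

Whatever capability-set data cannot be squeezed into the `modifier[4]` fields is exactly what the options being discussed here (vendor-private ioctls, etc.) would need to carry instead.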
Re: XDC 2017 feedback
Hi,

In general, I think the organization was great. I agree that having everything happen in a single room was a good thing.

Here's some of my personal feedback:

* I didn't like the table layout at all. I found it extremely uncomfortable to have to look sideways in order to see the screens and the presenter.

* There were very few power strips, and they weren't well distributed along the tables.

Also, this is what I've been able to gather from some of my colleagues here at NVIDIA:

* Some people watching the conference remotely complained about the stream quality, and the recordings didn't include the presenter. In one of the hallway conversations, Martin mentioned that at XDC 2016 they had a team of camera experts doing the job, and that they will try to improve that part at future XDCs.

* The microphone/audio situation wasn't great either. I don't know how practical it is for something the size of XDC, but at Khronos meetings they usually set up microphones for the audience too, with tap-on/tap-off switches, and a ratio of 1:2 or 1:3 for microphones to people. That makes Q&A a lot easier. Alternatively, having a question queue rather than running mics around the room can speed things up, but makes cross-talk harder.

* The table/chair layout was really cumbersome. Beyond just the orientation, some people had a lot of trouble getting in and out to use the restroom, grab snacks, etc.

On a positive note:

* More time for hallway conversations was in general a good thing. Some of us got a ton of useful feedback talking to others.

* The food was great, and having food on-site gave us even more time for hallway-tracking.

* Surprisingly, parking was not an issue.

* Signage was good, and the security guards were polite and helpful. It was easy to find the room and get our badges.

* The wifi worked great in general, which is a rarity at conferences. It was pretty spotty at XDC 2016.

Thank you.
On Tue, 26 Sep 2017 11:49:10 -0700 Manasi Navare wrote:
> Hi,
>
> XDC was a really great conference, and I loved the fact that it was in
> just one room, which kept all the hallway conversations in that room,
> resulting in more networking.
> But I agree with Andres that for the videos, it would be great to split
> the huge YouTube video stream per presentation and have separate links
> for each talk somewhere on the XDC page.
>
> Regards
> Manasi
>
> On Tue, Sep 26, 2017 at 01:25:15PM -0400, Andres Rodriguez wrote:
> > Hi,
> >
> > A small piece of feedback from those of us watching remotely. It would
> > be nice to have a simple-to-access index for the long livestream videos.
> >
> > I think the XDC 2017 wiki page would be a good place for this
> > information. A brief example:
> >
> > Presentation Title | Presenter Name | Link with timestamp
> > -------------------|----------------|---------------------------
> > ...                | ...            | youtu.be/video-id?t=XhYmZs
> >
> > Or alternatively, a list of hyperlinks with the presentation title as
> > text that point to the correct timestamp in the video would also be
> > sufficient.
> >
> > Really enjoyed the talks, and would like them to be slightly easier to
> > access and share with others.
> >
> > Regards,
> > Andres
> >
> > On 2017-09-26 12:57 PM, Daniel Vetter wrote:
> > > Hi all,
> > >
> > > First, again, big thanks to Stéphane and Jennifer for organizing a
> > > great XDC.
> > >
> > > Like last year, we'd like to hear feedback on how this year's XDC
> > > went, both the good (and what you'd like to see more of) and the not
> > > so good. Talk selection, organization, location, scheduling of talks,
> > > anything really.
> > >
> > > Here's a few things we took into account from Helsinki and tried to
> > > apply:
> > > - More breaks for more hallway track.
> > > - Try to schedule the hot topics on the first day (did we pick the
> > >   right ones?) for better hallway track.
> > > - More lightning talk time to give all the late/rejected submissions
> > >   some place to give a quick showcase.
> > > Things that didn't work out perfectly this year and that we'll try
> > > to get better at next year:
> > > - Lots of people missed the submission deadline and their talks were
> > >   rejected only because of that. We'll do better PR by sending out a
> > >   pile of reminders.
> > > - Comparatively few people asked for travel assistance. No idea
> > >   whether this was a year with more budget around, or whether this
> > >   isn't all that well known and we need to do more PR, maybe in the
> > >   call for papers.
> > >
> > > But that's just the stuff we've gathered already; we'd like to hear
> > > more feedback. Just reply to this mail, or send a mail to
> > > bo...@foundation.x.org if you don't want the entire world to read it.
> > > And if you want to send pseudonymous feedback, drop a pastebin onto
> > > the #xf-bod channel on the OFTC IRC server.
> > >
> > > Thanks, Daniel