Re: time for amber2 branch?

2024-06-20 Thread Faith Ekstrand
On Thu, Jun 20, 2024 at 12:30 PM Adam Jackson  wrote:

> On Thu, Jun 20, 2024 at 10:20 AM Erik Faye-Lund <
> erik.faye-l...@collabora.com> wrote:
>
>> When we did Amber, we had a lot better reason to do so than "these
>> drivers cause pain when doing big tree updates". The maintenance burden
>> imposed by the drivers proposed for removal here is much, much smaller,
>> and doesn't really let us massively clean up things in a way comparable
>> to last time.
>>
>
> Yeah, amber was primarily about mothballing src/mesa/drivers/ in my
> opinion. It happened to correlate well with the GL 1.x vs 2.0 generational
> divide, but that was largely because we had slowly migrated all the GL2
> hardware to gallium drivers (iris and crocus and i915g and r300g were a lot
> of work, let's do remember), so the remaining "classic" drivers were only
> the best choice for fixed function hardware. Nice bright line in the sand,
> there, between the register bank of an overgrown SGI Indy as your state
> vector, and the threat of a Turing-complete shader engine.
>
> I have a harder time finding that line in the sand today. ES3? Compute
> shaders? Vulkan 1.0? I'm not sure any of these so fundamentally change the
> device programming model, or the baseline API assumptions, that we would
> benefit by requiring it of the hardware. I'm happy to be wrong about that!
> We're using compute shaders internally in more and more ways, for example,
> maybe being able to assume them would be a win. If there's a better design
> to be had past some feature level, then by all means let's have that
> discussion.
>
> But if the issue is we don't like how many drivers there are then I am
> sorry but at some level that is simply the dimension of the problem. Mesa's
> breadth of hardware coverage is at the core of its success. You'd be
> hard-pressed to find a GLES1 part anymore, but there are brand-new systems
> with Mali-400 MP GPUs, and there's no reason the world's finest GLES2
> implementation should stop working there.
>

Same. I kinda think the next major cut will be when we go Vulkan-only and
leave Zink and a bunch of legacy drivers in a GL branch. That's probably
not going to happen for another 5 years at least.

~Faith


Re: Queries regarding the Khronos specification license

2024-02-06 Thread Faith Ekstrand
What do you mean "re-implementing parts of the Vulkan APIs"?

I have a feeling someone is confused about licensing.

Also, I'm NOT a lawyer. What follows does not constitute legal advice and
anyone who's actually concerned about the legal implications should consult
an actual lawyer who is familiar with international copyright law.

Okay, with that said, I don't think anyone in the OpenBSD world has
anything to worry about. The only thing being distributed under Apache 2.0
are the Vulkan headers, not any of the driver code implementing Vulkan. If
someone is that worried about the license of a handful of headers, I think
they should just go through the OpenBSD approval process and get them
approved.  I guess you'll have to also get the loader approved but I doubt
there are any real problems there, either.

As for the Khronos copyright: yes, we should be good there. Most Vulkan drivers
in Mesa are conformant implementations and therefore it's fine to use the
Vulkan trademark in association with them.

If you're talking about making a different spec that looks like Vulkan and
uses different names for everything just to avoid the Apache 2.0 license,
don't do that. That will get you in trouble with Khronos and I will NAK any
attempt to do that in Mesa.

Again, I'm not a lawyer.

~Faith

On Tue, Feb 6, 2024 at 11:52 AM Paianis  wrote:

> Hi all,
>
> For context, I'm interested in re-implementing parts of the Vulkan APIs
> (at least the parts necessary to develop a Wayland compositor), as the
> OpenBSD project won't accept Apache License 2.0 for code except when it
> is deemed unavoidable (LLVM), and the Khronos APIs use this license.
>
> The Khronos specifications for Vulkan and later OpenGL versions use this
> license:
>
>
> https://www.khronos.org/legal/Khronos_Specification_Copyright_License_Header
>
> Have Mesa3D developers had to use the specifications under
> registry.khronos.org for other Khronos standards, and if so, has written
> permission to use them and Khronos trademarks ever been sought?
>
> If I've understood correctly, Mesa3D currently has Vulkan drivers for
> some GPUs in various stages of progress, but not a re-implementation of
> the Vulkan APIs. Would it be an acceptable home for this (under the MIT
> License), or should such a project be separate for now?
>
> Thanks
>
> Paianis
>


Re: Future direction of the Mesa Vulkan runtime (or "should we build a new gallium?")

2024-01-25 Thread Faith Ekstrand
On Thu, Jan 25, 2024 at 5:06 PM Gert Wollny  wrote:

> Hi,
>
> thanks, Faith, for bringing this discussion up.
>
> I think with Venus we are more interested in using utility libraries on
> an as-needed basis. Here, most of the time the Vulkan commands are just
> serialized according to the Venus protocol and this is then passed to
> the host because usually it wouldn't make sense to let the guest
> translate the Vulkan commands to something different (e.g. something
> that is commonly used in a runtime), only to then re-encode this in the
> Venus driver to satisfy the host Vulkan driver - just think SPIR-V:
> why would we want to have NIR only to then re-encode it to SPIR-V?
>

I think Venus is an entirely different class of driver. It's not even
really a driver. It's more of a Vulkan layer that has a VM boundary in the
middle. It's attempting to be as thin of a Vulkan -> Vulkan pass-through as
possible. As such, it doesn't use most of the shared stuff anyway. It uses
the dispatch framework and that's really about it. As long as that code
stays in-tree roughly as-is, I think Venus will be fine.


> I'd also like to give a +1 to the points raised by Triang3l and others
> about the potential of breaking other drivers. I've certainly been bitten
> by this on the Gallium side with r600, and unfortunately I can't set up
> a CI in my home office (and after watching the XDC talk about setting
> up your own CI I was even more discouraged to do this).
>

That's a risk with all common code. You could raise the same risk with NIR
or basically anything else. Sure, if someone wants to go write all the code
themselves in an attempt to avoid bugs, I guess they're free to do that. I
don't really see that as a compelling argument, though. Also, while you
experienced gallium breakage with r600, having worked on i965, I can
guarantee you that that's still better than maintaining a classic
(non-gallium) GL driver. 

At the moment, given the responses I've seen and the scope of the project
as things are starting to congeal in my head, I don't think this will be an
incremental thing where drivers get converted as we go anymore. If we
really do want to flip the flow, I think it'll be invasive enough that
we'll build gallium2 and then people can port to it if they want. I may
port a driver or two myself but those will be things I own or am at least
willing to deal with the bug fallout for. Others can port or not at will.

This is what I meant when I said elsewhere that we're probably heading
towards a gallium/classic situation again. I don't expect anyone to port
until the benefits outweigh the costs but I do expect the benefits will be
there eventually.

~Faith


Re: Future direction of the Mesa Vulkan runtime (or "should we build a new gallium?")

2024-01-25 Thread Faith Ekstrand
On Thu, Jan 25, 2024 at 8:57 AM Jose Fonseca 
wrote:

> > So far, we've been trying to build those components in terms of the
> Vulkan API itself with calls jumping back into the dispatch table to try
> and get inside the driver. This is working but it's getting more and more
> fragile the more tools we add to that box. A lot of what I want to do with
> gallium2 or whatever we're calling it is to fix our layering problems so
> that calls go in one direction and we can untangle the jumble. I'm still
> not sure what I want that to look like but I think I want it to look a lot
> like Vulkan, just with a handier interface.
>
> That resonates with my experience.  For example, Gallium's draw module does
> some of this too -- it provides its own internal interfaces for drivers,
> but it also loops back into Gallium top interface to set FS and rasterizer
> state -- and that has *always* been a source of grief.  Having control
> flow proceeding through layers in one direction only seems an important
> principle to observe.  It's fine if the lower interface is the same
> interface (e.g., Gallium to Gallium, or Vulkan to Vulkan as you allude),
> but they shouldn't be the same exact entry-points/modules (ie, no
> reentrancy/recursion.)
>
> It's also worth considering that Vulkan extensibility could come in handy
> too in what you want to achieve.  For example, Mesa Vulkan drivers could
> have their own VK_MESA_internal_ extensions that could be used by the
> shared Vulkan code to do lower level things.
>

We already do that for a handful of things. The fact that Vulkan doesn't
ever check the stuff in the pNext chain is really useful for that. 
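
For anyone who hasn't seen the pattern, here's a rough sketch of what it
looks like (the struct and sType value are made up for illustration; this
isn't an actual Mesa extension):

#include <vulkan/vulkan.h>

/* Hypothetical internal create-info struct.  The sType just has to be a
 * value that will never collide with a real one. */
#define VK_STRUCTURE_TYPE_SAMPLER_INTERNAL_INFO_MESA \
   ((VkStructureType)1000999000)

typedef struct VkSamplerInternalInfoMESA {
   VkStructureType sType;
   const void     *pNext;
   uint32_t        internalFlags;
} VkSamplerInternalInfoMESA;

/* Shared code chains it in front of the app-provided pNext chain before
 * calling into the driver... */
void
add_internal_info(VkSamplerCreateInfo *info,
                  VkSamplerInternalInfoMESA *internal)
{
   internal->sType = VK_STRUCTURE_TYPE_SAMPLER_INTERNAL_INFO_MESA;
   internal->pNext = info->pNext;
   internal->internalFlags = 1;
   info->pNext = internal;
}

/* ...and the driver walks the chain looking for it.  Unknown sTypes are
 * simply ignored by everything else, which is what makes this work. */
const VkSamplerInternalInfoMESA *
find_internal_info(const VkSamplerCreateInfo *info)
{
   for (const VkBaseInStructure *s = info->pNext; s; s = s->pNext) {
      if (s->sType == VK_STRUCTURE_TYPE_SAMPLER_INTERNAL_INFO_MESA)
         return (const VkSamplerInternalInfoMESA *)s;
   }
   return NULL;
}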

~Faith


> Jose
>
>
> On Wed, Jan 24, 2024 at 3:26 PM Faith Ekstrand 
> wrote:
>
>> Jose,
>>
>> Thanks for your thoughts!
>>
>> On Wed, Jan 24, 2024 at 4:30 AM Jose Fonseca 
>> wrote:
>> >
>> > I don't know much about the current Vulkan driver internals to have or
>> provide an informed opinion on the path forward, but I'd like to share my
>> backwards looking perspective.
>> >
>> > Looking back, Gallium was two things effectively:
>> > (1) an abstraction layer, that's watertight (as in upper layers
>> shouldn't reach through to lower layers)
>> > (2) an ecosystem of reusable components (draw, util, tgsi, etc.)
>> >
>> > (1) was of course important -- and the discipline it imposed is what
>> enabled great simplifications -- but it also became a straight-jacket,
>> as GPUs didn't stand still, and sooner or later the
>> see-every-hardware-as-the-same lenses stop reflecting reality.
>> >
>> > If I had to pick one, I'd say that (2) is far more useful and
>> practical. Take components like gallium's draw and other util modules. A
>> driver can choose to use them or not.  One could fork them within Mesa
>> source tree, and only the drivers that opt-in into the fork would need to
>> be tested/adapted/etc
>> >
>> > On the flip side, Vulkan API is already a pretty low level HW
>> abstraction.  It's also very flexible and extensible, so it's hard to
>> provide a watertight abstraction underneath it without either taking the
>> lowest common denominator, or having lots of optional bits of functionality
>> governed by a myriad of caps like you alluded to.
>>
>> There is a third thing that isn't really recognized in your description:
>>
>> (3) A common "language" to talk about GPUs and data structures that
>> represent that language
>>
>> This is precisely what the Vulkan runtime today doesn't have. Classic
>> meta sucked because we were trying to implement GL in GL. u_blitter,
>> on the other hand, is pretty fantastic because Gallium provides a much
>> more sane interface to write those common components in terms of.
>>
>> So far, we've been trying to build those components in terms of the
>> Vulkan API itself with calls jumping back into the dispatch table to
>> try and get inside the driver. This is working but it's getting more
>> and more fragile the more tools we add to that box. A lot of what I
>> want to do with gallium2 or whatever we're calling it is to fix our
>> layering problems so that calls go in one direction and we can
>> untangle the jumble. I'm still not sure what I want that to look like
>> but I think I want it to look a lot like Vulkan, just with a handier
>> interface.
>>
>> ~Faith
>>
>> > Not sure how useful this is in practice to you, but the lesson from my
>> POV is that opt-in reusable and shared libraries are always time well spent
>> as they can bend and adapt with the times, whereas no opt-out watertight
>> abstractions inherently have a shelf life.

Re: Future direction of the Mesa Vulkan runtime (or "should we build a new gallium?")

2024-01-24 Thread Faith Ekstrand
On Wed, Jan 24, 2024 at 12:26 PM Zack Rusin  wrote:
>
> On Wed, Jan 24, 2024 at 10:27 AM Faith Ekstrand 
wrote:
> >
> > Jose,
> >
> > Thanks for your thoughts!
> >
> > On Wed, Jan 24, 2024 at 4:30 AM Jose Fonseca 
wrote:
> > >
> > > I don't know much about the current Vulkan driver internals to have
or provide an informed opinion on the path forward, but I'd like to share
my backwards looking perspective.
> > >
> > > Looking back, Gallium was two things effectively:
> > > (1) an abstraction layer, that's watertight (as in upper layers
shouldn't reach through to lower layers)
> > > (2) an ecosystem of reusable components (draw, util, tgsi, etc.)
> > >
> > > (1) was of course important -- and the discipline it imposed is what
enabled great simplifications -- but it also became a straight-jacket,
as GPUs didn't stand still, and sooner or later the
see-every-hardware-as-the-same lenses stop reflecting reality.
> > >
> > > If I had to pick one, I'd say that (2) is far more useful and
practical. Take components like gallium's draw and other util modules. A
driver can choose to use them or not.  One could fork them within Mesa
source tree, and only the drivers that opt-in into the fork would need to
be tested/adapted/etc
> > >
> > > On the flip side, Vulkan API is already a pretty low level HW
abstraction.  It's also very flexible and extensible, so it's hard to
provide a watertight abstraction underneath it without either taking the
lowest common denominator, or having lots of optional bits of functionality
governed by a myriad of caps like you alluded to.
> >
> > There is a third thing that isn't really recognized in your description:
> >
> > (3) A common "language" to talk about GPUs and data structures that
> > represent that language
> >
> > This is precisely what the Vulkan runtime today doesn't have. Classic
> > meta sucked because we were trying to implement GL in GL. u_blitter,
> > on the other hand, is pretty fantastic because Gallium provides a much
> > more sane interface to write those common components in terms of.
> >
> > So far, we've been trying to build those components in terms of the
> > Vulkan API itself with calls jumping back into the dispatch table to
> > try and get inside the driver. This is working but it's getting more
> > and more fragile the more tools we add to that box. A lot of what I
> > want to do with gallium2 or whatever we're calling it is to fix our
> > layering problems so that calls go in one direction and we can
> > untangle the jumble. I'm still not sure what I want that to look like
> > but I think I want it to look a lot like Vulkan, just with a handier
> > interface.
>
> Yes, that makes sense. When we were writing the initial components for
> gallium (draw and cso) I really liked the general concept and thought
> about trying to reuse them in the old, non-gallium Mesa drivers but
> the obstacle was that there was no common interface to lay them on.
> Using GL to implement GL was silly and using Vulkan to implement
> Vulkan is not much better.
>
> Having said that my general thoughts on GPU abstractions largely match
> what Jose has said. To me it's a question of whether a clean
> abstraction:
> - on top of which you can build an entire GPU driver toolkit (i.e. all
> the components and helpers)
> - that makes it trivial to figure out what needs to be done to write a
> new driver and makes bootstrapping a new driver a lot simpler
> - that makes it easier to reason about cross hardware concepts (it's a
> lot easier to understand the entirety of the ecosystem if every driver
> is not doing something unique to implement similar functionality)
> is worth more than almost exponentially increasing the difficulty of:
> - advancing the ecosystem (i.e. it might be easier to understand but
> it's way harder to create clean abstractions across such different
> hardware).
> - driver maintenance (i.e. there will be a constant stream of
> regressions hitting your driver as a result of other people working on
> their drivers)
> - general development (i.e. bug fixes/new features being held back
> because they break some other driver)
>
> Some of those can certainly be tilted one way or the other, e.g. the
> driver maintenance con can be somewhat eased by requiring that every
> driver working on top of the new abstraction has to have a stable
> Mesa-CI setup (be it lava or ci-tron, or whatever) but all of those
> things need to be reasoned about. In my experience abstractions never
> have uniform support because some people will value cons of them more
> than they value the pros. So the entire process r

Re: Future direction of the Mesa Vulkan runtime (or "should we build a new gallium?")

2024-01-24 Thread Faith Ekstrand
Jose,

Thanks for your thoughts!

On Wed, Jan 24, 2024 at 4:30 AM Jose Fonseca  wrote:
>
> I don't know much about the current Vulkan driver internals to have or 
> provide an informed opinion on the path forward, but I'd like to share my 
> backwards looking perspective.
>
> Looking back, Gallium was two things effectively:
> (1) an abstraction layer, that's watertight (as in upper layers shouldn't 
> reach through to lower layers)
> (2) an ecosystem of reusable components (draw, util, tgsi, etc.)
>
> (1) was of course important -- and the discipline it imposed is what enabled 
> great simplifications -- but it also became a straight-jacket, as GPUs 
> didn't stand still, and sooner or later the see-every-hardware-as-the-same 
> lenses stop reflecting reality.
>
> If I had to pick one, I'd say that (2) is far more useful and practical.
> Take components like gallium's draw and other util modules. A driver can 
> choose to use them or not.  One could fork them within Mesa source tree, and 
> only the drivers that opt-in into the fork would need to be tested/adapted/etc
>
> On the flip side, Vulkan API is already a pretty low level HW abstraction.  
> It's also very flexible and extensible, so it's hard to provide a watertight 
> abstraction underneath it without either taking the lowest common 
> denominator, or having lots of optional bits of functionality governed by a 
> myriad of caps like you alluded to.

There is a third thing that isn't really recognized in your description:

(3) A common "language" to talk about GPUs and data structures that
represent that language

This is precisely what the Vulkan runtime today doesn't have. Classic
meta sucked because we were trying to implement GL in GL. u_blitter,
on the other hand, is pretty fantastic because Gallium provides a much
more sane interface to write those common components in terms of.
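
To be concrete about what I mean by a common language, it's stuff like
this (simplified sketch; the real structs in vk_graphics_state carry much
more): the runtime collects pipeline and dynamic state into plain structs
that drivers and common components can both consume, instead of everyone
re-parsing the Vulkan create-info structs themselves.

#include <stdbool.h>
#include <vulkan/vulkan.h>

/* A shared, already-resolved representation of rasterization state. */
struct rasterization_state_sketch {
   bool            depth_clamp_enable;
   VkPolygonMode   polygon_mode;
   VkCullModeFlags cull_mode;
   VkFrontFace     front_face;
   float           line_width;
};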

So far, we've been trying to build those components in terms of the
Vulkan API itself with calls jumping back into the dispatch table to
try and get inside the driver. This is working but it's getting more
and more fragile the more tools we add to that box. A lot of what I
want to do with gallium2 or whatever we're calling it is to fix our
layering problems so that calls go in one direction and we can
untangle the jumble. I'm still not sure what I want that to look like
but I think I want it to look a lot like Vulkan, just with a handier
interface.

~Faith

> Not sure how useful this is in practice to you, but the lesson from my POV is 
> that opt-in reusable and shared libraries are always time well spent as they 
> can bend and adapt with the times, whereas no opt-out watertight abstractions 
> inherently have a shelf life.
>
> Jose
>
> On Fri, Jan 19, 2024 at 5:30 PM Faith Ekstrand  wrote:
>>
>> Yeah, this one's gonna hit Phoronix...
>>
>> When we started writing Vulkan drivers back in the day, there was this
>> notion that Vulkan was a low-level API that directly targets hardware.
>> Vulkan drivers were these super thin things that just blasted packets
>> straight into the hardware. What little code was common was small and
>> pretty easy to just copy+paste around. It was a nice thought...
>>
>> What's happened in the intervening 8 years is that Vulkan has grown. A lot.
>>
>> We already have several places where we're doing significant layering.
>> It started with sharing the WSI code and some Python for generating
>> dispatch tables. Later we added common synchronization code and a few
>> vkFoo2 wrappers. Then render passes and...
>>
>> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27024
>>
>> That's been my project the last couple weeks: A common VkPipeline
>> implementation built on top of an ESO-like interface. The big
>> deviation this MR makes from prior art is that I make no attempt at
>> pretending it's a layered implementation. The vtable for shader
>> objects looks like ESO but takes its own path when it's useful to do
>> so. For instance, shader creation always consumes NIR and a handful of
>> lowering passes are run for you. It's no st_glsl_to_nir but it is a
>> bit opinionated. Also, a few of the bits that are missing from ESO
>> such as robustness have been added to the interface.
>>
>> In my mind, this marks a pretty fundamental shift in how the Vulkan
>> runtime works. Previously, everything was
>> designed to be a toolbox where you can kind of pick and choose what
>> you want to use. Also, everything at least tried to act like a layer
>> where you still implemented Vulkan but you could leave out bits like
>> render passes if you implemented the new thing and were okay with the
>> la

Re: Future direction of the Mesa Vulkan runtime (or "should we build a new gallium?")

2024-01-22 Thread Faith Ekstrand
On Mon, Jan 22, 2024 at 7:20 AM Iago Toral  wrote:
>
> Hi Faith,
>
> thanks for starting the discussion, we had a bit of an internal chat
> here at Igalia to see where we all stand on this and I am sharing some
> initial thoughts/questions below:
>
> El vie, 19-01-2024 a las 11:01 -0600, Faith Ekstrand escribió:
>
> > Thoughts?
>
> We think it is fine if the Vulkan runtime implements its own internal
> API that doesn't match Vulkan's. If we are going down this path, however,
> we really want to make sure we have good documentation for it so it is
> clear how all that works without having to figure things out by looking
> at the code.

That's a reasonable request. We probably won't re-type the Vulkan spec
in comments but having the differences documented is fair enough. I'm
thinking something like the level of documentation in vk_graphics_state.

> For existing drivers we think it is a bit less clear whether the effort
> required to port is going to be worth it. If you end up having to throw
> away a lot of what you currently have that already works and in some
> cases might even be optimal for your platform, it may be a hard ask.
> What are your thoughts on this? How much adoption would you be looking
> for from existing drivers?

That's a good question. One of the problems I'm already seeing is that
we have a bunch of common stuff which is in use in some drivers and
not in others and I generally don't know why. If there's something
problematic about it on some vendor's hardware, we should fix that. If
it's just that driver teams don't have the time for refactors, that's
a different issue. Unfortunately, I usually don't know besides one-off
comments from a developer here and there.

And, yeah, I know it can be a lot of work.  Hopefully the work pays
off in the long run but short-term it's often hard to justify. :-/

> As new features are added to the runtime, we understand some of them
> could have dependencies on other features, building on top of them,
> requiring drivers to adopt more of the common vulkan runtime to
> continue benefiting from additional features, is that how you see this
> or would you still expect many runtime features to still be independent
> from each other to facilitate driver opt-in on a need-by-need basis?

At a feature level, yes. However, one of the big things I'm struggling
with right now is layering issues where we really need to flip things
around from the driver calling into the runtime to the runtime calling
into the driver. One of the things I would LOVE to put in the runtime
is YCbCr emulation for drivers that don't natively have multi-plane
image support. However, that just isn't possible today thanks to the
way things are layered. In particular, we would need the runtime to be
able to make one `VkImage` contain multiple driver images and that's
just not possible as long as the driver is controlling image creation.
We also don't have enough visibility into descriptor sets. People have
also talked about trying to do a common ray-tracing implementation.
Unfortunately, I just don't see that happening with the current layer
model.
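
Just to make the YCbCr example concrete (this is only the data-structure
shape, not a design; names are made up), the thing the runtime would need
to own looks roughly like this, and it's exactly what we can't build today
because the driver, not the runtime, owns VkImage creation:

#include <vulkan/vulkan.h>

#define EMU_MAX_PLANES 3

/* One emulated multi-plane image as the app sees it, backed by several
 * single-plane images created by the driver (e.g. R8 + R8G8 for NV12). */
struct emu_ycbcr_image {
   VkFormat format;       /* e.g. VK_FORMAT_G8_B8R8_2PLANE_420_UNORM */
   uint32_t plane_count;
   VkImage  planes[EMU_MAX_PLANES];
   VkFormat plane_formats[EMU_MAX_PLANES];
};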

Unfortunately, I don't have a lot of examples of what that would look
like without having written the code to do it. One thing I'm currently
thinking about is switching more objects to a kernel-style vtable model like
I did with `vk_pipeline` and `vk_shader` in the posted MR. This puts
the runtime in control of the object's life cycle and more easily
allows for multiple implementations of an object type. Like right now
you can use the common implementation for graphics and compute and
roll your own vk_pipeline for ray-tracing. I realize that doesn't
really apply to Raspberry Pi but it's an example of what flipping the
layering around looks like.
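
In sketch form, the vtable model is just this (made-up names; not the
actual vk_pipeline/vk_shader interface from the MR): the runtime owns the
object and its life cycle, and a different implementation is just a
different ops table plugged into the same base object.

#include <vulkan/vulkan.h>

struct pipeline_obj;

struct pipeline_ops {
   void (*bind)(struct pipeline_obj *pipeline, VkCommandBuffer cmd);
   void (*destroy)(struct pipeline_obj *pipeline,
                   const VkAllocationCallbacks *alloc);
};

struct pipeline_obj {
   /* Common graphics/compute ops, or a driver's own ops, e.g. for its
    * ray-tracing pipelines. */
   const struct pipeline_ops *ops;
   VkPipelineBindPoint bind_point;
};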

The other thing I've been realizing as I've been thinking about this
over the week-end is that, if this happens, we're likely heading
towards another gallium/classic split for a while. (Though hopefully
without the bad blood in the community that we had from gallium.) If
this plays out similarly to gallium/classic, a bunch of drivers will
remain classic, doing most things themselves and the new thing (which
really needs a name, BTW) will be driven by a small subset of drivers
and then other drivers get moved over as time allows. This isn't
necessarily a bad thing, it's just a recognition of how large-scale
changes tend to roll out within Mesa and the potential scope of a more
invasive runtime project.

Thinking of it this way would also give more freedom to the people
building the new thing to just build it without worrying about driver
porting and trying to do everything incrementally. If we do attempt
this, it needs to be done with a subset of drivers that is as
representative of the industry as possible so we don't screw anybody
over. I'm currently thinking NVK (1.3, all the features), AGX (all the
features but on shit hardware), an

Future direction of the Mesa Vulkan runtime (or "should we build a new gallium?")

2024-01-19 Thread Faith Ekstrand
Yeah, this one's gonna hit Phoronix...

When we started writing Vulkan drivers back in the day, there was this
notion that Vulkan was a low-level API that directly targets hardware.
Vulkan drivers were these super thin things that just blasted packets
straight into the hardware. What little code was common was small and
pretty easy to just copy+paste around. It was a nice thought...

What's happened in the intervening 8 years is that Vulkan has grown. A lot.

We already have several places where we're doing significant layering.
It started with sharing the WSI code and some Python for generating
dispatch tables. Later we added common synchronization code and a few
vkFoo2 wrappers. Then render passes and...

https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27024

That's been my project the last couple weeks: A common VkPipeline
implementation built on top of an ESO-like interface. The big
deviation this MR makes from prior art is that I make no attempt at
pretending it's a layered implementation. The vtable for shader
objects looks like ESO but takes its own path when it's useful to do
so. For instance, shader creation always consumes NIR and a handful of
lowering passes are run for you. It's no st_glsl_to_nir but it is a
bit opinionated. Also, a few of the bits that are missing from ESO
such as robustness have been added to the interface.
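
To give a feel for the shape of it (a simplified sketch with made-up
names, not the actual interface from the MR): the driver fills in
something like this and the runtime owns the Vulkan entrypoints around it.

#include <stdbool.h>
#include <vulkan/vulkan.h>

struct vk_device;   /* the runtime's device object */
struct nir_shader;  /* shader creation always consumes NIR */

/* Extra state that ESO doesn't carry, e.g. robustness, handed to the
 * driver alongside the already-lowered NIR. */
struct shader_create_state {
   bool robust_buffer_access;
};

struct shader_ops_sketch {
   VkResult (*compile)(struct vk_device *dev,
                       struct nir_shader *nir,
                       const struct shader_create_state *state,
                       void **shader_out);
   void (*destroy)(struct vk_device *dev, void *shader);
};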

In my mind, this marks a pretty fundamental shift in how the Vulkan
runtime works. Previously, everything was
designed to be a toolbox where you can kind of pick and choose what
you want to use. Also, everything at least tried to act like a layer
where you still implemented Vulkan but you could leave out bits like
render passes if you implemented the new thing and were okay with the
layer. With the ESO code, you implement something that isn't Vulkan
entrypoints and the actual entrypoints live in the runtime. This lets
us expand and adjust the interface as needed for our purposes as well
as sanitize certain things even in the modern API.

The result is that NVK is starting to feel like a gallium driver. 

So here's the question: do we like this? Do we want to push in this
direction? Should we start making more things work more this way? I'm
not looking for MRs just yet nor do I have more reworks directly
planned. I'm more looking for thoughts and opinions as to how the
various Vulkan driver teams feel about this. We'll leave the detailed
planning for the Mesa issue tracker.

It's worth noting that, even though I said we've tried to keep things
layerish, there are other parts of the runtime that look like this.
The synchronization code is a good example. The vk_sync interface is
pretty significantly different from the Vulkan objects it's used to
implement. That's worked out pretty well, IMO. With as complicated as
something like pipelines or synchronization are, trying to keep the
illusion of a layer just isn't practical.
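
Roughly, and heavily simplified just to show the flavor: the driver
describes its sync primitive through a small ops table and the runtime
builds VkFence and VkSemaphore (binary and timeline) on top of it.

#include <stdint.h>
#include <vulkan/vulkan.h>

struct sync_obj;   /* the driver's actual sync primitive lives behind this */

struct sync_ops_sketch {
   VkResult (*signal)(struct sync_obj *sync, uint64_t value);
   VkResult (*wait)(struct sync_obj *sync, uint64_t value,
                    uint64_t timeout_ns);
   VkResult (*reset)(struct sync_obj *sync);
};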

So, do we like this? Should we be pushing more towards drivers being a
backend of the runtime instead of a user of it?

Now, before anyone asks, no, I don't really want to build a multi-API
abstraction with a Vulkan state tracker. If we were doing this 5 years
ago and Zink didn't already exist, one might be able to make an
argument for pushing in that direction. However, that would add a huge
amount of weight to the project and make it even harder to develop the
runtime than it already is and for little benefit at this point.

Here's a few other constraints on what I'm thinking:

1. I want it to still be possible for drivers to implement an
extension without piles of runtime plumbing or even bypass the runtime
on occasion as needed.

2. I don't want to recreate the gallium cap disaster: drivers should
know exactly what they're advertising. We may want to have some
internal features or properties that are used by the runtime to make
decisions but they'll be in addition to the features and properties in
Vulkan.

3. We've got some meta stuff already but we probably want more.
However, I don't want to force meta on folks who don't want it.

The big thing here is that if we do this, I'm going to need help. I'm
happy to do a lot of the architectural work but drivers are going to
have to keep up with the changes and I can't take on the burden of
moving 8 different drivers forward. I can answer questions and maybe
help out a bit but the refactoring is going to be too much for one
person, even if that person is me.

Thoughts?

~Faith


Re: How to get developer access to mesa gitlab repo ?

2023-09-13 Thread Faith Ekstrand
This was fine. It's mostly that those requests often get lost in the flood 
of email traffic coming out of GitLab. Sending an email or pinging on IRC 
works. You've got access now.


~Faith



On September 13, 2023 22:05:12 Feng Jiang  wrote:

Dear All!

I didn't find any tips or instructions in https://docs.mesa3d.org/. By 
browsing the site, it looks like you need to submit an issue called 'Mesa 
Developer Access' and then attach a shortlog.


So I tried submitting an issue 
(https://gitlab.freedesktop.org/mesa/mesa/-/issues/9094), but it didn't get 
approved. Is there a strict number of shortlogs and votes required?


Any suggestions are greatly appreciated, and I apologize if this is intrusive!