"Can't use pipe_framebuffer_state as a hash key" Is this still relevant?
I thought we did this.
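
For reference, one way to key batches on the framebuffer state is to hash
the struct's raw bytes. Untested sketch below; _mesa_hash_data(),
util_framebuffer_state_equal() and _mesa_hash_table_create() are real Mesa
util helpers, but the fbo_* wrappers are made-up names:

#include "pipe/p_state.h"
#include "util/hash_table.h"
#include "util/u_framebuffer.h"

static uint32_t
fbo_hash(const void *key)
{
        /* Byte-hash the whole struct; only safe if keys are
         * zero-initialized so padding can't leak into the hash. */
        return _mesa_hash_data(key, sizeof(struct pipe_framebuffer_state));
}

static bool
fbo_equal(const void *a, const void *b)
{
        return util_framebuffer_state_equal(a, b);
}

/* Table mapping a pipe_framebuffer_state to its in-flight batch. */
struct hash_table *batches =
        _mesa_hash_table_create(NULL, fbo_hash, fbo_equal);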

You're right that the 32-batch bitset is something we can integrate in a
v2.

I guess that's it, never mind. Good stuff regardless :)

On Tue, Sep 17, 2019 at 12:08:34AM +0200, Boris Brezillon wrote:
> On Mon, 16 Sep 2019 16:29:05 -0400
> Alyssa Rosenzweig <aly...@rosenzweig.io> wrote:
> 
> > > As a drive-by comment, in case you didn't know, the "standard"
> > > solution for avoiding flushes when BOs are written by the CPU (e.g.
> > > uniform buffer updates), as documented in ARM's performance guide, is
> > > to add a copy-on-write mechanism, so that you get "shadow" BOs when
> > > the original BO is modified by the user. I believe this is implemented
> > > in freedreno; at least there was a talk about it at XDC a few years
> > > ago.
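
For anyone following along, the shadowing idea sketches out roughly like
this. Untested, and every name here (struct ctx, bo_busy(), bo_alloc(),
rebind_bo(), bo_unreference()) is a placeholder, not a real freedreno or
panfrost API:

#include <stdbool.h>
#include <string.h>

struct ctx;
struct bo { void *cpu; size_t size; };

bool bo_busy(struct bo *bo);            /* GPU still using this BO? */
struct bo *bo_alloc(struct ctx *ctx, size_t size);
void rebind_bo(struct ctx *ctx, struct bo *old, struct bo *replacement);
void bo_unreference(struct bo *bo);

static struct bo *
map_for_write(struct ctx *ctx, struct bo *bo, bool discard)
{
        if (!bo_busy(bo))
                return bo;      /* GPU is done: safe to write in place */

        /* The GPU still reads the old storage: shadow it instead of
         * stalling or flushing. */
        struct bo *shadow = bo_alloc(ctx, bo->size);

        if (!discard)
                memcpy(shadow->cpu, bo->cpu, bo->size);

        rebind_bo(ctx, bo, shadow);  /* resource now points at the shadow */
        bo_unreference(bo);          /* old BO dies once the GPU is done */
        return shadow;
}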
> > 
> > Yes, this is implemented in freedreno. BO shadowing will be the next
> > step once this pipelining code settles down. For now, this series is
> > about eliminating the strict flushes that currently occur between each
> > and every frame of each and every FBO.
> > 
> > Boris, references on the freedreno model (which is the Mesa gold
> > standard):
> > 
> > https://www.x.org/wiki/Events/XDC2016/Program/clark_ooo_rendering.pdf
> > https://bloggingthemonkey.blogspot.com/2016/07/dirty-tricks-for-moar-fps.html
> > 
> > The former presentation is definitely worth a read; evidently we've
> > already painted ourselves into some corners :p
> 
> I had a quick look at the presentation, and it looks pretty similar to
> what is being implemented in this series. The only difference I could
> spot is the limitation to 32 batches, which avoids the use of sets in
> the BO access tracking logic, and that's still something I can change
> (I'd prefer to do it in a follow-up patch, though).
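
Right, and for the list: if I understand the 32-batch trick correctly,
each BO carries a per-batch-slot bitmask, so dependency queries become
word operations instead of set lookups. A rough sketch (untested; struct
ctx and batch_flush() are hypothetical names):

#include <stdbool.h>
#include <stdint.h>
#include <strings.h>    /* ffs() */

struct ctx;
void batch_flush(struct ctx *ctx, int slot);

#define MAX_BATCHES 32

struct bo_access {
        uint32_t readers;   /* bit i set: batch slot i reads this BO  */
        uint32_t writers;   /* bit i set: batch slot i writes this BO */
};

/* Flush every batch that conflicts with the access about to be made:
 * a write conflicts with all users, a read only with writers. */
static void
flush_conflicts(struct ctx *ctx, struct bo_access *a, bool write)
{
        uint32_t mask = write ? (a->readers | a->writers) : a->writers;

        while (mask) {
                int slot = ffs(mask) - 1;   /* index of lowest set bit */
                batch_flush(ctx, slot);
                mask &= ~(1u << slot);
        }
}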
> 
> What specific aspects do you think we got wrong?