Re: [PATCH 00/49] DRM driver for Hikey 970

2020-08-24 Thread Dave Airlie
On Thu, 20 Aug 2020 at 20:02, Laurent Pinchart wrote:
>
> Hi Mauro,
>
> On Thu, Aug 20, 2020 at 09:03:26AM +0200, Mauro Carvalho Chehab wrote:
> > On Wed, 19 Aug 2020 12:52:06 -0700, John Stultz wrote:
> > > On Wed, Aug 19, 2020 at 8:31 AM Laurent Pinchart wrote:
> > > > On Wed, Aug 19, 2020 at 05:21:20PM +0200, Sam Ravnborg wrote:
> > > > > On Wed, Aug 19, 2020 at 01:45:28PM +0200, Mauro Carvalho Chehab wrote:
> > > > > > This patch series ports the out-of-tree driver for Hikey 970 (which
> > > > > > should also support Hikey 960) from the official 96boards tree:
> > > > > >
> > > > > >https://github.com/96boards-hikey/linux/tree/hikey970-v4.9
> > > > > >
> > > > > > Based on its history, this driver seems to have been originally written
> > > > > > for Kernel 4.4 and later ported to Kernel 4.9. The original driver
> > > > > > depended on ION (from Kernel 4.4) and had its own implementation
> > > > > > of the fbdev API.
> > > > > >
> > > > > > As I need to preserve the original history (which has patches from
> > > > > > both HiSilicon and Linaro), I'm starting from the original
> > > > > > patch applied there. The remaining patches are incremental
> > > > > > and port this driver to work with the upstream kernel.
> > > > > >
> > > ...
> > > > > > - For legal reasons, I need to preserve the authorship of
> > > > > >   each person responsible for each patch. So, I need to start from
> > > > > >   the original patch from Kernel 4.4;
> > > ...
> > > > > I do acknowledge you need to preserve history and all -
> > > > > but this patchset is not easy to review.
> > > >
> > > > Why do we need to preserve history? Adding relevant Signed-off-by and
> > > > Co-developed-by should be enough, shouldn't it? Having a public branch
> > > > that contains the history is useful if anyone is interested, but I don't
> > > > think it's required in mainline.
> > >
> > > Yeah, I concur with Laurent here. I'm not sure what legal reasoning you
> > > have on this, but preserving the "absolute" history here is actively
> > > detrimental to review and understanding of the patch set.
> > >
> > > Preserving Authorship, Signed-off-by lines and adding Co-developed-by
> > > lines should be sufficient to provide both attribution credit and DCO
> > > history.
> >
> > I'm not convinced that, from a legal standpoint, folding things would
> > be enough. See, there are at least 3 legal systems involved here
> > among the different patch authors:
> >
> >   - civil law;
> >   - common law;
> >   - customary law + common law.
> >
> > Merging stuff from different legal systems can be problematic,
> > and trying to discuss this with experienced IP lawyers would
> > surely take a lot of time and effort. I also bet that different
> > lawyers would have different opinions, because laws are subject to
> > interpretation. On that matter, I'm not aware of any court rulings
> > with regard to folded patches. So, it sounds to me that folding
> > patches is something that has yet to be tested in courts around
> > the globe.
> >
> > At least in the US legal system, it seems that the country of
> > origin of a patch is relevant, as they have a concept of
> > "national technology" that can be subject to export regulations.
> >
> > From my side, I really prefer to play it safe and stay out of any such
> > legal discussions.
>
> Let's be serious for a moment. If you think there are legal issues in
> taking GPL-v2.0-only patches and squashing them while retaining
> authorship information through tags, the Linux kernel is *full* of that.
> You also routinely modify patches that you commit to the media subsystem
> to fix "small issues".
>
> The country of origin argument makes no sense either; the kernel code
> base is full of code coming from pretty much every country on the planet.
>
> Keeping the patches separate makes this hard to review. Please squash
> them.

I'm inclined to agree with Laurent here.

Patches submitted as GPL-v2 with DCO lines and author names/companies
should be fine to squash and rearrange, as long as the DCO tags and
authorship are kept somewhere in the new patch that is applied, for
example with trailers along the lines of the sketch below.
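
A minimal sketch of what such a squashed patch could carry, following
the kernel's submitting-patches conventions (all names and addresses
here are hypothetical):

  From: Author One <author.one@example.com>
  Subject: [PATCH] drm: hisilicon: add DRM driver for Hikey 970

  <squashed changes and combined changelog>

  Co-developed-by: Author Two <author.two@example.com>
  Signed-off-by: Author Two <author.two@example.com>
  Signed-off-by: Author One <author.one@example.com>
  Signed-off-by: Submitter <submitter@example.com>

Each Co-developed-by is immediately followed by the corresponding
co-author's Signed-off-by, with the submitter's Signed-off-by last.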

Review is more important here.

Dave.


Re: Maintainers / Kernel Summit 2020 planning kick-off

2020-06-30 Thread Dave Airlie
On Sat, 16 May 2020 at 02:41, Theodore Y. Ts'o wrote:
>
> [ Feel free to forward this to other Linux kernel mailing lists as
>   appropriate -- Ted ]
>
> This year, the Maintainers and Kernel Summit will NOT be held in
> Halifax, August 25 -- 28th, as a result of the COVID-19 pandemic.
> Instead, we will be pursuing a virtual conference format for both the
> Maintainers and Kernel Summit, around the last week of August.
>
> As in previous years, the Maintainers Summit is invite-only, where the
> primary focus will be process issues around Linux Kernel Development.
> It will be limited to 30 invitees and a handful of sponsored
> attendees.

What timezone will the conferences be held in? It heavily impacts what
I can attend :-)

Dave.


Re: [Intel-wired-lan] [PATCH v2 1/1] e1000e: Undo e1000e_pm_freeze if __e1000_shutdown fails

2017-06-27 Thread Dave Airlie
On 20 June 2017 at 18:49, Daniel Vetter wrote:
> On Wed, Jun 07, 2017 at 01:07:33AM +, Brown, Aaron F wrote:
>> > From: Intel-wired-lan [mailto:intel-wired-lan-boun...@osuosl.org] On Behalf
>> > Of Jeff Kirsher
>> > Sent: Tuesday, June 6, 2017 1:46 PM
>> > To: David Miller; Nikula, Jani
>> > Cc: Ursulin, Tvrtko; daniel.vet...@ffwll.ch;
>> > intel-g...@lists.freedesktop.org; linux-ker...@vger.kernel.org;
>> > jani.nik...@linux.intel.com; ch...@chris-wilson.co.uk; Ertman, David M;
>> > intel-wired-...@lists.osuosl.org; dri-de...@lists.freedesktop.org;
>> > netdev@vger.kernel.org; airl...@gmail.com
>> > Subject: Re: [Intel-wired-lan] [PATCH v2 1/1] e1000e: Undo
>> > e1000e_pm_freeze if __e1000_shutdown fails
>> >
>> > On Fri, 2017-06-02 at 14:14 -0400, David Miller wrote:
>> > > From: Jani Nikula 
>> > > Date: Wed, 31 May 2017 18:50:43 +0300
>> > >
>> > > > From: Chris Wilson 
>> > > >
>> > > > An error during suspend (e1000e_pm_suspend),
>> > >
>> > >  ...
>> > > > led to complete failure:
>> > >
>> > >  ...
>> > > > The unwind failure stems from commit 2800209994f8 ("e1000e:
>> > > > Refactor PM flows"), but it may be a later patch that introduced
>> > > > the non-recoverable behaviour.
>> > > >
>> > > > Fixes: 2800209994f8 ("e1000e: Refactor PM flows")
>> > > > Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99847
>> > > > Cc: Tvrtko Ursulin 
>> > > > Cc: Jeff Kirsher 
>> > > > Cc: Dave Ertman 
>> > > > Cc: Bruce Allan 
>> > > > Cc: intel-wired-...@lists.osuosl.org
>> > > > Cc: netdev@vger.kernel.org
>> > > > Signed-off-by: Chris Wilson 
>> > > > [Jani: bikeshed repainted]
>> > > > Signed-off-by: Jani Nikula 
>> > >
>> > > Jeff, please make sure this gets submitted to me soon.
>> >
>> > Expect it later tonight, just finishing up testing.
>>
>> Tested-by: Aaron Brown 
>
> Hm, I seem to be blind, but I can't find it anywhere in -rc6. Does someone
> have the sha1 from Linus' git for this patch?

Guys, this is a pretty serious regression just left blowing in the
wind. Is anyone responsible for e1000e?

Dave.
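
For reference, the fix under discussion is the classic unwind pattern:
if __e1000_shutdown() fails, the earlier e1000e_pm_freeze() has to be
undone so the device is not left half-suspended. A sketch of that
shape, using the function names from the thread (an illustration, not
the exact upstream diff; e1000e_pm_thaw() as the undo step is my
assumption of the natural counterpart):

static int e1000e_pm_suspend(struct device *dev)
{
	struct pci_dev *pdev = to_pci_dev(dev);
	int rc;

	e1000e_pm_freeze(dev);

	rc = __e1000_shutdown(pdev, false);
	if (rc)
		/* shutdown failed: unwind the freeze before returning */
		e1000e_pm_thaw(dev);

	return rc;
}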


Re: linux-next network throughput performance regression

2015-11-08 Thread Dave Airlie
On 9 November 2015 at 13:23, David Miller wrote:
> From: Dexuan Cui 
> Date: Mon, 9 Nov 2015 03:11:35 +
>
>>> -Original Message-
>>> From: David Miller [mailto:da...@davemloft.net]
>>> Sent: Monday, November 9, 2015 10:53
>>> To: Dexuan Cui 
>>> Cc: eric.duma...@gmail.com; d...@cumulusnetworks.com; Simon Xiao;
>>> netdev@vger.kernel.org; Haiyang Zhang; linux-ker...@vger.kernel.org;
>>> de...@linuxdriverproject.org
>>> Subject: Re: linux-next network throughput performance regression
>>>
>>> From: Dexuan Cui 
>>> Date: Mon, 9 Nov 2015 02:39:24 +
>>>
>>> >> Throughput on a single TCP flow for a 40G NIC can be tricky to tune.
>>> > Why is a single TCP flow trickier than multiple TCP flows?
>>> > IMO it should be easier to analyze the issue with a single TCP flow?
>>>
>>> Because a single TCP flow can only use one of the many TX queues
>>> that such modern NICs have.
>>>
>>> The single TX queue becomes the bottleneck.
>>>
>>> Whereas if you have several TCP flows, all of them can use independent
>>> TX queues on the NIC in parallel to fill the link with traffic.
>>>
>>> That's why.
>>
>> Thanks, David!
>> I understand 1 TX queue is the bottleneck (however, in Simon's
>> test, TX=1 => 36.7 Gb/s and TX=8 => 37.7 Gb/s, so the TX=1 bottleneck
>> does not look that severe).
>> I'm just wondering how the bottleneck became much narrower with
>> recent linux-next in Simon's result (36.7 Gb/s vs. 18.2 Gb/s). IMO there
>> must be some latency somewhere.
>
> I think the whole thing here is that you misinterpreted what Eric said.
>
> He is not arguing that some regression did, or did not, happen.
>
> He instead was making the basic statement that, due to the lack of
> parallelism, the single-stream TCP case is harder to optimize for
> high-speed NICs.
>
> That is all.

We recently had a regression in a similar area that was tracked down
to link order.

Dave.
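
As an illustration of the multi-flow setup David describes, here is a
minimal sketch in C that opens several parallel TCP flows so the NIC's
flow hash can spread them across independent TX queues (host, port and
flow count are hypothetical; error handling is trimmed):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NFLOWS 8	/* cf. the TX=8 case in Simon's test */

static void *blast(void *arg)
{
	static char buf[64 * 1024];	/* zero-filled payload */
	struct sockaddr_in dst = {
		.sin_family = AF_INET,
		.sin_port = htons(5001),	/* hypothetical port */
	};
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	(void)arg;
	if (fd < 0)
		return NULL;
	inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* hypothetical host */
	if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) == 0)
		while (write(fd, buf, sizeof(buf)) > 0)
			;	/* each flow keeps one TX queue busy */
	close(fd);
	return NULL;
}

int main(void)
{
	pthread_t th[NFLOWS];
	int i;

	for (i = 0; i < NFLOWS; i++)
		pthread_create(&th[i], NULL, blast, NULL);
	for (i = 0; i < NFLOWS; i++)	/* flows run until the peer closes */
		pthread_join(th[i], NULL);
	return 0;
}

With NFLOWS set to 1, the same program exercises the single-queue case
the thread is about.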