On Wed, 2026-01-07 at 21:15 +0000, Lorenzo Stoakes wrote:
> On Wed, Jan 07, 2026 at 11:18:52AM -0800, Dave Hansen wrote:
> > On 1/7/26 10:12, Lorenzo Stoakes wrote:
> > ...
> > > I know Linus had the cute interpretation of it 'just being
> > > another tool' but never before have people been able to do this.
> >
> > I respect your position here. But I'm not sure how to reconcile:
> >
> > LLMs are just another tool
> > and
> > LLMs are not just another tool
> >
> > :)
>
> Well I'm not asking you to reconcile that, I'm providing my point of
> view which disagrees with the first position and makes a case for the
> second. Isn't review about feedback both positive and negative?
>
> Obviously if this was intended to simply inform the community of the
> committee's decision then apologies for misinterpreting it.
>
> I would simply argue that LLMs are not another tool on the basis of
> the drastic negative impact its had in very many areas, for which you
> need only take a cursory glance at the world to observe.
>
> Thinking LLMs are 'just another tool' is to say effectively that the
> kernel is immune from this. Which seems to me a silly position.
All tools are double-edged, and the better a tool is, the more
problematic its harmful uses become; but people often use them anyway
because of the beneficial uses. You don't, for instance, classify
chainsaws as not just another tool because they can be used to
deforest the Amazon.

All the document is saying is that we start from the place of treating
AI like any other tool and, like any other tool, if it proves to cause
way more problems than it solves, we can then move on to other things.
There are other tools we've tried and abandoned (like compiling the
kernel with C++), so this really isn't any different.

Regards,

James

