[PATCH v2] tpm: Fix typo in tpmrm class definition

2023-09-12 Thread Justin M. Forbes
Commit d2e8071bed0be ("tpm: make all 'class' structures const")
unfortunately introduced a typo in the class name for tpmrm.

Fixes: d2e8071bed0b ("tpm: make all 'class' structures const")
Signed-off-by: Justin M. Forbes 
---
 drivers/char/tpm/tpm-chip.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index 23f6f2eda84c..42b1062e33cd 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -33,7 +33,7 @@ const struct class tpm_class = {
.shutdown_pre = tpm_class_shutdown,
 };
 const struct class tpmrm_class = {
-   .name = "tmprm",
+   .name = "tpmrm",
 };
 dev_t tpm_devt;
 
-- 
2.41.0



[PATCH] tpm: Fix typo in tpmrm class definition

2023-09-11 Thread Justin M. Forbes
Commit d2e8071bed0be ("tpm: make all 'class' structures const")
unfortunately introduced a typo in the class name for tpmrm.

Fixes: d2e8071bed0b ("tpm: make all 'class' structures const")
Signed-off-by: Justin M. Forbes 
---
 drivers/char/tpm/tpm-chip.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index 23f6f2eda84c..42b1062e33cd 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -33,7 +33,7 @@ const struct class tpm_class = {
.shutdown_pre = tpm_class_shutdown,
 };
 const struct class tpmrm_class = {
-   .name = "tmprm",
+   .name = "tpmrm",
 };
 dev_t tpm_devt;

-- 
2.41.0



[PATCH] tools/kvm: fix top level makefile

2017-04-11 Thread Justin M. Forbes
The top level tools/Makefile includes kvm_stat as a target in help, but
the actual target is missing.

Signed-off-by: Justin M. Forbes 
---
 tools/Makefile | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/tools/Makefile b/tools/Makefile
index 00caacd..c8a90d0 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -86,10 +86,13 @@ tmon: FORCE
 freefall: FORCE
$(call descend,laptop/$@)

+kvm_stat: FORCE
+   $(call descend,kvm/$@)
+
 all: acpi cgroup cpupower gpio hv firewire lguest \
perf selftests turbostat usb \
virtio vm net x86_energy_perf_policy \
-   tmon freefall objtool
+   tmon freefall objtool kvm_stat

 acpi_install:
$(call descend,power/$(@:_install=),install)
-- 
2.9.3
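The patch above leans on the tools tree's `descend` helper, which lives in tools/scripts/Makefile.include. A simplified stand-in for the pattern (illustrative only; the real helper also handles O= output directories and quiet-build prefixes) looks like:

```make
# Minimal analogue of the descend helper: recurse into the named
# subdirectory and build the given target there.
descend = $(MAKE) -C $(1) $(2)

# With this helper, the new target forwards into tools/kvm:
kvm_stat: FORCE
	$(call descend,kvm/$@)

FORCE:
```

Because `$@` expands to the target name, `$(call descend,kvm/$@)` recurses into `kvm/kvm_stat`, matching the directory layout the help text already advertises.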



Re: [PATCH 0/2] block: loop: fix stacked loop and performance regression

2015-05-05 Thread Justin M. Forbes
On Tue, 2015-05-05 at 19:49 +0800, Ming Lei wrote:
> Hi,
> 
> The 1st patch converts to per-device workqueue because loop devices
> can be stacked.
> 
> The 2nd patch decreases max active works as 16, so that fedora 22's
> boot performance regression can be fixed.
> 
>  drivers/block/loop.c | 30 ++
>  drivers/block/loop.h |  1 +
>  2 files changed, 15 insertions(+), 16 deletions(-)
> 
> 
> Thanks,
> Ming Lei
> 

Tested with Fedora 22 ISOs, these still solve the problem for us.

Tested-by: Justin M. Forbes 

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: loop block-mq conversion scalability issues

2015-04-27 Thread Justin M. Forbes
On Sun, 2015-04-26 at 23:27 +0800, Ming Lei wrote:
> Hi Justin,
> 
> On Fri, 24 Apr 2015 16:46:02 -0500
> "Justin M. Forbes"  wrote:
> 
> > On Fri, 2015-04-24 at 10:59 +0800, Ming Lei wrote:
> > > Hi Justin,
> > > 
> > > Thanks for the report.
> > > 
> > > On Thu, 23 Apr 2015 16:04:10 -0500
> > > "Justin M. Forbes"  wrote:
> > > 
> > > > The block-mq conversion for loop in 4.0 kernels is showing us an
> > > > interesting scalability problem with live CDs (ro, squashfs).  It was
> > > > noticed when testing the Fedora beta that the more CPUs a liveCD image
> > > > was given, the slower it would boot. A 4 core qemu instance or bare
> > > > metal instance took more than twice as long to boot compared to a single
> > > > CPU instance.  After investigating, this traced directly to the block-mq
> > > > conversion; reverting these 4 patches restores performance. More
> > > > details are available at
> > > > https://bugzilla.redhat.com/show_bug.cgi?id=1210857
> > > > I don't think that reverting the patches is the ideal solution, so I am
> > > > looking for other options.  Since you know this code a bit better than I
> > > > do, I thought I would run it by you while I am looking as well.
> > > 
> > > I can understand the issue because the default @max_active for
> > > alloc_workqueue() is quite big (512), which may cause too many
> > > context switches, and then loop I/O performance decreases.
> > > 
> > > Actually I have written a kernel dio/aio based patch for decreasing
> > > both CPU and memory utilization without sacrificing I/O performance,
> > > and I will try to improve and push the patch during this cycle and hope
> > > it can be merged (the kernel/aio.c change is dropped, and only the fs
> > > change in fs/direct-io.c is needed).
> > > 
> > > But the following change should help for your case, could you test it?
> > > 
> > > ---
> > > diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> > > index c6b3726..b1cb41d 100644
> > > --- a/drivers/block/loop.c
> > > +++ b/drivers/block/loop.c
> > > @@ -1831,7 +1831,7 @@ static int __init loop_init(void)
> > >   }
> > >  
> > >   loop_wq = alloc_workqueue("kloopd",
> > > - WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 0);
> > > + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 32);
> > >   if (!loop_wq) {
> > >   err = -ENOMEM;
> > >   goto misc_out;
> > > 
> > Patch tested, it made things worse (I gave up after 5 minutes and boot
> > still seemed hung). I also tried values of 1, 16, 64, and 128.
> > Everything below 128 was much worse than the current situation. Setting
> > it at 128 seemed about the same as booting without the patch. I can do
> > some more testing over the weekend, but I don't think this is the
> > correct solution.
> 
> For describing the problem easily, here is the fedora live CD file
> structure first:
> 
> Fedora-Live-Workstation-x86_64-22_Beta-TC8.iso
>   =>LiveOS/
>   squashfs.img
>   =>LiveOS/
>   ext3fs.img
>  
> It looks like at least two reasons are related to the problem:
> 
> - unlike other filesystems (such as ext4), squashfs is a bit special: I
> observed that increasing I/O jobs to access a file in squashfs can't improve
> I/O performance at all, but it can for ext4
> 
> - nested loop: both squashfs.img and ext3fs.img are mounted as loop block
> 
> One key idea in commit b5dd2f60 ("block: loop: improve performance via
> blk-mq") is to submit I/O concurrently from more than one context (worker),
> like posix AIO style. Unfortunately this way can't improve I/O performance
> for squashfs, it has the extra cost of kworker threads, and the nested loop
> makes it worse. Meantime, during booting, there are lots of concurrent
> tasks requiring CPU, so the high priority kworker threads for loop can
> affect other boot tasks, and then booting time is increased.
> 
> I think the problem may be improved by removing the nested loop, for
> example by extracting the files in ext3fs.img into squashfs.img.
> 
> > I would be interested in testing your dio/aio patches as well though.
> 
> squashfs doesn't support dio, so the dio/aio patch can't help much, but
> the motivation for introducing dio/aio is really for avoiding double cache
> and decreasing CPU utilization[1].
> 

Re: loop block-mq conversion scalability issues

2015-04-24 Thread Justin M. Forbes
On Fri, 2015-04-24 at 10:59 +0800, Ming Lei wrote:
> Hi Justin,
> 
> Thanks for the report.
> 
> On Thu, 23 Apr 2015 16:04:10 -0500
> "Justin M. Forbes"  wrote:
> 
> > The block-mq conversion for loop in 4.0 kernels is showing us an
> > interesting scalability problem with live CDs (ro, squashfs).  It was
> > noticed when testing the Fedora beta that the more CPUs a liveCD image
> > was given, the slower it would boot. A 4 core qemu instance or bare
> > metal instance took more than twice as long to boot compared to a single
> > CPU instance.  After investigating, this traced directly to the block-mq
> > conversion; reverting these 4 patches restores performance. More
> > details are available at
> > https://bugzilla.redhat.com/show_bug.cgi?id=1210857
> > I don't think that reverting the patches is the ideal solution, so I am
> > looking for other options.  Since you know this code a bit better than I
> > do, I thought I would run it by you while I am looking as well.
> 
> I can understand the issue because the default @max_active for
> alloc_workqueue() is quite big (512), which may cause too many
> context switches, and then loop I/O performance decreases.
> 
> Actually I have written a kernel dio/aio based patch for decreasing
> both CPU and memory utilization without sacrificing I/O performance,
> and I will try to improve and push the patch during this cycle and hope
> it can be merged (the kernel/aio.c change is dropped, and only the fs
> change in fs/direct-io.c is needed).
> 
> But the following change should help for your case, could you test it?
> 
> ---
> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> index c6b3726..b1cb41d 100644
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -1831,7 +1831,7 @@ static int __init loop_init(void)
>   }
>  
>   loop_wq = alloc_workqueue("kloopd",
> - WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 0);
> + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 32);
>   if (!loop_wq) {
>   err = -ENOMEM;
>   goto misc_out;
> 
Patch tested, it made things worse (I gave up after 5 minutes and boot
still seemed hung). I also tried values of 1, 16, 64, and 128.
Everything below 128 was much worse than the current situation. Setting
it at 128 seemed about the same as booting without the patch. I can do
some more testing over the weekend, but I don't think this is the
correct solution.
I would be interested in testing your dio/aio patches as well though.

Thanks,
Justin
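The @max_active value being tuned in this thread caps how many work items from a workqueue may run at once. A toy user-space sketch of that semantics (illustrative only, names invented here; this is not kernel code) uses a semaphore as the cap:

```python
# Toy illustration of a workqueue's max_active cap: a semaphore bounds
# how many queued work items execute concurrently, analogous to the
# max_active argument of alloc_workqueue().
import threading

def run_bounded(n_items, max_active):
    gate = threading.Semaphore(max_active)
    lock = threading.Lock()
    stats = {"active": 0, "peak": 0, "done": 0}

    def work():
        with gate:                      # at most max_active inside at once
            with lock:
                stats["active"] += 1
                stats["peak"] = max(stats["peak"], stats["active"])
            # a real work item would submit loop I/O here
            with lock:
                stats["active"] -= 1
                stats["done"] += 1

    threads = [threading.Thread(target=work) for _ in range(n_items)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return stats

stats = run_bounded(256, 32)
print(stats["done"], stats["peak"])
```

With a large cap like the default Ming quotes (512), peak concurrency can track the number of queued items, which is where the extra context switching comes from; a small cap bounds that cost, at the price of serializing work that might have benefited from parallelism.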



loop block-mq conversion scalability issues

2015-04-23 Thread Justin M. Forbes
The block-mq conversion for loop in 4.0 kernels is showing us an
interesting scalability problem with live CDs (ro, squashfs).  It was
noticed when testing the Fedora beta that the more CPUs a liveCD image
was given, the slower it would boot. A 4 core qemu instance or bare
metal instance took more than twice as long to boot compared to a single
CPU instance.  After investigating, this traced directly to the block-mq
conversion; reverting these 4 patches restores performance. More
details are available at
https://bugzilla.redhat.com/show_bug.cgi?id=1210857
I don't think that reverting the patches is the ideal solution, so I am
looking for other options.  Since you know this code a bit better than I
do, I thought I would run it by you while I am looking as well.

Thanks,
Justin



Re: [ 00/19] 3.10.1-stable review

2013-07-12 Thread Justin M. Forbes
On Fri, Jul 12, 2013 at 04:28:20PM -0400, Steven Rostedt wrote:
> 
> I would suspect that machines that allow unprivileged users would be
> running distro kernels, and not the latest release from Linus, and thus
> even a bug that "can allow an unprivileged user to crash the kernel" may
> still be able to sit around for a month before being submitted.
> 
But distros *do* ship the latest release from Linus. Fedora is often
shipping .1 releases, and sometimes .0.  This is getting more difficult,
though, as more and more fixes are left for stable and the Linus release
contains a number of known regressions.
We know about those regressions not just from following lists, but because
we have users running rawhide kernels, which are snapshots of Linus' tree
almost daily.  They see the regressions and complain.  So yeah, there are
machines out there running Linus' latest tree.

Justin


Re: [Xen-devel] [PATCH/RFC] Fix xsave bug on older Xen hypervisors

2012-09-07 Thread Justin M. Forbes
On Fri, 2012-09-07 at 16:44 +0100, Jan Beulich wrote:
> >>> On 07.09.12 at 16:22, "Justin M. Forbes"  wrote:
> > On Fri, Sep 07, 2012 at 03:02:29PM +0100, Jan Beulich wrote:
> >> >>> On 07.09.12 at 15:21, Stefan Bader  wrote:
> >> > On 07.09.2012 14:33, Jan Beulich wrote:
> >> >>>>> On 07.09.12 at 13:40, Stefan Bader wrote:
> >> >>> When writing unsupported flags into CR4 (for some time the
> >> >>> xen_write_cr4 function would refuse to do anything at all)
> >> >>> older Xen hypervisors (and patch can potentially be improved
> >> >>> by finding out what older means in version numbers) would
> >> >>> crash the guest.
> >> >>>
> >> >>> Since Amazon EC2 would at least in the past be affected by that,
> >> >>> Fedora and Ubuntu were carrying a hack that would filter out
> >> >>> X86_CR4_OSXSAVE before writing to CR4. This would affect any
> >> >>> PV guest, even those running on a newer HV.
> >> >>>
> >> >>> And this recently caused trouble because some user-space was
> >> >>> only partially checking (or maybe only looking at the cpuid
> >> >>> bits) and then trying to use xsave even though the OS support
> >> >>> was not set.
> >> >>>
> >> >>> So I came up with a patch that would
> >> >>> - limit the work-around to certain Xen versions
> >> >>> - prevent the write to CR4 by unsetting xsave and osxsave in
> >> >>>   the cpuid bits
> >> >>>
> >> >>> Doing things that way may actually allow this to be acceptable
> >> >>> upstream, so I am sending it around, now.
> >> >>> It probably could be improved when knowing the exact version
> >> >>> to test for but otherwise should allow to work around the guest
> >> >>> crash while not preventing xsave on Xen 4.x and newer hosts.
> >> >> 
> >> >> Before considering a hack like this, I'd really like to see evidence
> >> >> of the described behavior with an upstream kernel (i.e. not one
> >> >> with that known broken hack patched in, which has never been
> >> >> upstream afaict).
> >> > 
> >> > This is the reason I wrote that Fedora and Ubuntu were carrying it. It
> >> > never has been sent upstream (the other version) because it would
> >> > filter the CR4 write for any PV guest regardless of host version.
> >> 
> >> But iirc that bad patch is a Linux side one (i.e. you're trying to fix
> >> something upstream that isn't upstream)?
> >> 
> > Right, so the patch that this improves upon, and that Fedora and Ubuntu are
> > currently carrying is not upstream because:
> > 
> > a) It's crap, it cripples upstream xen users, but doesn't impact RHEL xen
> > users because xsave was never supported there.
> > 
> > b) The hypervisor was patched to make it unnecessary quite some time ago,
> > and we hoped EC2 would eventually pick up that correct patch and we could
> > drop the crap kernel patch.
> > 
> > Unfortunately this has not happened. We are at a point where EC2 really is
> > a quirk that has to be worked around. Distros do not want to maintain
> > a separate EC2 build of the kernel, so the easiest way is to cripple
> > current upstream xen users.  This quirk is unfortunately the best possible
> > solution.  Having it upstream also makes it possible for any user to build
> > an upstream kernel that will run on EC2 without having to dig a random
> > patch out of a vendor kernel.
> 
> All of this still doesn't provide evidence that a plain upstream
> kernel is actually having any problems in the first place. Further,
> if you say EC2 has a crippled hypervisor patch - is that patch
> available for looking at somewhere?

Yes, I can verify that a plain upstream kernel has problems in the first
place, which is why we are carrying a patch to simply disable xsave
altogether in the pv guest.
EC2 is not carrying a patch to cripple the hypervisor; there was an old
xen bug that makes all this fail.  The correct fix for that bug is to
patch the hypervisor, but they have not done so. Upstream xen has had
the fix for quite some time, but that doesn't change the fact that a lot
of xen guest usage these days is on EC2.  This is no different from
putting in a quirk to work around a firmware bug in common use.

Justin




Re: [Xen-devel] [PATCH/RFC] Fix xsave bug on older Xen hypervisors

2012-09-07 Thread Justin M. Forbes
On Fri, Sep 07, 2012 at 03:02:29PM +0100, Jan Beulich wrote:
> >>> On 07.09.12 at 15:21, Stefan Bader  wrote:
> > On 07.09.2012 14:33, Jan Beulich wrote:
> > On 07.09.12 at 13:40, Stefan Bader  wrote:
> >>> When writing unsupported flags into CR4 (for some time the
> >>> xen_write_cr4 function would refuse to do anything at all)
> >>> older Xen hypervisors (and patch can potentially be improved
> >>> by finding out what older means in version numbers) would
> >>> crash the guest.
> >>>
> >>> Since Amazon EC2 would at least in the past be affected by that,
> >>> Fedora and Ubuntu were carrying a hack that would filter out
> >>> X86_CR4_OSXSAVE before writing to CR4. This would affect any
> >>> PV guest, even those running on a newer HV.
> >>>
> >>> And this recently caused trouble because some user-space was
> >>> only partially checking (or maybe only looking at the cpuid
> >>> bits) and then trying to use xsave even though the OS support
> >>> was not set.
> >>>
> >>> So I came up with a patch that would
> >>> - limit the work-around to certain Xen versions
> >>> - prevent the write to CR4 by unsetting xsave and osxsave in
> >>>   the cpuid bits
> >>>
> >>> Doing things that way may actually allow this to be acceptable
> >>> upstream, so I am sending it around, now.
> >>> It probably could be improved when knowing the exact version
> >>> to test for but otherwise should allow to work around the guest
> >>> crash while not preventing xsave on Xen 4.x and newer hosts.
> >> 
> >> Before considering a hack like this, I'd really like to see evidence
> >> of the described behavior with an upstream kernel (i.e. not one
> >> with that known broken hack patched in, which has never been
> >> upstream afaict).
> > 
> > This is the reason I wrote that Fedora and Ubuntu were carrying it. It
> > never has been sent upstream (the other version) because it would
> > filter the CR4 write for any PV guest regardless of host version.
> 
> But iirc that bad patch is a Linux side one (i.e. you're trying to fix
> something upstream that isn't upstream)?
> 
Right, so the patch that this improves upon, and that Fedora and Ubuntu are
currently carrying is not upstream because:

a) It's crap, it cripples upstream xen users, but doesn't impact RHEL xen
users because xsave was never supported there.

b) The hypervisor was patched to make it unnecessary quite some time ago,
and we hoped EC2 would eventually pick up that correct patch and we could
drop the crap kernel patch.

Unfortunately this has not happened. We are at a point where EC2 really is
a quirk that has to be worked around. Distros do not want to maintain
a separate EC2 build of the kernel, so the easiest way is to cripple
current upstream xen users.  This quirk is unfortunately the best possible
solution.  Having it upstream also makes it possible for any user to build
an upstream kernel that will run on EC2 without having to dig a random
patch out of a vendor kernel.

Justin


3.5.x boot hang after conflicting fb hw usage vs VESA VGA - removing generic driver

2012-08-17 Thread Justin M. Forbes
We have verified cases on inteldrmfb, radeondrmfb, and
cirrusdrmfb.

This is the last message displayed before the system hangs.  This seems
to be hitting a large number of users in Fedora, though certainly not
everyone.  This started happening with the 3.5 updates, and is still an
issue.  It appears to be a race condition, because various things have
allowed boot to continue for some users, though there is no clear
workaround. Has anyone else run across this?  Any ideas?  For more
background we have the following bugs:

inteldrmfb:
https://bugzilla.redhat.com/show_bug.cgi?id=843826

radeondrmfb:
https://bugzilla.redhat.com/show_bug.cgi?id=845745

cirrusdrmfb :
https://bugzilla.redhat.com/show_bug.cgi?id=843860

It should be noted that the "conflicting fb hw usage" message is not new;
it has been around for a while, but it is the last message seen before
the hang.

Thanks,
Justin



Re: [patch 22/27] quicklist: Set tlb->need_flush if pages are remaining in quicklist 0

2008-02-01 Thread Justin M. Forbes
On Fri, 2008-02-01 at 17:30 -0800, Christoph Lameter wrote:
> On Fri, 1 Feb 2008, Justin M. Forbes wrote:
> 
> > 
> > On Fri, 2008-02-01 at 16:39 -0800, Christoph Lameter wrote:
> > > NO! Wrong fix. Was dropped from mainline.
> > 
> > What is the right fix for the OOM issues with 2.6.22? Perhaps
> > http://marc.info/?l=linux-mm&m=119973653803451&w=2 should be added to
> > the queue in its place?  The OOM issue in 2.6.22 is real, and should be
> > addressed.
> 
> Indeed that is the right fix.

Greg, could we get that one added? We are already shipping it as our
users have run into the OOM problem with 2.6.22.16 without this patch.

Justin



Re: [patch 22/27] quicklist: Set tlb->need_flush if pages are remaining in quicklist 0

2008-02-01 Thread Justin M. Forbes

On Fri, 2008-02-01 at 16:39 -0800, Christoph Lameter wrote:
> NO! Wrong fix. Was dropped from mainline.

What is the right fix for the OOM issues with 2.6.22? Perhaps
http://marc.info/?l=linux-mm&m=119973653803451&w=2 should be added to
the queue in its place?  The OOM issue in 2.6.22 is real, and should be
addressed.

Justin



Re: [patch 00/20] 2.6.22-stable review

2007-08-21 Thread Justin M. Forbes
On Mon, Aug 20, 2007 at 11:52:10PM -0700, Greg KH wrote:
> This is the start of the stable review cycle for the 2.6.22.5 release.

No roll up patch for this one?

Justin


Re: [11/11] x86_64: TASK_SIZE fixes for compatibility mode processes

2005-07-15 Thread Justin M. Forbes
On Thu, Jul 14, 2005 at 09:45:17AM -0700, Siddha, Suresh B wrote:
> On Wed, Jul 13, 2005 at 08:49:47PM +0200, Andi Kleen wrote:
> > On Wed, Jul 13, 2005 at 11:44:26AM -0700, Greg KH wrote:
> > > -stable review patch.  If anyone has any objections, please let us know.
> > 
> > I think the patch is too risky for stable. I had even my doubts
> > for mainline.
> 
> hmm.. Main reason why Andrew posted this for stable series is because of
> the memory leak issue mentioned in the patch changeset comments...
> 
I would say if Andi has concerns for this stable series, we should indeed
leave it out.  That said, I will be testing this patch a bit further
myself, and because it does address a real memory leak issue, we should
consider it or another fix for stable 2.6.12.4.

Justin M. Forbes


2.6.Stable and EXTRAVERSION

2005-03-09 Thread Justin M. Forbes
With the new stable series kernels, the .x versioning is being added to
EXTRAVERSION.  This has traditionally been a space for local modification.
I know several distributions are using EXTRAVERSION for build numbers,
platform and assorted other information to differentiate their kernel
releases.
I would propose that the new stable series kernels move the .x version
information somewhere more official.  I certainly do not mind throwing
together a patch to support DOTVERSION or whatever people want to call it.
Is anyone opposed to such a change?

Thanks,
Justin M. Forbes
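For reference, this is the version block at the top of the 2.6.12.y stable Makefiles, followed by a purely illustrative sketch (the field name is invented here) of how a dedicated stable field could keep EXTRAVERSION free for local use:

```make
# As shipped in 2.6.12.4: the stable suffix occupied EXTRAVERSION.
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 12
EXTRAVERSION = .4

# Illustrative alternative: a dedicated stable field, leaving
# EXTRAVERSION to distros for build numbers and platform tags.
# DOTVERSION = .4
# KERNELRELEASE = $(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(DOTVERSION)$(EXTRAVERSION)
```

Under the sketch, a distro could still append its own tag (e.g. `EXTRAVERSION = -1.smp`) without colliding with the stable series' .x suffix.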