Hi David,
> $ evmtest runtest example_test -e /bin/bash
> [*] Starting test: example_test
> [*] TEST: PASSED
> Example 1a: successful verbose example test output
> $ evmtest runtest example_test -e /bin/bash -v
...
> Changelog:
...
> * checkbashisms compliant
Not yet :). I noticed using
source
From: David Jacobson
As the Linux integrity subsystem matures and new features are added,
the number of kernel configuration options (Kconfig) and methods for
loading policies have increased. Regression testing of new and existing
features is needed to ensure correct behavior.
The Linux Test
Hi,
I will have a look at the patches.
Thanks,
Dmitry
On Tue, Aug 14, 2018 at 9:34 PM James Morris wrote:
>
> On Tue, 14 Aug 2018, David Jacobson wrote:
>
> > This patchset introduces evmtest — a stand alone tool for regression
> > testing IMA.
>
> Nice!
>
> I usu
On Tue, 14 Aug 2018, David Jacobson wrote:
> This patchset introduces evmtest — a stand alone tool for regression
> testing IMA.
Nice!
I usually run the SELinux testsuite as a general sanity check of LSM
before pushing to Linus, and I'll also run this once it's merged.
--
James Morris
regression
testing IMA. evmtest can be used to validate individual behaviors by
exercising execve, kexec, module load, and other LSM hooks. evmtest
uses a combination of invalid signatures, invalid hashes, and unsigned
files to check that IMA-Appraisal catches all cases a running policy has
set
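For context, the kind of IMA-Appraisal policy rule such tests exercise might look like this. This is an illustrative fragment, not a rule taken from the patchset; the syntax follows the kernel's documented ima_policy format:

```
appraise func=BPRM_CHECK fowner=0 appraise_type=imasig
```

A rule like this tells IMA-Appraisal to verify an IMA signature on every root-owned file at execve time, which is the class of behavior evmtest probes with invalid signatures, invalid hashes, and unsigned files.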
No problem, thank you sir
Sent from my iPhone
> On Aug 18, 2017, at 16:11, Omar Sandoval wrote:
>
>> On Tue, Jun 27, 2017 at 12:01:47PM +0800, James Wang wrote:
> Add a regression test for the loop device: closing an unbound device
> takes too long; the kernel will consu
On Tue, Jun 27, 2017 at 12:01:47PM +0800, James Wang wrote:
> Add a regression test for the loop device: closing an unbound device
> takes too long; the kernel consumes several orders of magnitude more
> wall time than it does for a mounted device.
James, sorry I forgot a
Add a regression test for the loop device: closing an unbound device
takes too long; the kernel consumes several orders of magnitude more
wall time than it does for a mounted device.
Signed-off-by: James Wang
---
tests/loop/002 | 63
On 06/27/2017 02:58 AM, Omar Sandoval wrote:
> Hi, James, thanks for sending this in. Sorry for the delay, I've been
> out of the office for a couple of weeks. A few comments below.
>
> On Thu, Jun 08, 2017 at 08:28:12PM +0800, James Wang wrote:
>> Add a regression testin
Hi, James, thanks for sending this in. Sorry for the delay, I've been
out of the office for a couple of weeks. A few comments below.
On Thu, Jun 08, 2017 at 08:28:12PM +0800, James Wang wrote:
> Add a regression test for the loop device: closing an unbound device
> takes to
On 06/08/2017 06:28 AM, James Wang wrote:
> Add a regression test for the loop device: closing an unbound device
> takes too long; the kernel consumes several orders of magnitude more
> wall time than it does for a mounted device.
Thanks a lot for taking the time to turn
Add a regression test for the loop device: closing an unbound device
takes too long; the kernel consumes several orders of magnitude more
wall time than it does for a mounted device.
Signed-off-by: James Wang
---
tests/loop/002 | 77
Based on recent bugs I have seen in Linux, I recommend that those
involved in regression testing of the Linux build tree add a debugger
test for kgdb, kdb, and mdb to their standard tests, to flush out bugs
in various drivers and subsystems.
The test is simple. Load one of the
The hugetlb selftests provide minimal coverage. Have the run script
point people at libhugetlbfs for better regression testing.
Signed-off-by: Mike Kravetz
---
tools/testing/selftests/vm/run_vmtests | 4
1 file changed, 4 insertions(+)
diff --git a/tools/testing/selftests/vm/run_vmtests
b
Em Tue, Sep 25, 2012 at 10:47:48PM +0900, Namhyung Kim escreveu:
> 2012-09-25 (화), 10:30 -0300, Arnaldo Carvalho de Melo:
> > Em Tue, Sep 25, 2012 at 09:59:02PM +0900, Namhyung Kim escreveu:
> > > Now I'm thinking of making it build-time test so that it can be executed
> > > by make when specific a
2012-09-25 (화), 10:30 -0300, Arnaldo Carvalho de Melo:
> Em Tue, Sep 25, 2012 at 09:59:02PM +0900, Namhyung Kim escreveu:
> > Now I'm thinking of making it a build-time test so that it can be executed
> > by make when a specific argument is given - e.g. make C=1 ?
>
> I think there is room for a 'make
Em Tue, Sep 25, 2012 at 09:59:02PM +0900, Namhyung Kim escreveu:
> 2012-09-25 (화), 08:05 -0300, Arnaldo Carvalho de Melo:
> > Em Tue, Sep 25, 2012 at 10:25:13AM +0900, Namhyung Kim escreveu:
> > > On Mon, 24 Sep 2012 13:02:39 -0300, Arnaldo Carvalho de Melo wrote:
> > > > Em Mon, Sep 24, 2012 at 11
Eric W. Biederman wrote:
> Yes user-mode linux
> could help here (you could stress test the core kernel without worry
> that when it crashes your machine will crash as well).
A similar approach can be used for very detailed tests of specific
subsystems. E.g. that's what we've started doing, kin
On Fri, 23 Mar 2001, Horst von Brand wrote:
> Jonathan Morton <[EMAIL PROTECTED]> said:
> > >- automated heavy stress testing
>
> > This would be an interesting one to me, from a benchmarking POV. I'd like
> > to know what my hardware can really do, for one thing - it's all very well
> > saying
Jonathan Morton <[EMAIL PROTECTED]> said:
> >- automated heavy stress testing
> This would be an interesting one to me, from a benchmarking POV. I'd like
> to know what my hardware can really do, for one thing - it's all very well
> saying this box can do X Whetstones and has a 100Mbit NIC, but
[EMAIL PROTECTED] writes:
> Hi. I was wondering if there has been any discussion of kernel
> regression testing. Wouldn't it be great if we didn't have to depend
> on human testers to verify every change didn't break something?
There is some truth to this. Howeve
thousands of new machines
on board for continuous testing, keeping it a community effort.
Not that the testing work of IBM, SGI, Red Hat, et al. is
unappreciated, but a distributed, open, regression testing package
sort of fits the Linux philosophy and model...
Torrey Hoffman
>- automated heavy stress testing
This would be an interesting one to me, from a benchmarking POV. I'd like
to know what my hardware can really do, for one thing - it's all very well
saying this box can do X Whetstones and has a 100Mbit NIC, but it's a much
more solid thing to be able to say "my
} On 22 Mar 2001 [EMAIL PROTECTED] wrote:
}
} > Hi. I was wondering if there has been any discussion of kernel
} > regression testing. Wouldn't it be great if we didn't have to depend
} > on human testers to verify every change didn't break something?
} >
} >
We have a start for PPC. It has the title "Regression Tester" but is
actually a "compiles and boots" tester. The aim is an automated
regression test.
Take a look at http://altus.drgw.net/
It pulls directly from our BitKeeper archive every time we push a change
and goes through the build targete
usand square feet of space, and a good budget....
> > Any takers?
>
> SGI is working on regression testing for Linux. We have released some
> of our tests and utilities under the Linux Test Project. IMHO, a few
> hundred tests aren't enough. I need to make another big push wit
On 22 Mar 2001 [EMAIL PROTECTED] wrote:
> Hi. I was wondering if there has been any discussion of kernel
> regression testing. Wouldn't it be great if we didn't have to depend
> on human testers to verify every change didn't break something?
This is definitely a great
On Thu, Mar 22, 2001 at 10:13:04AM -0500, Wade Hampton wrote:
> [EMAIL PROTECTED] wrote:
> > Hi. I was wondering if there has been any discussion of kernel
> > regression testing. Wouldn't it be great if we didn't have to depend
> > on human testers to verify ever
[EMAIL PROTECTED] wrote:
>
> Hi. I was wondering if there has been any discussion of kernel
> regression testing. Wouldn't it be great if we didn't have to depend
> on human testers to verify every change didn't break something?
IMHO, much of the strength of Linux i
> Regression testing __is__ what happens when 10,000 testers independently
> try to break the software!
Nope. That's stress testing and a limited amount of coverage testing.
> Canned so-called "regression-test" schemes will fail to test at least
> 90 percent of the code p
On Thu, 22 Mar 2001, Alan Cox wrote:
> > Regression testing __is__ what happens when 10,000 testers independently
> > try to break the software!
>
> Nope. That's stress testing and a limited amount of coverage testing.
>
> > Canned so-called "regression-test"
>>>>> "Richard" == Richard B Johnson <[EMAIL PROTECTED]> writes:
Richard> On 22 Mar 2001 [EMAIL PROTECTED] wrote:
>> Hi. I was wondering if there has been any discussion of kernel
>> regression testing. Wouldn't it be great if
On 22 Mar 2001 [EMAIL PROTECTED] wrote:
> Hi. I was wondering if there has been any discussion of kernel
> regression testing. Wouldn't it be great if we didn't have to depend
> on human testers to verify every change didn't break something?
>
> OK, I'll
Hi. I was wondering if there has been any discussion of kernel
regression testing. Wouldn't it be great if we didn't have to depend
on human testers to verify every change didn't break something?
OK, I'll admit I haven't given this a lot of thought. What I'm
wond