Re: [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.
Resending it as plain text.

From: Chaitanya Kulkarni
Sent: Tuesday, January 10, 2017 2:37 PM
To: lsf...@lists.linux-foundation.org
Cc: linux-fsde...@vger.kernel.org; linux-bl...@vger.kernel.org; linux-n...@lists.infradead.org; linux-scsi@vger.kernel.org; linux-...@vger.kernel.org
Subject: [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.


Hi Folks,

I would like to propose a general discussion on storage stack and device
driver testing.

Purpose:-
---------
The main objective of this discussion is to address the need for a Unified
Test Automation Framework which can be used by different subsystems in the
kernel in order to improve the overall development and stability of the
storage stack.

For example:-
From my previous experience, I worked on NVMe driver testing last year, where
we developed a simple unit test framework
(https://github.com/linux-nvme/nvme-cli/tree/master/tests).
In the current implementation, the upstream NVMe driver supports the following
subsystems:-
1. PCI Host.
2. RDMA Target.
3. Fibre Channel Target (in progress).
Today, due to the lack of a centralized automated test framework, NVMe driver
testing is scattered and performed using a combination of various utilities
such as nvme-cli/tests, nvmet-cli, and shell scripts
(git://git.infradead.org/nvme-fabrics.git nvmf-selftests).

In order to improve overall driver stability across the various subsystems, it
will be beneficial to have a Unified Test Automation Framework (UTAF) which
will centralize overall testing.

This topic will allow developers from the various subsystems to engage in a
discussion about how to collaborate efficiently, instead of having those
discussions on lengthy email threads.

Participants:-
--------------
I'd like to invite developers from different subsystems to discuss an approach
towards a unified testing methodology for the storage stack and for device
drivers belonging to different subsystems.

Topics for Discussion:-
-----------------------
As part of the discussion, the following are some of the key points we can
focus on:-
1. What are the common components of the kernel used by the various subsystems?
2. What are the potential target drivers which can benefit from this approach?
   (e.g. NVMe, NVMe over Fabrics, Open-Channel Solid State Drives, etc.)
3. What are the desired features that can be implemented in this framework?
   (code coverage, unit tests, stress testing, regression, generating
   Coccinelle reports, etc.)
4. What is a desirable report generation mechanism?
5. Basic performance validation?
6. Can QEMU be used to emulate some of the H/W functionality to create a test
   platform? (optional, subsystem specific)

Some background about myself: I'm Chaitanya Kulkarni. At HGST I worked as a
team lead responsible for delivering a scalable, multi-platform automated test
framework for device driver testing. It has been used successfully for more
than a year on Linux/Windows for unit testing, regression, and performance
validation of the NVMe Linux and Windows drivers. I've also recently started
contributing to the NVMe host and NVMe over Fabrics target drivers.

Regards,
-Chaitanya
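For point 6 above, a minimal sketch of what a QEMU-based test platform could
look like; the image paths and guest options below are placeholders for
illustration, not part of the proposal:

  # Create a small backing image for an emulated NVMe namespace.
  qemu-img create -f qcow2 /tmp/nvme-test.qcow2 1G

  # Boot an existing test guest with an emulated NVMe controller attached;
  # "guest.img" stands in for whatever root image the test VM already uses.
  qemu-system-x86_64 -m 2048 -smp 2 \
      -drive file=guest.img,if=virtio \
      -drive file=/tmp/nvme-test.qcow2,if=none,id=nvm0 \
      -device nvme,serial=deadbeef,drive=nvm0

Inside the guest the emulated controller should show up as a regular
/dev/nvme0 device, so the same nvme-cli based unit tests can be pointed at
emulated hardware.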
Re: [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.
On 01/10/2017 11:40 PM, Chaitanya Kulkarni wrote:
> I would like to propose a general discussion on Storage stack and device
> driver testing.
> [rest of the proposal snipped]

Oh, yes, please. That's a discussion I'd like to have, too.

Cheers,

Hannes
--
Dr. Hannes Reinecke                Teamlead Storage & Networking
h...@suse.de                       +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)
Re: [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.
On Tue, Jan 10, 2017 at 10:40:53PM +0000, Chaitanya Kulkarni wrote:
> I would like to propose a general discussion on Storage stack and device
> driver testing.
> [rest of the proposal snipped]

Well, something I was thinking about but didn't find enough time to actually
implement is an xfstests-like test suite written using sg3_utils for SCSI.

This idea could very well be extended to NVMe, AHCI, blk, etc.

Byte,
        Johannes
--
Johannes Thumshirn                              Storage
jthumsh...@suse.de                              +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850
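To make the idea concrete, here is a minimal sketch of what one such test
could look like, built on sg3_utils commands; TEST_DEV and the pass/fail
convention are illustrative assumptions, not an existing harness:

  #!/bin/bash
  # Sketch of an xfstests-style SCSI sanity test using sg3_utils.
  # TEST_DEV would normally be exported by the surrounding test harness.
  TEST_DEV=${TEST_DEV:-/dev/sg0}

  # The device must answer TEST UNIT READY.
  sg_turs "$TEST_DEV" || { echo "FAIL: TEST UNIT READY"; exit 1; }

  # A standard INQUIRY must succeed.
  sg_inq "$TEST_DEV" > /dev/null || { echo "FAIL: INQUIRY"; exit 1; }

  # READ CAPACITY must succeed as well.
  sg_readcap "$TEST_DEV" > /dev/null || { echo "FAIL: READ CAPACITY"; exit 1; }

  echo "PASS"
  exit 0

As in xfstests, the command output could additionally be compared against a
per-test golden output file instead of only checking exit codes.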
Re: [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.
On Wed, Jan 11, 2017 at 10:19:45AM +0100, Johannes Thumshirn wrote:
> Well, something I was thinking about but didn't find enough time to actually
> implement is an xfstests-like test suite written using sg3_utils for SCSI.

Ronnie's libiscsi test suite has been able to use SG_IO for a few years now:

https://github.com/sahlberg/libiscsi/tree/master/test-tool

and has been very useful to find bugs in various protocol implementations.

> This idea could very well be extended to NVMe

Chaitanya's suite is doing something similar for NVMe, although the coverage
is still much more limited so far.
Re: [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.
On 01/11/2017 10:24 AM, Christoph Hellwig wrote:
> On Wed, Jan 11, 2017 at 10:19:45AM +0100, Johannes Thumshirn wrote:
>> Well, something I was thinking about but didn't find enough time to actually
>> implement is an xfstests-like test suite written using sg3_utils for SCSI.
>
> Ronnie's libiscsi test suite has been able to use SG_IO for a few years now:
>
> https://github.com/sahlberg/libiscsi/tree/master/test-tool
>
> and has been very useful to find bugs in various protocol implementations.
>
>> This idea could very well be extended to NVMe
>
> Chaitanya's suite is doing something similar for NVMe, although the coverage
> is still much more limited so far.

One of the discussion points here indeed would be whether we want to go in the
direction of protocol-specific test suites (of which we already have several)
or whether it makes sense to move to functional testing.
And whether we can have a common interface / documentation on how these things
should run.

Cheers,

Hannes
--
Dr. Hannes Reinecke                Teamlead Storage & Networking
h...@suse.de                       +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)
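One way to picture such a common interface is a small runner contract where
every test is an executable that takes its device and transport from the
environment and reports its result via exit status; the names and exit codes
below are purely illustrative, not an agreed convention:

  #!/bin/bash
  # Illustrative runner: exit 0 = pass, exit 2 = not applicable, other = fail.
  export TEST_DEV=${TEST_DEV:-/dev/nvme0n1}
  export TEST_TRANSPORT=${TEST_TRANSPORT:-pci}

  pass=0; fail=0; skip=0
  for t in tests/*/[0-9][0-9][0-9]; do
      "$t"
      case $? in
          0) pass=$((pass + 1)) ;;
          2) skip=$((skip + 1)) ;;
          *) fail=$((fail + 1)); echo "FAIL: $t" ;;
      esac
  done
  echo "passed $pass, failed $fail, not applicable $skip"

With a contract like this, protocol-specific suites and functional tests could
at least be launched and reported on in the same way.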
Re: [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.
On Tue, 2017-01-10 at 22:40 +0000, Chaitanya Kulkarni wrote:
> Topics for Discussion:-
> [list of proposed discussion topics snipped]

Regarding existing test software: the SRP test software is a thorough test of
the Linux block layer, SCSI core, dm-mpath driver, dm core, SRP initiator and
target drivers, and also of the asynchronous I/O subsystem. This test suite
includes experimental support for the NVMeOF drivers. It supports the rdma_rxe
driver, which means that an Ethernet adapter is sufficient to run these tests.

Note: the focus of this test suite is the regular I/O path and device removal.
This test suite neither replaces the libiscsi tests nor xfstests.

See also https://github.com/bvanassche/srp-test.

Bart.
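For anyone who wants to try RDMA-based tests without InfiniBand hardware,
here is a rough sketch of bringing up a soft-RoCE (rdma_rxe) device on a plain
Ethernet NIC; "eth0" is a placeholder, and the exact setup commands depend on
the installed iproute2 / rdma-core versions:

  # Load the soft-RoCE driver.
  modprobe rdma_rxe

  # Attach an rxe device to an Ethernet interface (recent iproute2);
  # older installations used the rxe_cfg helper instead.
  rdma link add rxe0 type rxe netdev eth0

  # Verify the new RDMA device is visible.
  rdma link show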
Re: [Lsf-pc] [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.
> Hi Folks,
>
> I would like to propose a general discussion on Storage stack and device
> driver testing.

I think it's very useful and needed.

> [Purpose and NVMe example snipped]
>
> In order to improve overall driver stability with various subsystems, it
> will be beneficial to have a Unified Test Automation Framework (UTAF) which
> will centralize overall testing.
>
> This topic will allow developers from various subsystems to engage in the
> discussion about how to collaborate efficiently instead of having
> discussions on lengthy email threads.

While a unified test framework for all sounds great, I suspect that the
differences might be too large. So I think that for this framework to be
maintainable, it needs to be carefully designed such that we don't have too
much code churn.

For example, we should start by classifying tests and then see where sharing
is feasible:
1. basic management - I think not a lot can be shared
2. spec compliance - again, not much sharing here
3. data verification - probably everything can be shared
4. basic performance - probably a lot can be shared
5. vectored I/O - probably everything can be shared
6. error handling - I can think of some sharing that can be used

This repository can also store some useful tracing scripts (ebpf and friends)
that are useful for performance analysis.

So I think that for this to happen, we can start with the shared tests under
block/, then migrate protocol-specific tests into scsi/ and nvme/, and then
add transport-specific tests, so we can have something like:

├── block
├── lib
├── nvme
│   ├── fabrics
│   │   ├── loop
│   │   └── rdma
│   └── pci
└── scsi
    ├── fc
    └── iscsi

Thoughts?
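As an example of the data-verification class (3) above, a shared test under
block/ could stay completely protocol-agnostic and be reused unchanged by the
scsi/ and nvme/ suites. The sketch below is only an illustration; TEST_DEV is
an assumed harness variable, and the test is destructive to the device it is
pointed at:

  #!/bin/bash
  # Shared data-verification sketch: write a known pattern, read it back,
  # and compare. WARNING: overwrites the start of TEST_DEV.
  TEST_DEV=${TEST_DEV:-/dev/nvme0n1}

  dd if=/dev/urandom of=/tmp/pattern.bin bs=1M count=16 status=none
  dd if=/tmp/pattern.bin of="$TEST_DEV" bs=1M oflag=direct status=none
  dd if="$TEST_DEV" of=/tmp/readback.bin bs=1M count=16 iflag=direct status=none

  cmp /tmp/pattern.bin /tmp/readback.bin && echo "PASS" || { echo "FAIL"; exit 1; }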