Prabhu,

We have a number of users who have made something like this work. In those cases they were building the module from source. If this is a driver for which you do not have any of the source code, it will likely be a challenge.
http://www.emergingstack.com/2016/01/10/Nvidia-GPU-plus-CoreOS-plus-Docker-plus-TensorFlow.html
https://github.com/Avalanche-io/coreos-nvidia
http://tleyden.github.io/blog/2014/11/04/coreos-with-nvidia-cuda-gpu-drivers/

Traun (the gentleman who wrote the last post) has also done great work making Couchbase "cloud native" atop Kubernetes.

In terms of having the background process talk to the kernel module, that shouldn't be an issue. The most important piece will be to identify which kernel capabilities are needed (http://man7.org/linux/man-pages/man7/capabilities.7.html). The container which loads the module will need CAP_SYS_MODULE. For the container running the SDK process, you would be better equipped than I to say which capabilities it needs.

Hope this helps!

--Brian 'redbeard' Harrington

On Thursday, May 12, 2016 at 11:58:59 AM UTC-7, Prabhu Balakannan wrote:
>
> Experts,
> I would like to get inputs on enabling CoreOS on our networking device.
> We currently have a networking device (switch/router) running a
> proprietary OS. From a programming perspective, that OS mainly interacts
> with hardware containing switching and routing ASICs.
> We want to understand whether we can use CoreOS and include our KLM
> (kernel module) to program these devices in the hardware. We also have a
> background SDK process that interacts with the KLM. Can the background
> SDK process run directly on CoreOS instead of in a container/Docker?
>
> -thanks
> Prabhu.
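As a footnote to the CAP_SYS_MODULE point above: a quick way to confirm whether a given process (e.g. inside your container) actually holds that capability is to read the effective capability bitmap from /proc/self/status. This is a Linux-only sketch; the capability number 16 comes from linux/capability.h.

```python
# Check whether the current process holds CAP_SYS_MODULE, the capability
# a container needs in order to load kernel modules.
CAP_SYS_MODULE = 16  # capability number from linux/capability.h

cap_eff = 0
with open("/proc/self/status") as f:
    for line in f:
        if line.startswith("CapEff:"):
            # Effective capability set, encoded as a hex bitmap.
            cap_eff = int(line.split()[1], 16)
            break

has_sys_module = bool(cap_eff & (1 << CAP_SYS_MODULE))
print("CAP_SYS_MODULE:", has_sys_module)
```

Granting the capability at container start would then look something like `docker run --cap-add SYS_MODULE ...` (with the module file and insmod invocation depending on your setup).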
