On Tue, Jun 06, 2017 at 01:44:32PM +0900, Tetsuo Handa wrote:
> Igor Stoppa wrote:
> > +int pmalloc_protect_pool(struct pmalloc_pool *pool)
> > +{
> > +   struct pmalloc_node *node;
> > +
> > +   if (!pool)
> > +           return -EINVAL;
> > +   mutex_lock(&pool->nodes_list_mutex);
> > +   hlist_for_each_entry(node, &pool->nodes_list_head, nodes_list) {
> > +           unsigned long size, pages;
> > +
> > +           size = WORD_SIZE * node->total_words + HEADER_SIZE;
> > +           pages = size / PAGE_SIZE;
> > +           set_memory_ro((unsigned long)node, pages);
> > +   }
> > +   pool->protected = true;
> > +   mutex_unlock(&pool->nodes_list_mutex);
> > +   return 0;
> > +}
> 
> As far as I know, not all CONFIG_MMU=y architectures provide
> set_memory_ro()/set_memory_rw(). You need to provide a fallback for
> architectures which do not provide set_memory_ro()/set_memory_rw(),
> or for kernels built with CONFIG_MMU=n.

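For reference, the kind of fallback being asked for could look roughly
like the sketch below. pmalloc_set_ro() and CONFIG_PMALLOC_HAS_SET_MEMORY
are made-up names, so treat this as a shape rather than a proposal:

/*
 * Sketch only: CONFIG_PMALLOC_HAS_SET_MEMORY is a made-up symbol
 * standing in for "this architecture provides set_memory_ro() and
 * set_memory_rw()", and pmalloc_set_ro() is a made-up wrapper.
 * set_memory_ro() itself comes from the arch headers.
 */
#ifdef CONFIG_PMALLOC_HAS_SET_MEMORY
static inline int pmalloc_set_ro(void *addr, int numpages)
{
	return set_memory_ro((unsigned long)addr, numpages);
}
#else
static inline int pmalloc_set_ro(void *addr, int numpages)
{
	/*
	 * No way to change page permissions: fail loudly so that
	 * pmalloc_protect_pool() does not pretend the pool is read-only.
	 */
	return -EOPNOTSUPP;
}
#endif
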
I think we'll just need to generalize CONFIG_STRICT_MODULE_RWX and/or
ARCH_HAS_STRICT_MODULE_RWX so that there is a symbol to key this off of.
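
Whatever the symbol ends up being called, pmalloc could then key off it
directly at the entry point. Another sketch, using
ARCH_HAS_SET_MEMORY_PERMS purely as a placeholder name for that
generalized symbol:

/*
 * Sketch only: CONFIG_ARCH_HAS_SET_MEMORY_PERMS is a placeholder for
 * whatever generalization of STRICT_MODULE_RWX /
 * ARCH_HAS_STRICT_MODULE_RWX gets introduced; architectures that can
 * change page permissions would select it.
 */
int pmalloc_protect_pool(struct pmalloc_pool *pool)
{
	if (!IS_ENABLED(CONFIG_ARCH_HAS_SET_MEMORY_PERMS))
		return -EOPNOTSUPP;	/* cannot actually protect anything */

	/* ... existing set_memory_ro() loop from the patch above ... */
	return 0;
}

That way callers get a clear error on configurations that cannot enforce
the protection, instead of a silent no-op.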
