* Sasha Levin <levinsasha...@gmail.com> wrote:

> @@ -55,41 +56,53 @@ static const char *to_direction(u8 is_write)
>  bool kvm__register_mmio(u64 phys_addr, u64 phys_addr_len, void 
> (*kvm_mmio_callback_fn)(u64 addr, u8 *data, u32 len, u8 is_write))
>  {
>       struct mmio_mapping *mmio;
> +     int ret;
>  
>       mmio = malloc(sizeof(*mmio));
>       if (mmio == NULL)
>               return false;
>  
> +     br_write_lock();
>       *mmio = (struct mmio_mapping) {
>               .node = RB_INT_INIT(phys_addr, phys_addr + phys_addr_len),
>               .kvm_mmio_callback_fn = kvm_mmio_callback_fn,
>       };

The initialization here does not use any global state AFAICS, so it 
does not need the write lock, right? (See the sketch below.)

>  
> -     return mmio_insert(&mmio_tree, mmio);
> +     ret = mmio_insert(&mmio_tree, mmio);
> +     br_write_unlock();

Shouldn't mmio_insert() thus have the br_write_lock()/br_write_unlock() 
sequence itself, instead of the caller?
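
Something like this, perhaps - only a rough sketch, assuming 
mmio_insert() stays a thin wrapper around the rb_int_insert() tree 
helper and returns its result:

  static bool mmio_insert(struct rb_root *root, struct mmio_mapping *data)
  {
          bool ret;

          /* Only the shared tree needs the write lock */
          br_write_lock();
          ret = rb_int_insert(root, &data->node);
          br_write_unlock();

          return ret;
  }

  bool kvm__register_mmio(u64 phys_addr, u64 phys_addr_len, void
  (*kvm_mmio_callback_fn)(u64 addr, u8 *data, u32 len, u8 is_write))
  {
          struct mmio_mapping *mmio;

          mmio = malloc(sizeof(*mmio));
          if (mmio == NULL)
                  return false;

          /* Private allocation, no locking needed for the init itself */
          *mmio = (struct mmio_mapping) {
                  .node = RB_INT_INIT(phys_addr, phys_addr + phys_addr_len),
                  .kvm_mmio_callback_fn = kvm_mmio_callback_fn,
          };

          return mmio_insert(&mmio_tree, mmio);
  }

That way the critical section only covers the shared mmio_tree, and 
every caller of mmio_insert() gets the locking for free.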

>  bool kvm__deregister_mmio(u64 phys_addr)
>  {
>       struct mmio_mapping *mmio;
>  
> +     br_write_lock();
>       mmio = mmio_search_single(&mmio_tree, phys_addr);
>       if (mmio == NULL)
>               return false;

Here we leak the write lock: the early return path never calls 
br_write_unlock()!
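
I.e. the error path needs to drop the lock before bailing out - a 
minimal sketch, with the (snipped) tree-erase step left as a placeholder:

  bool kvm__deregister_mmio(u64 phys_addr)
  {
          struct mmio_mapping *mmio;

          br_write_lock();
          mmio = mmio_search_single(&mmio_tree, phys_addr);
          if (mmio == NULL) {
                  /* Unlock on the error path as well, otherwise the lock leaks */
                  br_write_unlock();
                  return false;
          }

          /* ... erase the node from mmio_tree as the patch does ... */
          br_write_unlock();

          free(mmio);

          return true;
  }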

>  bool kvm__emulate_mmio(struct kvm *kvm, u64 phys_addr, u8 *data, u32 len, u8 
> is_write)
>  {
> -     struct mmio_mapping *mmio = mmio_search(&mmio_tree, phys_addr, len);
> +     struct mmio_mapping *mmio;
> +
> +     br_read_lock();
> +     mmio = mmio_search(&mmio_tree, phys_addr, len);
>  
>       if (mmio)
>               mmio->kvm_mmio_callback_fn(phys_addr, data, len, is_write);
>       else
>               fprintf(stderr, "Warning: Ignoring MMIO %s at %016llx (length 
> %u)\n",
>                       to_direction(is_write), phys_addr, len);
> +     br_read_unlock();
>  
>       return true;
>  }

Yummie, scalability here we come! :-)

Thanks,

        Ingo