Re: [Qemu-devel] virtio-9p: hot-plug/unplug about virtio-9p device

2018-01-12 Thread Greg Kurz
On Thu, 11 Jan 2018 10:23:32 +0800
sochin.jiang  wrote:

> On 2018/1/8 18:10, Greg Kurz wrote:
> > On Tue, 19 Dec 2017 13:41:12 +0800
> > sochin.jiang  wrote:
> >  
> >> Hi, guys.
> >>
> >> I've been looking into hot-plug/unplug support for the virtio-9p
> >> device recently, and found that it is lacking.
> >>
> >> I am wondering why? Is there a reason? Actually, I wrote a QMP command
> >> to support fsdev_add, after which a device_add QMP command successfully
> >> adds a virtio-9p device (just like virtio-blk).
> >>
> >> Are there any concerns that have kept this from being supported? Hoping
> >> for a reply, thanks.
> >>
> >>
> >> sochin.
> >>  
> > Hi Sochin,
> >
> > I've just discovered this mail by chance. Please note I'm the only active
> > maintainer for virtio-9p. You really should check the content of MAINTAINERS
> > and Cc the appropriate people. :)
> >
> > Now, back to your question. Yes, there's only some partial support for
> > hot-plug/unplug. Mostly because nobody cared to work on it I guess.
> >
> > So, indeed, we don't have fsdev_add/fsdev_del, i.e., we can only rely on
> > shared directories specified on the QEMU command line.
> >
> > On the virtio-9p device side, hotplug is supported, i.e., device_add virtio-9p
> > works as expected.
> >
> > Hot-unplug is different as it requires some coordination with the guest. The
> > current status is that it requires the 9p shared directory to be unmounted
> > in the guest: if you do device_del while the directory is mounted in a Linux
> > guest, you'll get this message in the guest syslog:
> >
> > kernel:9pnet_virtio virtio2: p9_virtio_remove: waiting for device in use.
> >
> > If the 9p directory is unmounted at some point, then the hot unplug
> > sequence will eventually succeed.
> >
> > But this shouldn't be done like this: the guest should cancel inflight
> > requests and cause any new I/O requests in the guest to fail right away.
> > I have a tentative patch for the Linux driver I can share if you want.
> >
> > Cheers,
> >
> > --
> > Greg
> >
> > .
> >  
> 
> 
> Thanks, Greg.
> 
> Indeed, I got those error messages while trying to do device_del with the
> shared directory mounted in the guest. I really would like to see your
> patch for the Linux driver.
> 

I'll try to send it soon.

> About virtio-9p, we are actually considering using it for hypervisor-based
> containers,

Something like Intel's Clear Containers ?

> hotplug/unplug support would make that better, and I believe more people
> will work on it.

That's good news !

Cheers,

--
Greg

> 
> 
> Sochin.
> 
> 
> 




Re: [Qemu-devel] virtio-9p: hot-plug/unplug about virtio-9p device

2018-01-10 Thread sochin.jiang


On 2018/1/8 18:10, Greg Kurz wrote:
> On Tue, 19 Dec 2017 13:41:12 +0800
> sochin.jiang  wrote:
>
>> Hi, guys.
>>
>> I've been looking into hot-plug/unplug support for the virtio-9p
>> device recently, and found that it is lacking.
>>
>> I am wondering why? Is there a reason? Actually, I wrote a QMP command
>> to support fsdev_add, after which a device_add QMP command successfully
>> adds a virtio-9p device (just like virtio-blk).
>>
>> Are there any concerns that have kept this from being supported? Hoping
>> for a reply, thanks.
>>
>>
>> sochin.
>>
> Hi Sochin,
>
> I've just discovered this mail by chance. Please note I'm the only active
> maintainer for virtio-9p. You really should check the content of MAINTAINERS
> and Cc the appropriate people. :)
>
> Now, back to your question. Yes, there's only some partial support for
> hot-plug/unplug. Mostly because nobody cared to work on it I guess.
>
> So, indeed, we don't have fsdev_add/fsdev_del, i.e., we can only rely on
> shared directories specified on the QEMU command line.
>
> On the virtio-9p device side, hotplug is supported, i.e., device_add virtio-9p
> works as expected.
>
> Hot-unplug is different as it requires some coordination with the guest. The
> current status is that it requires the 9p shared directory to be unmounted
> in the guest: if you do device_del while the directory is mounted in a Linux
> guest, you'll get this message in the guest syslog:
>
> kernel:9pnet_virtio virtio2: p9_virtio_remove: waiting for device in use.
>
> If the 9p directory is unmounted at some point, then the hot unplug
> sequence will eventually succeed.
>
> But this shouldn't be done like this: the guest should cancel inflight
> requests and cause any new I/O requests in the guest to fail right away.
> I have a tentative patch for the Linux driver I can share if you want.
>
> Cheers,
>
> --
> Greg
>
> .
>


Thanks, Greg.

Indeed, I got those error messages while trying to do device_del with the
shared directory mounted in the guest. I really would like to see your
patch for the Linux driver.

About virtio-9p, we are actually considering using it for hypervisor-based
containers; hotplug/unplug support would make that better, and I believe
more people will work on it.


Sochin.






Re: [Qemu-devel] virtio-9p: hot-plug/unplug about virtio-9p device

2018-01-08 Thread Greg Kurz
On Tue, 19 Dec 2017 13:41:12 +0800
sochin.jiang  wrote:

> Hi, guys.
> 
> I've been looking into hot-plug/unplug support for the virtio-9p
> device recently, and found that it is lacking.
> 
> I am wondering why? Is there a reason? Actually, I wrote a QMP command
> to support fsdev_add, after which a device_add QMP command successfully
> adds a virtio-9p device (just like virtio-blk).
> 
> Are there any concerns that have kept this from being supported? Hoping
> for a reply, thanks.
> 
> 
> sochin.
> 
> 

Hi Sochin,

I've just discovered this mail by chance. Please note I'm the only active
maintainer for virtio-9p. You really should check the content of MAINTAINERS
and Cc the appropriate people. :)

Now, back to your question. Yes, there's only some partial support for
hot-plug/unplug. Mostly because nobody cared to work on it I guess.

So, indeed, we don't have fsdev_add/fsdev_del, i.e., we can only rely on
shared directories specified on the QEMU command line.

On the virtio-9p device side, hotplug is supported, i.e., device_add virtio-9p
works as expected.

Hot-unplug is different as it requires some coordination with the guest. The
current status is that it requires the 9p shared directory to be unmounted
in the guest: if you do device_del while the directory is mounted in a Linux
guest, you'll get this message in the guest syslog:

kernel:9pnet_virtio virtio2: p9_virtio_remove: waiting for device in use.

If the 9p directory is unmounted at some point, then the hot unplug
sequence will eventually succeed.

But this shouldn't be done like this: the guest should cancel inflight
requests and cause any new I/O requests in the guest to fail right away.
I have a tentative patch for the Linux driver I can share if you want.
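To make the current status concrete, here is a sketch of the flow using the
HMP monitor; the ids (shared0, p9dev0) and paths are illustrative, not taken
from this thread:

```shell
# QEMU must be started with the fsdev on the command line, since there is
# no fsdev_add command (ids and paths below are made up for illustration):
qemu-system-x86_64 ... \
    -fsdev local,id=shared0,path=/srv/share,security_model=none

# Hot-plug works as expected, e.g. from the HMP monitor:
(qemu) device_add virtio-9p-pci,fsdev=shared0,mount_tag=shared0,id=p9dev0

# Hot-unplug only completes once the guest has unmounted the share;
# while it is still mounted, the guest kernel just logs
# "p9_virtio_remove: waiting for device in use."
guest# umount /mnt/shared0
(qemu) device_del p9dev0
```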

Cheers,

--
Greg



Re: [Qemu-devel] Virtio-9p and cgroup io-throttling

2016-04-11 Thread Pradeep Kiruvale
Hi Greg,

Yes, it had nothing to do with virtio-9p. I was writing with 4k blocks;
now I have changed it to 1k blocks. This works fine for me.

Thanks for your help.

Regards,
Pradeep

On 8 April 2016 at 16:58, Greg Kurz  wrote:

> On Fri, 8 Apr 2016 14:55:29 +0200
> Pradeep Kiruvale  wrote:
>
> > Hi Greg,
> >
> > Find my replies inline
> >
> > >
> > > > Below is how I add the limit to blkio:
> > > >
> > > > echo "8:16 8388608" >
> > > > /sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device
> > > >
> > >
> > > Ok, this just puts a limit of 8MB/s when writing to /dev/sdb for all
> > > tasks in the test cgroup... but what about the tasks themselves ?
> > >
> > > > The problem, I guess, is adding these task IDs to the "tasks" file
> > > > in the cgroup.
> > > >
> > >
> > > Exactly. :)
> > >
> > > > These threads are started randomly, and even when I add their PIDs
> > > > to the tasks file, the cgroup still does not do IO control.
> > > >
> > >
> > > How did you get the PIDs ? Are you sure these threads you have added
> to the
> > > cgroup are the ones that write to /dev/sdb ?
> > >
> >
> > *Yes, I get PIDs from /proc/Qemu_PID/task*
> >
>
> And then you echoed the PIDs to /sys/fs/cgroup/blkio/test/tasks ?
>
> This is racy... another IO thread may be started to do some work on
> /dev/sdb
> just after you've read PIDs from /proc/Qemu_PID/task, and it won't be part
> of the cgroup.
>
> >
> >
> > >
> > > > Is it possible to reduce the number of threads? I see a different
> > > > number of threads doing IO at different runs.
> > > >
> > >
> > > AFAIK, no.
> > >
> > > Why don't you simply start QEMU in the cgroup ? Unless I miss
> > > something, all children threads, including the 9p ones, will be in
> > > the cgroup and honor the throttle settings.
> > >
> >
> >
> > *I started the qemu with cgroup as below*
> >
> > *cgexec -g blkio:/test qemu...*
> > *Is there any other way of starting the qemu in cgroup?*
> >
>
> Maybe you can pass --sticky to cgexec to prevent cgred from moving
> children tasks to other cgroups...
>
> There's also the old-fashioned method:
>
> # echo $$ > /sys/fs/cgroup/blkio/test/tasks
> # qemu.
>
> This being said, QEMU is a regular userspace program that is completely
> cgroup
> agnostic. It won't behave differently than 'dd if=/dev/sdb of=/dev/null'.
>
> This really doesn't look like a QEMU related issue to me.
>
> > Regards,
> > Pradeep
> >
>
> Cheers.
>
> --
> Greg
>
> >
> > >
> > > > Regards,
> > > > Pradeep
> > > >
> > >
> > > Cheers.
> > >
> > > --
> > > Greg
> > >
> > > >
> > > > On 8 April 2016 at 10:10, Greg Kurz 
> wrote:
> > > >
> > > > > On Thu, 7 Apr 2016 11:48:27 +0200
> > > > > Pradeep Kiruvale  wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > I am using virtio-9p to share a file between host and guest. To
> > > > > > test the shared file I do read/write operations in the guest. To
> > > > > > have controlled IO, I am using cgroup blkio.
> > > > > >
> > > > > > While using cgroup I am facing two issues; please find them below.
> > > > > >
> > > > > > 1. When I do IO throttling using the cgroup, the read throttling
> > > > > > works fine but the write throttling does not work. It still
> > > > > > bypasses the throttling control and does the default; am I missing
> > > > > > something here?
> > > > > >
> > > > >
> > > > > Hi,
> > > > >
> > > > > Can you provide details on your blkio setup ?
> > > > >
> > > > > > I use the following commands to create VM, share the files and to
> > > > > > read/write from guest.
> > > > > >
> > > > > > *Create vm*
> > > > > > qemu-system-x86_64 -balloon none ...-name vm0 -cpu host -m
> 128
> > > -smp 1
> > > > > > -enable-kvm -parallel  -fsdev
> > > > > >
> local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate
> > > > > -device
> > > > > > virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1
> > > > > >
> > > > > > *Mount file*
> > > > > > mount -t 9p -o trans=virtio,version=9p2000.L sdb1 /sdb1_ext4
> > > 2>>dd.log &&
> > > > > > sync
> > > > > >
> > > > > > touch /sdb1_ext4/dddrive
> > > > > >
> > > > > > *Write test*
> > > > > > dd if=/dev/zero of=/sdb1_ext4/dddrive bs=4k count=80
> > > oflag=direct >>
> > > > > > dd.log 2>&1 && sync
> > > > > >
> > > > > > *Read test*
> > > > > > dd if=/sdb1_ext4/dddrive of=/dev/null >> dd.log 2>&1 && sync
> > > > > >
> > > > > > 2. The other issue is when I run "dd" command inside guest  it
> > > creates
> > > > > > multiple threads to write/read. I can see those on host using
> iotop
> > > is
> > > > > this
> > > > > > expected behavior?
> > > > > >
> > > > >
> > > > > Yes. QEMU uses a thread pool to handle 9p requests.
> > > > >
> > > > > > Regards,
> > > > > > Pradeep
> > > > >
> > > > > Cheers.
> > > > >
> > > > > --
> > > > > Greg
> > > > >
> > > > >
> > >
> > >
>
>
>


Re: [Qemu-devel] Virtio-9p and cgroup io-throttling

2016-04-08 Thread Greg Kurz
On Fri, 8 Apr 2016 14:55:29 +0200
Pradeep Kiruvale  wrote:

> Hi Greg,
> 
> Find my replies inline
> 
> >
> > > Below is how I add the limit to blkio:
> > >
> > > echo "8:16 8388608" >
> > > /sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device
> > >
> >
> > Ok, this just puts a limit of 8MB/s when writing to /dev/sdb for all
> > tasks in the test cgroup... but what about the tasks themselves ?
> >
> > > The problem, I guess, is adding these task IDs to the "tasks" file in
> > > the cgroup.
> > >
> >
> > Exactly. :)
> >
> > > These threads are started randomly, and even when I add their PIDs to
> > > the tasks file, the cgroup still does not do IO control.
> > >
> >
> > How did you get the PIDs ? Are you sure these threads you have added to the
> > cgroup are the ones that write to /dev/sdb ?
> >
> 
> *Yes, I get PIDs from /proc/Qemu_PID/task*
> 

And then you echoed the PIDs to /sys/fs/cgroup/blkio/test/tasks ?

This is racy... another IO thread may be started to do some work on /dev/sdb
just after you've read PIDs from /proc/Qemu_PID/task, and it won't be part
of the cgroup.

> 
> 
> >
> > > Is it possible to reduce the number of threads? I see a different
> > > number of threads doing IO at different runs.
> > >
> >
> > AFAIK, no.
> >
> > Why don't you simply start QEMU in the cgroup ? Unless I miss something,
> > all children threads, including the 9p ones, will be in the cgroup and
> > honor the throttle settings.
> >
> 
> 
> *I started the qemu with cgroup as below*
> 
> *cgexec -g blkio:/test qemu...*
> *Is there any other way of starting the qemu in cgroup?*
> 

Maybe you can pass --sticky to cgexec to prevent cgred from moving
children tasks to other cgroups...

There's also the old-fashioned method:

# echo $$ > /sys/fs/cgroup/blkio/test/tasks
# qemu.

This being said, QEMU is a regular userspace program that is completely cgroup
agnostic. It won't behave differently than 'dd if=/dev/sdb of=/dev/null'.

This really doesn't look like a QEMU related issue to me.
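Putting the pieces above together, a minimal sketch of the whole setup (cgroup
v1 as in this thread; the cgroup name "test" and device 8:16 come from the
thread, the rest is illustrative):

```shell
# Create the cgroup and set the write limit (8:16 is /dev/sdb,
# 8388608 bytes/s = 8 MB/s, both taken from the thread):
mkdir -p /sys/fs/cgroup/blkio/test
echo "8:16 8388608" > /sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device

# Move the current shell into the cgroup, then start QEMU from it.
# Every thread QEMU creates later -- including the transient 9p worker
# threads -- inherits the cgroup, so there is no race with thread startup.
echo $$ > /sys/fs/cgroup/blkio/test/tasks
qemu-system-x86_64 ...
```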

> Regards,
> Pradeep
> 

Cheers.

--
Greg

> 
> >
> > > Regards,
> > > Pradeep
> > >
> >
> > Cheers.
> >
> > --
> > Greg
> >
> > >
> > > On 8 April 2016 at 10:10, Greg Kurz  wrote:
> > >
> > > > On Thu, 7 Apr 2016 11:48:27 +0200
> > > > Pradeep Kiruvale  wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I am using virtio-9p to share a file between host and guest. To
> > > > > test the shared file I do read/write operations in the guest. To
> > > > > have controlled IO, I am using cgroup blkio.
> > > > >
> > > > > While using cgroup I am facing two issues; please find them below.
> > > > >
> > > > > 1. When I do IO throttling using the cgroup, the read throttling
> > > > > works fine but the write throttling does not work. It still bypasses
> > > > > the throttling control and does the default; am I missing something
> > > > > here?
> > > > >
> > > >
> > > > Hi,
> > > >
> > > > Can you provide details on your blkio setup ?
> > > >
> > > > > I use the following commands to create VM, share the files and to
> > > > > read/write from guest.
> > > > >
> > > > > *Create vm*
> > > > > qemu-system-x86_64 -balloon none ...-name vm0 -cpu host -m 128
> > -smp 1
> > > > > -enable-kvm -parallel  -fsdev
> > > > > local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate
> > > > -device
> > > > > virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1
> > > > >
> > > > > *Mount file*
> > > > > mount -t 9p -o trans=virtio,version=9p2000.L sdb1 /sdb1_ext4
> > 2>>dd.log &&
> > > > > sync
> > > > >
> > > > > touch /sdb1_ext4/dddrive
> > > > >
> > > > > *Write test*
> > > > > dd if=/dev/zero of=/sdb1_ext4/dddrive bs=4k count=80
> > oflag=direct >>
> > > > > dd.log 2>&1 && sync
> > > > >
> > > > > *Read test*
> > > > > dd if=/sdb1_ext4/dddrive of=/dev/null >> dd.log 2>&1 && sync
> > > > >
> > > > > 2. The other issue is when I run "dd" command inside guest  it
> > creates
> > > > > multiple threads to write/read. I can see those on host using iotop
> > is
> > > > this
> > > > > expected behavior?
> > > > >
> > > >
> > > > Yes. QEMU uses a thread pool to handle 9p requests.
> > > >
> > > > > Regards,
> > > > > Pradeep
> > > >
> > > > Cheers.
> > > >
> > > > --
> > > > Greg
> > > >
> > > >
> >
> >




Re: [Qemu-devel] Virtio-9p and cgroup io-throttling

2016-04-08 Thread Pradeep Kiruvale
Hi Greg,

Find my replies inline

>
> > Below is how I add the limit to blkio:
> >
> > echo "8:16 8388608" >
> > /sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device
> >
>
> Ok, this just puts a limit of 8MB/s when writing to /dev/sdb for all
> tasks in the test cgroup... but what about the tasks themselves ?
>
> > The problem, I guess, is adding these task IDs to the "tasks" file in
> > the cgroup.
> >
>
> Exactly. :)
>
> > These threads are started randomly, and even when I add their PIDs to
> > the tasks file, the cgroup still does not do IO control.
> >
>
> How did you get the PIDs ? Are you sure these threads you have added to the
> cgroup are the ones that write to /dev/sdb ?
>

*Yes, I get PIDs from /proc/Qemu_PID/task*



>
> > Is it possible to reduce the number of threads? I see a different
> > number of threads doing IO at different runs.
> >
>
> AFAIK, no.
>
> Why don't you simply start QEMU in the cgroup ? Unless I miss something,
> all children threads, including the 9p ones, will be in the cgroup and
> honor the throttle settings.
>


*I started the qemu with cgroup as below*

*cgexec -g blkio:/test qemu...*
*Is there any other way of starting the qemu in cgroup?*

Regards,
Pradeep


>
> > Regards,
> > Pradeep
> >
>
> Cheers.
>
> --
> Greg
>
> >
> > On 8 April 2016 at 10:10, Greg Kurz  wrote:
> >
> > > On Thu, 7 Apr 2016 11:48:27 +0200
> > > Pradeep Kiruvale  wrote:
> > >
> > > > Hi All,
> > > >
> > > > I am using virtio-9p to share a file between host and guest. To
> > > > test the shared file I do read/write operations in the guest. To
> > > > have controlled IO, I am using cgroup blkio.
> > > >
> > > > While using cgroup I am facing two issues; please find them below.
> > > >
> > > > 1. When I do IO throttling using the cgroup, the read throttling
> > > > works fine but the write throttling does not work. It still bypasses
> > > > the throttling control and does the default; am I missing something
> > > > here?
> > > >
> > >
> > > Hi,
> > >
> > > Can you provide details on your blkio setup ?
> > >
> > > > I use the following commands to create VM, share the files and to
> > > > read/write from guest.
> > > >
> > > > *Create vm*
> > > > qemu-system-x86_64 -balloon none ...-name vm0 -cpu host -m 128
> -smp 1
> > > > -enable-kvm -parallel  -fsdev
> > > > local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate
> > > -device
> > > > virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1
> > > >
> > > > *Mount file*
> > > > mount -t 9p -o trans=virtio,version=9p2000.L sdb1 /sdb1_ext4
> 2>>dd.log &&
> > > > sync
> > > >
> > > > touch /sdb1_ext4/dddrive
> > > >
> > > > *Write test*
> > > > dd if=/dev/zero of=/sdb1_ext4/dddrive bs=4k count=80
> oflag=direct >>
> > > > dd.log 2>&1 && sync
> > > >
> > > > *Read test*
> > > > dd if=/sdb1_ext4/dddrive of=/dev/null >> dd.log 2>&1 && sync
> > > >
> > > > 2. The other issue is when I run "dd" command inside guest  it
> creates
> > > > multiple threads to write/read. I can see those on host using iotop
> is
> > > this
> > > > expected behavior?
> > > >
> > >
> > > Yes. QEMU uses a thread pool to handle 9p requests.
> > >
> > > > Regards,
> > > > Pradeep
> > >
> > > Cheers.
> > >
> > > --
> > > Greg
> > >
> > >
>
>


Re: [Qemu-devel] Virtio-9p and cgroup io-throttling

2016-04-08 Thread Greg Kurz
On Fri, 8 Apr 2016 11:51:05 +0200
Pradeep Kiruvale  wrote:

> Hi Greg,
> 
> Thanks for your reply.
> 
> Below is how I add the limit to blkio:
> 
> echo "8:16 8388608" >
> /sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device
> 

Ok, this just puts a limit of 8MB/s when writing to /dev/sdb for all
tasks in the test cgroup... but what about the tasks themselves ?

> The problem, I guess, is adding these task IDs to the "tasks" file in
> the cgroup.
> 

Exactly. :)

> These threads are started randomly, and even when I add their PIDs to the
> tasks file, the cgroup still does not do IO control.
> 

How did you get the PIDs ? Are you sure these threads you have added to the
cgroup are the ones that write to /dev/sdb ?

> Is it possible to reduce the number of threads? I see a different number
> of threads doing IO at different runs.
> 

AFAIK, no.

Why don't you simply start QEMU in the cgroup ? Unless I miss something, all
children threads, including the 9p ones, will be in the cgroup and honor the
throttle settings.

> Regards,
> Pradeep
> 

Cheers.

--
Greg

> 
> On 8 April 2016 at 10:10, Greg Kurz  wrote:
> 
> > On Thu, 7 Apr 2016 11:48:27 +0200
> > Pradeep Kiruvale  wrote:
> >
> > > Hi All,
> > >
> > > I am using virtio-9p to share a file between host and guest. To test
> > > the shared file I do read/write operations in the guest. To have
> > > controlled IO, I am using cgroup blkio.
> > >
> > > While using cgroup I am facing two issues; please find them below.
> > >
> > > 1. When I do IO throttling using the cgroup, the read throttling works
> > > fine but the write throttling does not work. It still bypasses the
> > > throttling control and does the default; am I missing something here?
> > >
> >
> > Hi,
> >
> > Can you provide details on your blkio setup ?
> >
> > > I use the following commands to create VM, share the files and to
> > > read/write from guest.
> > >
> > > *Create vm*
> > > qemu-system-x86_64 -balloon none ...-name vm0 -cpu host -m 128 -smp 1
> > > -enable-kvm -parallel  -fsdev
> > > local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate
> > -device
> > > virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1
> > >
> > > *Mount file*
> > > mount -t 9p -o trans=virtio,version=9p2000.L sdb1 /sdb1_ext4 2>>dd.log &&
> > > sync
> > >
> > > touch /sdb1_ext4/dddrive
> > >
> > > *Write test*
> > > dd if=/dev/zero of=/sdb1_ext4/dddrive bs=4k count=80 oflag=direct >>
> > > dd.log 2>&1 && sync
> > >
> > > *Read test*
> > > dd if=/sdb1_ext4/dddrive of=/dev/null >> dd.log 2>&1 && sync
> > >
> > > 2. The other issue is when I run "dd" command inside guest  it creates
> > > multiple threads to write/read. I can see those on host using iotop is
> > this
> > > expected behavior?
> > >
> >
> > Yes. QEMU uses a thread pool to handle 9p requests.
> >
> > > Regards,
> > > Pradeep
> >
> > Cheers.
> >
> > --
> > Greg
> >
> >




Re: [Qemu-devel] Virtio-9p and cgroup io-throttling

2016-04-08 Thread Pradeep Kiruvale
Hi Greg,

Thanks for your reply.

Below is how I add the limit to blkio:

echo "8:16 8388608" >
/sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device

The problem, I guess, is adding these task IDs to the "tasks" file in the
cgroup.

These threads are started randomly, and even when I add their PIDs to the
tasks file, the cgroup still does not do IO control.

Is it possible to reduce the number of threads? I see a different number of
threads doing IO at different runs.

Regards,
Pradeep


On 8 April 2016 at 10:10, Greg Kurz  wrote:

> On Thu, 7 Apr 2016 11:48:27 +0200
> Pradeep Kiruvale  wrote:
>
> > Hi All,
> >
> > I am using virtio-9p to share a file between host and guest. To test
> > the shared file I do read/write operations in the guest. To have
> > controlled IO, I am using cgroup blkio.
> >
> > While using cgroup I am facing two issues; please find them below.
> >
> > 1. When I do IO throttling using the cgroup, the read throttling works
> > fine but the write throttling does not work. It still bypasses the
> > throttling control and does the default; am I missing something here?
> >
>
> Hi,
>
> Can you provide details on your blkio setup ?
>
> > I use the following commands to create VM, share the files and to
> > read/write from guest.
> >
> > *Create vm*
> > qemu-system-x86_64 -balloon none ...-name vm0 -cpu host -m 128 -smp 1
> > -enable-kvm -parallel  -fsdev
> > local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate
> -device
> > virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1
> >
> > *Mount file*
> > mount -t 9p -o trans=virtio,version=9p2000.L sdb1 /sdb1_ext4 2>>dd.log &&
> > sync
> >
> > touch /sdb1_ext4/dddrive
> >
> > *Write test*
> > dd if=/dev/zero of=/sdb1_ext4/dddrive bs=4k count=80 oflag=direct >>
> > dd.log 2>&1 && sync
> >
> > *Read test*
> > dd if=/sdb1_ext4/dddrive of=/dev/null >> dd.log 2>&1 && sync
> >
> > 2. The other issue is when I run "dd" command inside guest  it creates
> > multiple threads to write/read. I can see those on host using iotop is
> this
> > expected behavior?
> >
>
> Yes. QEMU uses a thread pool to handle 9p requests.
>
> > Regards,
> > Pradeep
>
> Cheers.
>
> --
> Greg
>
>


Re: [Qemu-devel] Virtio-9p and cgroup io-throttling

2016-04-08 Thread Greg Kurz
On Thu, 7 Apr 2016 11:48:27 +0200
Pradeep Kiruvale  wrote:

> Hi All,
> 
> I am using virtio-9p to share a file between host and guest. To test
> the shared file I do read/write operations in the guest. To have
> controlled IO, I am using cgroup blkio.
> 
> While using cgroup I am facing two issues; please find them below.
> 
> 1. When I do IO throttling using the cgroup, the read throttling works
> fine but the write throttling does not work. It still bypasses the
> throttling control and does the default; am I missing something here?
> 

Hi,

Can you provide details on your blkio setup ?

> I use the following commands to create VM, share the files and to
> read/write from guest.
> 
> *Create vm*
> qemu-system-x86_64 -balloon none ...-name vm0 -cpu host -m 128 -smp 1
> -enable-kvm -parallel  -fsdev
> local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate -device
> virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1
> 
> *Mount file*
> mount -t 9p -o trans=virtio,version=9p2000.L sdb1 /sdb1_ext4 2>>dd.log &&
> sync
> 
> touch /sdb1_ext4/dddrive
> 
> *Write test*
> dd if=/dev/zero of=/sdb1_ext4/dddrive bs=4k count=80 oflag=direct >>
> dd.log 2>&1 && sync
> 
> *Read test*
> dd if=/sdb1_ext4/dddrive of=/dev/null >> dd.log 2>&1 && sync
> 
> 2. The other issue is when I run "dd" command inside guest  it creates
> multiple threads to write/read. I can see those on host using iotop is this
> expected behavior?
> 

Yes. QEMU uses a thread pool to handle 9p requests.
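The worker threads can be observed from the host; for instance (the binary
name is an assumption, adjust to your QEMU invocation):

```shell
# Count the threads of the QEMU process while dd runs in the guest:
watch -n1 'ls /proc/$(pidof qemu-system-x86_64)/task | wc -l'
```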

> Regards,
> Pradeep

Cheers.

--
Greg




Re: [Qemu-devel] Virtio-9p

2016-04-04 Thread Pradeep Kiruvale
 Hi Greg,

Thanks for your reply.




>
> I'm Cc'ing qemu-devel like in your previous posts, so QEMU experts may
> jump in.
>
> > What I understand from the requirement for our project is if we use
> > virtio-blk it caches the pages in the guest. We would like to avoid that
>
> AFAIK this is true when you pass cache=none to -drive on the QEMU command
> line. But there are other options such as writeback or writethrough, which
> rely on the host page cache.
>
Yes, we did explore these options already.


> > and also we want to share those pages across multiple guests and when
> they
> > need some data they should get it from the host instead of using the
> cached
> > data at each guest.
> >
>
> So you want all the guests to attach to the same block device or backing
> file in
> the host, correct ? AFAIK we cannot do that with virtio-blk indeed... and
> virtio-9p
> is only about sharing files, not block devices.
>
>
Yes, we want to share the files.


> Maybe you could share a big file between the host and all guests with 9p,
> and
> each guest can use the loop device to access the file as a block device...
> but
> even then, you'd have to deal with concurrent accesses...
>
> > Basically we are trying to cut down the memory footprint of the guests.
> >
>
> If you're using KVM and your guests run the same distro or application,
> you may try to use KSM (Kernel Same-page Merging) in the host.
>
We are using these things also; we want to reduce the footprint as much
as possible.

Regards,
Pradeep


> > Regards,
> > Pradeep
> >
> > On 31 March 2016 at 18:12, Greg Kurz  wrote:
> >
> > > On Wed, 30 Mar 2016 16:27:48 +0200
> > > Pradeep Kiruvale  wrote:
> > >
> > > > Hi Greg,
> > > >
> > >
> > > Hi Pradeep,
> > >
> > > > Thanks for the reply.
> > > >
> > > > Let me put it this way, virtio-blk-pci is used for block IO on the
> > > devices
> > > > shared between the guest and the host.
> > >
> > > I don't really understand the "devices shared between the guest and the
> > > host" wording... virtio-blk-pci exposes a virtio-blk device through PCI
> > > to the guest. The virtio-blk device can be backed by a file or a block
> > > device from the host.
> > >
> > > > Here I want to share the file and have QoS between the guests. So I
> am
> > > > using the Virtio-9p-pci.
> > > >
> > >
> > > What file ?
> > >
> > > > Basically I want to have QoS for virtio-9p-pci.
> > > >
> > >
> > > Can you provide a more detailed scenario on the result you want to
> reach ?
> > >
> > > > Regards,
> > > > Pradeep
> > > >
> > >
> > > Cheers.
> > >
> > > --
> > > Greg
> > >
> > > > On 30 March 2016 at 16:13, Greg Kurz 
> wrote:
> > > >
> > > > > On Wed, 30 Mar 2016 14:10:38 +0200
> > > > > Pradeep Kiruvale  wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > Does the virtio-9p-pci device only support fsdev devices? I am
> > > > > > trying to use the -drive option to apply QoS to a block device
> > > > > > using the virtio-9p-pci device, but I am failing to create/add
> > > > > > any device other than fsdev. Can you please help me with this?
> > > > > >
> > > > > > Regards,
> > > > > > Pradeep
> > > > >
> > > > > Hi Pradeep,
> > > > >
> > > > > Not sure I catch what you want to do, but I confirm that
> > > > > virtio-9p-pci only supports fsdev... if you want a block device,
> > > > > why don't you use virtio-blk-pci ?
> > > > >
> > > > > Cheers.
> > > > >
> > > > > --
> > > > > Greg
> > > > >
> > > > >
> > >
> > >
>
>


Re: [Qemu-devel] Virtio-9p

2016-04-01 Thread Greg Kurz
On Fri, 1 Apr 2016 09:09:32 +0200
Pradeep Kiruvale  wrote:

> Hi Greg,
> 

Hi Pradeep,

I'm Cc'ing qemu-devel like in your previous posts, so QEMU experts may jump in.

> What I understand from the requirement for our project is if we use
> virtio-blk it caches the pages in the guest. We would like to avoid that

AFAIK this is true when you pass cache=none to -drive on the QEMU command
line. But there are other options such as writeback or writethrough, which
rely on the host page cache.

> and also we want to share those pages across multiple guests and when they
> need some data they should get it from the host instead of using the cached
> data at each guest.
> 

So you want all the guests to attach to the same block device or backing file in
the host, correct ? AFAIK we cannot do that with virtio-blk indeed... and 
virtio-9p
is only about sharing files, not block devices.

Maybe you could share a big file between the host and all guests with 9p, and
each guest can use the loop device to access the file as a block device... but
even then, you'd have to deal with concurrent accesses...
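As a rough illustration of that idea, inside one guest (the mount tag, paths
and loop device are made up; nothing here solves the concurrent-access
problem):

```shell
# Mount the 9p share, then expose a big shared file as a block device:
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/9p
losetup /dev/loop0 /mnt/9p/disk.img
mount /dev/loop0 /data
# Warning: mounting the same image read-write from several guests at
# once will corrupt it -- some coordination is still needed.
```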

> Basically we are trying to cut down the memory footprint of the guests.
> 

If you're using KVM and your guests run the same distro or application,
you may try to use KSM (Kernel Same-page Merging) in the host.
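For completeness, KSM is controlled through sysfs on the host; a minimal
sketch (the tuning value is illustrative):

```shell
# Start the ksmd scanner; QEMU marks guest RAM as mergeable by default
# (unless started with -machine mem-merge=off):
echo 1 > /sys/kernel/mm/ksm/run
echo 1000 > /sys/kernel/mm/ksm/pages_to_scan
# Check how many pages are currently being shared:
cat /sys/kernel/mm/ksm/pages_sharing
```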

> Regards,
> Pradeep
> 
> On 31 March 2016 at 18:12, Greg Kurz  wrote:
> 
> > On Wed, 30 Mar 2016 16:27:48 +0200
> > Pradeep Kiruvale  wrote:
> >
> > > Hi Greg,
> > >
> >
> > Hi Pradeep,
> >
> > > Thanks for the reply.
> > >
> > > Let me put it this way, virtio-blk-pci is used for block IO on the
> > devices
> > > shared between the guest and the host.
> >
> > I don't really understand the "devices shared between the guest and the
> > host" wording... virtio-blk-pci exposes a virtio-blk device through PCI
> > to the guest. The virtio-blk device can be backed by a file or a block
> > device from the host.
> >
> > > Here I want to share the file and have QoS between the guests. So I am
> > > using the Virtio-9p-pci.
> > >
> >
> > What file ?
> >
> > > Basically I want to have QoS for virtio-9p-pci.
> > >
> >
> > Can you provide a more detailed scenario on the result you want to reach ?
> >
> > > Regards,
> > > Pradeep
> > >
> >
> > Cheers.
> >
> > --
> > Greg
> >
> > > On 30 March 2016 at 16:13, Greg Kurz  wrote:
> > >
> > > > On Wed, 30 Mar 2016 14:10:38 +0200
> > > > Pradeep Kiruvale  wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > Does the virtio-9p-pci device only support fsdev devices? I am
> > > > > trying to use the -drive option to apply QoS to a block device
> > > > > through the virtio-9p-pci device, but I fail to create/add any
> > > > > device other than an fsdev. Can you please help me with this?
> > > > >
> > > > > Regards,
> > > > > Pradeep
> > > >
> > > > Hi Pradeep,
> > > >
> > > > Not sure I catch what you want to do, but I confirm that
> > > > virtio-9p-pci only supports fsdev... if you want a block device,
> > > > why don't you use virtio-blk-pci?
> > > >
> > > > Cheers.
> > > >
> > > > --
> > > > Greg
> > > >
> > > >
> >
> >




Re: [Qemu-devel] Virtio-9p

2016-03-31 Thread Greg Kurz
On Wed, 30 Mar 2016 16:27:48 +0200
Pradeep Kiruvale  wrote:

> Hi Greg,
> 

Hi Pradeep,

> Thanks for the reply.
> 
> Let me put it this way, virtio-blk-pci is used for block IO on the devices
> shared between the guest and the host.

I don't really understand the "devices shared between the guest and the
host" wording... virtio-blk-pci exposes a virtio-blk device through PCI
to the guest. The virtio-blk device can be backed by a file or a block
device from the host.

> Here I want to share the file and have QoS between the guests. So I am
> using the Virtio-9p-pci.
> 

What file ?

> Basically I want to have QoS for virtio-9p-pci.
> 

Can you provide a more detailed scenario on the result you want to reach ?

> Regards,
> Pradeep
> 

Cheers.

--
Greg

> On 30 March 2016 at 16:13, Greg Kurz  wrote:
> 
> > On Wed, 30 Mar 2016 14:10:38 +0200
> > Pradeep Kiruvale  wrote:
> >
> > > Hi All,
> > >
> > > Does the virtio-9p-pci device only support fsdev devices? I am trying
> > > to use the -drive option to apply QoS to a block device through the
> > > virtio-9p-pci device, but I fail to create/add any device other than
> > > an fsdev. Can you please help me with this?
> > >
> > > Regards,
> > > Pradeep
> >
> > Hi Pradeep,
> >
> > Not sure I catch what you want to do, but I confirm that virtio-9p-pci
> > only supports fsdev... if you want a block device, why don't you use
> > virtio-blk-pci?
> >
> > Cheers.
> >
> > --
> > Greg
> >
> >




Re: [Qemu-devel] Virtio-9p

2016-03-30 Thread Pradeep Kiruvale
Hi Greg,

Thanks for the reply.

Let me put it this way, virtio-blk-pci is used for block IO on the devices
shared between the guest and the host.
Here I want to share the file and have QoS between the guests. So I am
using the Virtio-9p-pci.

Basically I want to have QoS for virtio-9p-pci.

Regards,
Pradeep

On 30 March 2016 at 16:13, Greg Kurz  wrote:

> On Wed, 30 Mar 2016 14:10:38 +0200
> Pradeep Kiruvale  wrote:
>
> > Hi All,
> >
> > Does the virtio-9p-pci device only support fsdev devices? I am trying to
> > use the -drive option to apply QoS to a block device through the
> > virtio-9p-pci device, but I fail to create/add any device other than an
> > fsdev. Can you please help me with this?
> >
> > Regards,
> > Pradeep
>
> Hi Pradeep,
>
> Not sure I catch what you want to do, but I confirm that virtio-9p-pci
> only supports fsdev... if you want a block device, why don't you use
> virtio-blk-pci?
>
> Cheers.
>
> --
> Greg
>
>


Re: [Qemu-devel] Virtio-9p

2016-03-30 Thread Greg Kurz
On Wed, 30 Mar 2016 14:10:38 +0200
Pradeep Kiruvale  wrote:

> Hi All,
> 
> Does the virtio-9p-pci device only support fsdev devices? I am trying to
> use the -drive option to apply QoS to a block device through the
> virtio-9p-pci device, but I fail to create/add any device other than an
> fsdev. Can you please help me with this?
> 
> Regards,
> Pradeep

Hi Pradeep,

Not sure I catch what you want to do, but I confirm that virtio-9p-pci only
supports fsdev... if you want a block device, why don't you use
virtio-blk-pci?

Cheers.

--
Greg




Re: [Qemu-devel] virtio-9p

2015-08-10 Thread Linda


On 8/10/2015 4:10 AM, Stefan Hajnoczi wrote:

On Fri, Aug 07, 2015 at 10:21:47AM -0600, Linda wrote:
     As background, for the backend, I have been looking at the code, written
by Anthony Liguori, and maintained by Aneesh Kumar (who I sent this email
to, earlier).  It looks to me (please correct me if I'm wrong, on this or
any other point, below) as if Anthony wrote not just a backend transport
layer, but the server as well.  AFAICT, there is no other Linux 9p server.

There are other Linux 9P servers.  At least there is diod:
https://github.com/chaos/diod

Thank you.  I will look into that.

Anthony Liguori didn't write all of the virtio-9p code in QEMU.  Aneesh
Kumar, JV Rao, M. Mohan Kumar, and Harsh Prateek Bora did a lot of the
9P server development in QEMU.

Take a look at git shortlog -nse hw/9pfs

     virtio-9p.c contains a lot of this server code, the rest spread between
13 other files which handle all file access operations, converting them
from 9p to Linux file system calls.
     virtio-9p.c also contains some virtio-specific code (although most of
that is in virtio-device.c).

The problems I am encountering are the following:

1.  In virtio-9p.h there is a struct V9fsPDU that contains an element (in
the middle of the struct) of type VirtQueueElement.  Every 9p I/O command
handler, as well as the co-routines and support functions that go with them
(i.e., a large part of the server), passes a parameter that is a struct
V9fsPDU.  Almost all of these use only the variable that defines state
information, and never touch the VirtQueueElement member.
     The easiest fix for this is to have a separate header file with a
#define GENERIC_9P_SERVER
     Then I could modify virtio-9p.h with:
     #ifdef GENERIC_9P_SERVER
        a union of a void *, a char * (what I use), and a
        VirtQueueElement (guaranteeing the size is unchanged)
     #else
        VirtQueueElement elem;
     #endif

     It's not my favorite construct, but it would involve the least amount
of changes to the code.  Before I modify a header file that code I'm not
touching depends on, I wanted to know if this is an OK way.  If not, is
there another way (short of copying fourteen files, and changing the names
of all the functions in them, as well as the file names) that you would
prefer?

What is the goal of your project?

If you just want a Linux 9P server, use diod.  You might be able to find
other servers that suit your needs better too (e.g. programming
language, features, etc).

An #ifdef is ugly and if you are going to submit code upstream then a
cleaner solution should be used.  Either separate a VirtIO9fsPDU struct
that contains the generic 9pfsPDU as a field (so that container_of() can
be used to go from 9pfsPDU back to VirtIO9fsPDU).  Or add a void* into
the generic 9pfsPDU so transports can correlate the generic struct with
a transport-specific struct.
I agree about ifdefs being ugly.  I guess I was just trying to save
space - all I did was add a void * and a pointer to a function.


     2.  The other problem is that most of the "server" functions described
above end by calling complete_pdu.  Complete_pdu (which is defined in
virtio-9p.c) does many things that are generic, and also a few
virtio-specific operations (pushing to the virtqueue, etc.).
     Again, I can use a similar mechanism to the above.  Or is there some
other way you'd prefer?  I'm trying to find a way that has the least
impact on virtio/qemu maintainers.

The generic PDU struct could have a .complete() function pointer. This
is how the SCSI subsystem works, for example.  scsi_req_complete() calls
req->bus->info->complete(req, req->status, req->resid) so that the
bus-specific completion behavior is invoked.
It has a function pointer to a function complete that returns a pointer 
to a Coroutine.   But it uses (in dozens of places) a straight call to a 
complete function.


I will look into diod.   If there are any problems with it, I'll take 
your suggestions above.


Thanks.

Linda


Stefan




Re: [Qemu-devel] virtio-9p

2015-08-10 Thread Stefan Hajnoczi
On Fri, Aug 07, 2015 at 10:21:47AM -0600, Linda wrote:
> As background, for the backend, I have been looking at the code, written
> by Anthony Liguori, and maintained by Aneesh Kumar (who I sent this email
> to, earlier).  It looks to me (please correct me if I'm wrong, on this or
> any other point, below) as if Anthony wrote not just a backend transport
> layer, but the server as well.  AFAICT, there is no other Linux 9p server.

There are other Linux 9P servers.  At least there is diod:
https://github.com/chaos/diod

Anthony Liguori didn't write all of the virtio-9p code in QEMU.  Aneesh
Kumar, JV Rao, M. Mohan Kumar, and Harsh Prateek Bora did a lot of the
9P server development in QEMU.

Take a look at git shortlog -nse hw/9pfs

> virtio-9p.c contains a lot of this server code, the rest spread between
> 13 other files which handle all file access operations, converting them from
> 9p to Linux file system calls.
> virtio-9p.c also contains some virtio-specific code (although most of
> that is in virtio-device.c).
> 
> The problems I am encountering are the following:
> 
> 1.  In virtio-9p.h there is a struct V9fsPDU that contains an element (in the
> middle of the struct) of type VirtQueueElement. Every 9p I/O command
> handler, as well as co-routines and support functions that go with them
> (i.e., a large part of the server), passes a parameter that is a struct
> V9fsPDU.   Almost all of these use only the variable that defines state
> information, and never touch the VirtQueueElement member.
> The easiest fix for this is to have a separate header file with a
> #define GENERIC_9P_SERVER
> Then I could modify the virtio-9p.h with:
> #ifdef GENERIC_9P_SERVER
>a union of a void *, a char * (what I use), and a
> VirtQueueElement (guaranteeing the size is unchanged)
> #else
> VirtQueueElementelem;
> #endif
> 
> It's not my favorite construct, but it would involve the least amount of
> changes to the code.   Before I modify a header file, that code, I'm not
> touching, is dependent on, I wanted to know if this is an OK way.  If not,
> is there another way (short of copying fourteen files, and changing the
> names of all the functions in them, as well as the file names), that you
> would prefer?

What is the goal of your project?

If you just want a Linux 9P server, use diod.  You might be able to find
other servers that suit your needs better too (e.g. programming
language, features, etc).

An #ifdef is ugly and if you are going to submit code upstream then a
cleaner solution should be used.  Either separate a VirtIO9fsPDU struct
that contains the generic 9pfsPDU as a field (so that container_of() can
be used to go from 9pfsPDU back to VirtIO9fsPDU).  Or add a void* into
the generic 9pfsPDU so transports can correlate the generic struct with
a transport-specific struct.

> 2.  The other problem, is that most of the "server" functions described
> above, end by calling complete_pdu.   Complete_pdu (which is defined in
> virtio-9p.c) does many things that are generic, and also a few
> virtio-specific operations (pushing to the virtqueue, etc.).
> Again, I can use a similar mechanism to the above.  Or is there some
> other way you'd prefer? I'm trying to find a way that has the least impact
> on virtio/qemu maintainers.

The generic PDU struct could have a .complete() function pointer.  This
is how the SCSI subsystem works, for example.  scsi_req_complete() calls
req->bus->info->complete(req, req->status, req->resid) so that the
bus-specific completion behavior is invoked.

Stefan




Re: [Qemu-devel] Virtio 9p live migration patches

2013-04-04 Thread Benoît Canet

There is still the need to serialize the fid linked list.
I saw in the current code base that virtio-blk.c was using the old API to
serialize a linked list.

Is it worth writing support for serializing a linked list in vmstate?
Or is it better to keep using the old serialization API?

Regards

Benoît

> Le Thursday 04 Apr 2013 à 15:42:06 (+0200), Paolo Bonzini a écrit :
> Il 04/04/2013 14:37, Benoît Canet ha scritto:
> > We also need to ensure new 9p requests are blocked.
> 
> Migration runs with the VM paused.  It would be simplest to flush all
> the requests before migrating, that's what the block layer does.  (The
> migration of requests we have in virtio-blk, scsi-disk etc. is only for
> rerror/werror=stop; it is not invoked in the common case).
> 
> Paolo
> 



Re: [Qemu-devel] Virtio 9p live migration patches

2013-04-04 Thread Paolo Bonzini
Il 04/04/2013 14:37, Benoît Canet ha scritto:
> We also need to ensure new 9p requests are blocked.

Migration runs with the VM paused.  It would be simplest to flush all
the requests before migrating, that's what the block layer does.  (The
migration of requests we have in virtio-blk, scsi-disk etc. is only for
rerror/werror=stop; it is not invoked in the common case).

Paolo



Re: [Qemu-devel] Virtio 9p live migration patches

2013-04-04 Thread Benoît Canet

Thanks for the explanations I'll start working on the patchset.

Best regards

Benoît

> Le Wednesday 03 Apr 2013 à 12:03:13 (+0530), Aneesh Kumar K.V a écrit :
> Benoît Canet  writes:
> 
> > Thanks a lot,
> >
> > Do you have an idea of what is left to be done on it ?
> >
> 
> It has to be rewritten with the new migration model. There is also the
> problem of how much time it takes to migrate the fid information. We also
> need to ensure new 9p requests are blocked. Another option is to add
> migration support to the protocol and handle this in the clients.
> 
> -aneesh
> 
> 



Re: [Qemu-devel] Virtio 9p live migration patches

2013-04-02 Thread Aneesh Kumar K.V
Benoît Canet  writes:

> Thanks a lot,
>
> Do you have an idea of what is left to be done on it ?
>

It has to be rewritten with the new migration model. There is also the
problem of how much time it takes to migrate the fid information. We also
need to ensure new 9p requests are blocked. Another option is to add
migration support to the protocol and handle this in the clients.

-aneesh




Re: [Qemu-devel] Virtio 9p live migration patches

2013-04-02 Thread Benoît Canet

Thanks a lot,

Do you have an idea of what is left to be done on it ?

Best regards

Benoît

> Le Wednesday 03 Apr 2013 à 00:00:53 (+0530), Aneesh Kumar K.V a écrit :
> Benoît Canet  writes:
> 
> > Hello Aneesh,
> >
> > I am interested in working on 9p live migration.
> >
> > I remember that you had some live migration related patches on github.
> >
> > I don't find these patches anymore. Do you still have them somewhere ?
> > What was missing from it ?
> >
> 
> Had to redo the tree, because of the fork changes. Now pushed as
> live-migration branch 
> 
> git://github.com/kvaneesh/qemu.git live-migration
> 
> -aneesh
> 



Re: [Qemu-devel] Virtio 9p live migration patches

2013-04-02 Thread Aneesh Kumar K.V
Benoît Canet  writes:

> Hello Aneesh,
>
> I am interested in working on 9p live migration.
>
> I remember that you had some live migration related patches on github.
>
> I don't find these patches anymore. Do you still have them somewhere ?
> What was missing from it ?
>

Had to redo the tree, because of the fork changes. Now pushed as
live-migration branch 

git://github.com/kvaneesh/qemu.git live-migration

-aneesh




Re: [Qemu-devel] VirtIO 9p mount_tag (bogus?) limit of 32 bytes

2012-03-07 Thread M. Mohan Kumar

Hi Anthony,

When I tried with ldconfig version 2.14.90, ldconfig successfully completed

QEMU version: 1.0.50
Kernel version: 3.3.0-rc6+

Could you please try with recent ldconfig?

On 02/22/2012 09:28 AM, C Anthony Risinger wrote:

On Sat, Feb 18, 2012 at 11:38 AM, Aneesh Kumar K.V
  wrote:

On Thu, 16 Feb 2012 06:20:21 -0600, C Anthony Risinger  wrote:

a) mapped FS security policy (xattrs) causes `ldconfig` to abort()?
root or normal user ...

somehow `ldconfig` gets a duplicate inode while constructing the
cache, even though it already de-duped (confirmed via gdb and grep --
only a single abort() in the source)

b) unable to run `locale-gen` on *any* virtfs configuration? (strace)

[...]
mmap(NULL, 536870912, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
0x7fb3aac63000
mmap(0x7fb3aac63000, 103860, PROT_READ|PROT_WRITE,
MAP_SHARED|MAP_FIXED, 3, 0) = -1 EINVAL (Invalid argument)
cannot map archive header: Invalid argument

c) package files containing device nodes fail (maybe this is expected
...); specifically `/lib/udev/devices/loop0`


Is this with 9p2000.L ?. What is the guest kernel version ?

(not sure if list will accept this ... too much traffic! had to remove myself)

yes this is with 9p2000.L, both host and guests run kernel 3.2.5.  i'm
happy to provide/try additional information/tests if useful.

... is there really no chance of upping the max path?  seems like
config space will be a big constraint, forever :-(

and i'm very much willing to do additional testing for the other
issues as well (i had to revert to qemu-as-root to get passthru
working 100% on rootfs ... ldconfig is kind of critical :-).  are
these known issues?






Re: [Qemu-devel] VirtIO 9p mount_tag (bogus?) limit of 32 bytes

2012-02-23 Thread Aneesh Kumar K.V
On Tue, 21 Feb 2012 21:58:39 -0600, C Anthony Risinger  wrote:
> On Sat, Feb 18, 2012 at 11:38 AM, Aneesh Kumar K.V
>  wrote:
> > On Thu, 16 Feb 2012 06:20:21 -0600, C Anthony Risinger  
> > wrote:
> >> a) mapped FS security policy (xattrs) causes `ldconfig` to abort()?
> >> root or normal user ...
> >>
> >> somehow `ldconfig` gets a duplicate inode while constructing the
> >> cache, even though it already de-duped (confirmed via gdb and grep --
> >> only a single abort() in the source)
> >>

I will try to reproduce this to get more info.


> >> b) unable to run `locale-gen` on *any* virtfs configuration? (strace)
> >>
> >> [...]
> >> mmap(NULL, 536870912, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> >> 0x7fb3aac63000
> >> mmap(0x7fb3aac63000, 103860, PROT_READ|PROT_WRITE,
> >> MAP_SHARED|MAP_FIXED, 3, 0) = -1 EINVAL (Invalid argument)
> >> cannot map archive header: Invalid argument
> >>

For writable mmap to work you need to mount with -o cache=loose. Did
you try locale-gen with that mount option?
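In the guest, such a mount looks like the following (the mount_tag argument is whatever tag the host-side -fsdev/-device configuration exported):

```sh
mount -t 9p -o trans=virtio,version=9p2000.L,cache=loose mount_tag /mnt
```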


> >> c) package files containing device nodes fail (maybe this is expected
> >> ...); specifically `/lib/udev/devices/loop0`
> >>
> >

Does this mean mknod fails for you, or is something else in the package
manager causing the failure?


> > Is this with 9p2000.L ?. What is the guest kernel version ?
> 
> (not sure if list will accept this ... too much traffic! had to remove myself)
> 
> yes this is with 9p2000.L, both host and guests run kernel 3.2.5.  i'm
> happy to provide/try additional information/tests if useful.
> 

One quick thing you could do is to try the latest Linus kernel as the guest
kernel.

> ... is there really no chance of upping the max path?  seems like
> config space will be a big constraint, forever :-(
> 
> and i'm very much willing to do additional testing for the other
> issues as well (i had to revert to qemu-as-root to get passthru
> working 100% on rootfs ... ldconfig is kind of critical :-).  are
> these known issues?
> 

I don't have much suggestion on what could be going wrong there. I will
try to reproduce the ldconfig issue here.

-aneesh




Re: [Qemu-devel] VirtIO 9p mount_tag (bogus?) limit of 32 bytes

2012-02-21 Thread C Anthony Risinger
On Sat, Feb 18, 2012 at 11:38 AM, Aneesh Kumar K.V
 wrote:
> On Thu, 16 Feb 2012 06:20:21 -0600, C Anthony Risinger  
> wrote:
>> a) mapped FS security policy (xattrs) causes `ldconfig` to abort()?
>> root or normal user ...
>>
>> somehow `ldconfig` gets a duplicate inode while constructing the
>> cache, even though it already de-duped (confirmed via gdb and grep --
>> only a single abort() in the source)
>>
>> b) unable to run `locale-gen` on *any* virtfs configuration? (strace)
>>
>> [...]
>> mmap(NULL, 536870912, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
>> 0x7fb3aac63000
>> mmap(0x7fb3aac63000, 103860, PROT_READ|PROT_WRITE,
>> MAP_SHARED|MAP_FIXED, 3, 0) = -1 EINVAL (Invalid argument)
>> cannot map archive header: Invalid argument
>>
>> c) package files containing device nodes fail (maybe this is expected
>> ...); specifically `/lib/udev/devices/loop0`
>>
>
> Is this with 9p2000.L ?. What is the guest kernel version ?

(not sure if list will accept this ... too much traffic! had to remove myself)

yes this is with 9p2000.L, both host and guests run kernel 3.2.5.  i'm
happy to provide/try additional information/tests if useful.

... is there really no chance of upping the max path?  seems like
config space will be a big constraint, forever :-(

and i'm very much willing to do additional testing for the other
issues as well (i had to revert to qemu-as-root to get passthru
working 100% on rootfs ... ldconfig is kind of critical :-).  are
these known issues?

-- 

C Anthony



Re: [Qemu-devel] VirtIO 9p mount_tag (bogus?) limit of 32 bytes

2012-02-18 Thread Aneesh Kumar K.V
On Thu, 16 Feb 2012 06:20:21 -0600, C Anthony Risinger  wrote:
> a) mapped FS security policy (xattrs) causes `ldconfig` to abort()?
> root or normal user ...
> 
> somehow `ldconfig` gets a duplicate inode while constructing the
> cache, even though it already de-duped (confirmed via gdb and grep --
> only a single abort() in the source)
> 
> b) unable to run `locale-gen` on *any* virtfs configuration? (strace)
> 
> [...]
> mmap(NULL, 536870912, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x7fb3aac63000
> mmap(0x7fb3aac63000, 103860, PROT_READ|PROT_WRITE,
> MAP_SHARED|MAP_FIXED, 3, 0) = -1 EINVAL (Invalid argument)
> cannot map archive header: Invalid argument
> 
> c) package files containing device nodes fail (maybe this is expected
> ...); specifically `/lib/udev/devices/loop0`
> 

Is this with 9p2000.L ?. What is the guest kernel version ?

-aneesh




Re: [Qemu-devel] VirtIO 9p mount_tag (bogus?) limit of 32 bytes

2012-02-16 Thread Paul Brook
> i see an error message has been added, which is great (i killed a
> couple hours of $%!@ until i noticed the truncated length was *exactly
> 32* bytes; silent truncation), but it would really be great if this
> restriction could be lifted, or at least mitigated by expanding the
> field some.
> 
> is config space that precious?  

Yes.

> what constrains it (personal curiosity :-)?

Virtio PCI devices map the config space directly onto an ISA IO port range, 
along with the virtio control structure.  On most systems (in particular 
anything PC based) this is a 16-bit address space.  i.e. you have 
substantially less than 64k of virtio config space for the whole machine, and 
potentially only a couple of kbytes for a given PCI bus.

Paul



Re: [Qemu-devel] VirtIO 9p mount_tag (bogus?) limit of 32 bytes

2012-02-16 Thread C Anthony Risinger
if this doesn't thread correctly ... RE:

http://lists.gnu.org/archive/html/qemu-devel/2011-09/msg03694.html

On Thu, 29 Sep 2011 16:45:37 +0100, Daniel P. Berrange wrote:
> On Thu, Sep 29, 2011 at 08:52:14PM +0530, Aneesh Kumar K.V wrote:
>> On Wed, 28 Sep 2011 16:18:07 +0100, Daniel P. Berrange wrote:
>>> On Wed, Sep 28, 2011 at 05:22:06PM +0530, Harsh Bora wrote:
 On 09/22/2011 11:12 PM, Daniel P. Berrange wrote:
> I've noticed that if you use a virtio 9p filesystem with a mount_tag
> property value that is longer than 32 bytes, it gets silently truncated.
>
> In virtio-9p-device.c
>
> len = strlen(conf->tag);
> if (len>  MAX_TAG_LEN) {
> len = MAX_TAG_LEN;

 I think its better to return here with a failure message saying
 mount_tag too long. IIUC, The 32 byte limit has been kept because of
 understanding that mount_tag is a device name in guest (and not a
 path location).
>>>
>>> While I appreciate the fact that 'mount_tag' is not required to be
>>> a path name, so you can allow symbolic naming for exports, in some
>>> use cases it is important / significantly simpler to be able to just
>>> set a path name. I don't think we should mandate symbolic naming,
>>> or path based naming - we should just allow users to choose which
>>> best suits their needs.
>>>
>>> For example, I am building appliances which have multiple 9p devices
>>> exported to the guest. These 9p filesystems are all mounted by the
>>> 'init' process in the initrd. If I'm forced to use symbolic naming
>>> for devices, it means I need to create a custom initrd for every
>>> appliance configuration I have (many many many of them), with the
>>> init process knowing how to map from symbolic names back to the
>>> mount paths I actually want. If I can just use a path for the
>>> mount_tag, then one single initrd can be used for all my appliances.
>>>
>>> So I really would like 'mount_tag' to be significantly larger up to
>>> at least 255 bytes, or more.

32 bytes is very small! barely one UUID sans dashes.  i have exact
same use case ...

>> Will you not be able to have well defined mount tags, that map these
>> directories. I guess we don't want to claim 255 bytes out of config
>> space for mount tag. That is one of the reason it is limited to 32
>> bytes.
>
> The reason for using paths instead of symbolic names in the
> mount tag is because the guest code does not know what paths it
> might be asked to mount at runtime. Symbolic names in the mount
> tags are only useful if the guest can be told ahead of time about
> a finite set of tag -> path mappings, which is not a reasonable
> assumption in general.

i use UUIDs for everything ... ~2x the 32byte limit.  since i also
wanted a generic initramfs handler (Archlinux), i ended up working
around the issue by opening a virtio serial channel and marshaling the
information that way ... kind of hacky, but it does work.

i see an error message has been added, which is great (i killed a
couple hours of $%!@ until i noticed the truncated length was *exactly
32* bytes; silent truncation), but it would really be great if this
restriction could be lifted, or at least mitigated by expanding the
field some.

is config space that precious?  what constrains it (personal curiosity :-)?

PS: off-topic, but since i'm here ... (qemu-discuss is busted --
majordomo replies to everyone even though it's a mailman list --
unless this was fixed in last month)

a) mapped FS security policy (xattrs) causes `ldconfig` to abort()?
root or normal user ...

somehow `ldconfig` gets a duplicate inode while constructing the
cache, even though it already de-duped (confirmed via gdb and grep --
only a single abort() in the source)

b) unable to run `locale-gen` on *any* virtfs configuration? (strace)

[...]
mmap(NULL, 536870912, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
0x7fb3aac63000
mmap(0x7fb3aac63000, 103860, PROT_READ|PROT_WRITE,
MAP_SHARED|MAP_FIXED, 3, 0) = -1 EINVAL (Invalid argument)
cannot map archive header: Invalid argument

c) package files containing device nodes fail (maybe this is expected
...); specifically `/lib/udev/devices/loop0`

-- 

C Anthony



Re: [Qemu-devel] virtio-9p compiling error

2011-11-29 Thread erik . rull
My host system is Debian 4.0
My compiler is gcc (GCC) 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)

Due to other hardware constraints I'm forced to this version as build machine.

If you need more information, just let me know.

Best regards,

Erik





Re: [Qemu-devel] virtio-9p compiling error

2011-11-29 Thread M. Mohan Kumar
Could you please give your host information, such as gcc version, distro
version, etc.?

I could compile in my Fedora 15 x86-64 system using gcc 4.6.0
-- 
Regards,
M. Mohan Kumar

On Tuesday, November 29, 2011 06:27:00 PM erik.r...@rdsoftware.de wrote:
> Hi all,
> 
> when compiling the 1.0-rc4 I get the following error.
> 0.14.0-kvm and 0.15.0-kvm were fine, I found no configure switch
> possibility to disable this code part. I really don't need it.
> 
> Please help here:
> 
>   CClibhw64/9pfs/virtio-9p.o
>   CClibhw64/9pfs/virtio-9p-local.o
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-local.c: In function
> 'local_init': /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-local.c:721:
> warning: unused variable 'stbuf'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-local.c: At top level:
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-local.c:694: warning:
> 'local_ioc_getversion' defined but not used
>   CClibhw64/9pfs/virtio-9p-xattr.o
>   CClibhw64/9pfs/virtio-9p-xattr-user.o
>   CClibhw64/9pfs/virtio-9p-posix-acl.o
>   CClibhw64/9pfs/virtio-9p-coth.o
>   CClibhw64/9pfs/cofs.o
>   CClibhw64/9pfs/codir.o
>   CClibhw64/9pfs/cofile.o
>   CClibhw64/9pfs/coxattr.o
>   CClibhw64/9pfs/virtio-9p-handle.o
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c: In function
> 'handle_update_file_cred':
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:95: warning: implicit
> declaration of function 'openat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:95: warning: nested
> extern declaration of 'openat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:103: warning: implicit
> declaration of function 'fchownat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:103: warning: nested
> extern declaration of 'fchownat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c: In function
> 'handle_lstat': /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:120:
> warning: implicit declaration of function 'fstatat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:120: warning: nested
> extern declaration of 'fstatat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c: In function
> 'handle_readlink':
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:135: warning: implicit
> declaration of function 'readlinkat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:135: warning: nested
> extern declaration of 'readlinkat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c: In function
> 'handle_opendir':
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:167: warning: implicit
> declaration of function 'fdopendir'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:167: warning: nested
> extern declaration of 'fdopendir'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:167: warning: assignment
> makes pointer from integer without a cast
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c: In function
> 'handle_mknod': /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:265:
> warning: implicit declaration of function 'mknodat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:265: warning: nested
> extern declaration of 'mknodat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c: In function
> 'handle_mkdir': /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:283:
> warning: implicit declaration of function 'mkdirat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:283: warning: nested
> extern declaration of 'mkdirat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c: In function
> 'handle_symlink':
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:333: warning: implicit
> declaration of function 'symlinkat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:333: warning: nested
> extern declaration of 'symlinkat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c: In function
> 'handle_link': /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:363:
> warning: implicit declaration of function 'linkat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:363: warning: nested
> extern declaration of 'linkat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c: In function
> 'handle_renameat':
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:570: warning: implicit
> declaration of function 'renameat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:570: warning: nested
> extern declaration of 'renameat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c: In function
> 'handle_unlinkat':
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:593: warning: implicit
> declaration of function 'unlinkat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:593: warning: nested
> extern declaration of 'unlinkat'
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c: In function
> 'handle_ioc_getversion':
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:616: error:
> 'FS_IOC_GETVERSION' undeclared (first use in this function)
> /home/erik/qemu-1.0-rc4/hw/9pfs/virtio-9p-handle.c:616: error: (Each
> undeclared identifier is reported only once
> /hom

Re: [Qemu-devel] VirtIO 9p mount_tag (bogus?) limit of 32 bytes

2011-09-29 Thread Daniel P. Berrange
On Thu, Sep 29, 2011 at 08:52:14PM +0530, Aneesh Kumar K.V wrote:
> On Wed, 28 Sep 2011 16:18:07 +0100, "Daniel P. Berrange" 
>  wrote:
> > On Wed, Sep 28, 2011 at 05:22:06PM +0530, Harsh Bora wrote:
> > > On 09/22/2011 11:12 PM, Daniel P. Berrange wrote:
> > > >I've noticed that if you use a virtio 9p filesystem with a mount_tag
> > > >property value that is longer than 32 bytes, it gets silently truncated.
> > > >
> > > >In virtio-9p-device.c
> > > >
> > > > len = strlen(conf->tag);
> > > > if (len>  MAX_TAG_LEN) {
> > > > len = MAX_TAG_LEN;
> > > 
> > > I think its better to return here with a failure message saying
> > > mount_tag too long. IIUC, The 32 byte limit has been kept because of
> > > understanding that mount_tag is a device name in guest (and not a
> > > path location).
> > 
> > While I appreciate the fact that 'mount_tag' is not required to be
> > a path name, so you can allow symbolic naming for exports, in some
> > use cases it is important / significantly simpler to be able to just
> > set a path name. I don't think we should mandate symbolic naming,
> > or path based naming - we should just allow users to choose which
> > best suits their needs.
> > 
> > For example, I am building appliances which have multiple 9p devices
> > exported to the guest. These 9p filesystems are all mounted by the
> > 'init' process in the initrd. If I'm forced to use symbolic naming
> > for devices, it means I need to create a custom initrd for every
> > appliance configuration I have (many many many of them), with the
> > init process knowing how to map from symbolic names back to the
> > mount paths I actually want. If I can just use a path for the
> > mount_tag, then one single initrd can be used for all my appliances.
> > 
> > So I really would like 'mount_tag' to be significantly larger upto
> > at least 255 bytes, or more.
> > 
> 
> Will you not be able to have well defined mount tags, that map these
> directories. I guess we don't want to claim 255 bytes out of config
> space for mount tag. That is one of the reason it is limited to 32
> bytes.

The reason for using paths instead of symbolic names in the
mount tag is that the guest code does not know what paths it
might be asked to mount at runtime. Symbolic names in the mount
tags are only useful if the guest can be told ahead of time about
a finite set of tag -> path mappings, which is not a reasonable
assumption in general.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [Qemu-devel] VirtIO 9p mount_tag (bogus?) limit of 32 bytes

2011-09-29 Thread Aneesh Kumar K.V
On Wed, 28 Sep 2011 16:18:07 +0100, "Daniel P. Berrange"  
wrote:
> On Wed, Sep 28, 2011 at 05:22:06PM +0530, Harsh Bora wrote:
> > On 09/22/2011 11:12 PM, Daniel P. Berrange wrote:
> > >I've noticed that if you use a virtio 9p filesystem with a mount_tag
> > >property value that is longer than 32 bytes, it gets silently truncated.
> > >
> > >In virtio-9p-device.c
> > >
> > > len = strlen(conf->tag);
> > > if (len>  MAX_TAG_LEN) {
> > > len = MAX_TAG_LEN;
> > 
> > I think its better to return here with a failure message saying
> > mount_tag too long. IIUC, The 32 byte limit has been kept because of
> > understanding that mount_tag is a device name in guest (and not a
> > path location).
> 
> While I appreciate the fact that 'mount_tag' is not required to be
> a path name, so you can allow symbolic naming for exports, in some
> use cases it is important / significantly simpler to be able to just
> set a path name. I don't think we should mandate symbolic naming,
> or path based naming - we should just allow users to choose which
> best suits their needs.
> 
> For example, I am building appliances which have multiple 9p devices
> exported to the guest. These 9p filesystems are all mounted by the
> 'init' process in the initrd. If I'm forced to use symbolic naming
> for devices, it means I need to create a custom initrd for every
> appliance configuration I have (many many many of them), with the
> init process knowing how to map from symbolic names back to the
> mount paths I actually want. If I can just use a path for the
> mount_tag, then one single initrd can be used for all my appliances.
> 
> So I really would like 'mount_tag' to be significantly larger upto
> at least 255 bytes, or more.
> 

Would you not be able to use well-defined mount tags that map to these
directories? I guess we don't want to claim 255 bytes of config
space for the mount tag. That is one of the reasons it is limited to 32
bytes.

-aneesh



Re: [Qemu-devel] VirtIO 9p mount_tag (bogus?) limit of 32 bytes

2011-09-28 Thread Daniel P. Berrange
On Wed, Sep 28, 2011 at 05:22:06PM +0530, Harsh Bora wrote:
> On 09/22/2011 11:12 PM, Daniel P. Berrange wrote:
> >I've noticed that if you use a virtio 9p filesystem with a mount_tag
> >property value that is longer than 32 bytes, it gets silently truncated.
> >
> >In virtio-9p-device.c
> >
> > len = strlen(conf->tag);
> > if (len>  MAX_TAG_LEN) {
> > len = MAX_TAG_LEN;
> 
> I think its better to return here with a failure message saying
> mount_tag too long. IIUC, The 32 byte limit has been kept because of
> understanding that mount_tag is a device name in guest (and not a
> path location).

While I appreciate the fact that 'mount_tag' is not required to be
a path name, so you can allow symbolic naming for exports, in some
use cases it is important / significantly simpler to be able to just
set a path name. I don't think we should mandate symbolic naming,
or path based naming - we should just allow users to choose which
best suits their needs.

For example, I am building appliances which have multiple 9p devices
exported to the guest. These 9p filesystems are all mounted by the
'init' process in the initrd. If I'm forced to use symbolic naming
for devices, it means I need to create a custom initrd for every
appliance configuration I have (many many many of them), with the
init process knowing how to map from symbolic names back to the
mount paths I actually want. If I can just use a path for the
mount_tag, then one single initrd can be used for all my appliances.

So I really would like 'mount_tag' to be significantly larger, up to
at least 255 bytes or more.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [Qemu-devel] VirtIO 9p mount_tag (bogus?) limit of 32 bytes

2011-09-28 Thread Harsh Bora

On 09/22/2011 11:12 PM, Daniel P. Berrange wrote:
> I've noticed that if you use a virtio 9p filesystem with a mount_tag
> property value that is longer than 32 bytes, it gets silently truncated.
>
> In virtio-9p-device.c
>
>     len = strlen(conf->tag);
>     if (len > MAX_TAG_LEN) {
>         len = MAX_TAG_LEN;
>     }

I think it's better to return here with a failure message saying the
mount_tag is too long. IIUC, the 32-byte limit has been kept on the
understanding that mount_tag is a device name in the guest (and not a
path location).

Aneesh, any inputs?

- Harsh

> The header virtio-9p.h contains
>
>    /* from Linux's linux/virtio_9p.h */
>
>    /* The ID for virtio console */
>    #define VIRTIO_ID_9P 9
>    #define MAX_REQ 128
>    #define MAX_TAG_LEN 32
>
> The Linux kernel's virtio_9p.h, however, does not have any MAX_TAG_LEN
> constant and AFAICT the code in Linux's net/9p/trans_virtio.c is not
> placing any 32 byte length restriction on the mount tag.
>
> So is this QEMU length limit legacy code that can be removed?
>
> If using the mount_tag to specify the desired guest mount location path,
> then 32 bytes is really quite limiting - a good 255 bytes is much more
> desirable.
>
> Finally, regardless of what limit is imposed, it would be better to
> return an error if the user attempts to specify an excessively long
> mount tag, rather than truncate it, breaking the guest app relying on
> the full tag.
>
> Regards,
> Daniel





Re: [Qemu-devel] virtio-9p is not working

2010-07-22 Thread Aneesh Kumar K. V
On Wed, 21 Jul 2010 17:27:47 +0900, Dallas Lee  wrote:
> Hi,
> 
> I have trying to use the virtio-9p for my linux in QEMU, but without
> success.
> 
> Here is my option for booting my qemu:
> i386-softmmu/qemu -kernel bzImage -append "console=ttyS0
> video=uvesafb:ywrap,overlay:rgb16,480x800...@60 root=/dev/nfs rw
> nfsroot=10.0.2.2:/root,udp ip=10.0.2.16:eth0:none 5" -net
>  nic,model=virtio -net user -soundhw all -usb -serial
> telnet:localhost:1200,server -vga std -m 512 -L ./pc-bios -bios bios.bin
> -virtfs
> local,path=/home/dallas/nfs,security_model=passthrough,mount_tag=v_tmp
> 
> The virtio network is working, I could mount the nfs through virio net.
> 
> And in the guest linux, I tried to mount v9fs by using following command:
> mount -t 9p -o trans=virtio -o debug=0x v_tmp /mnt
> 
> but unfortunately I got the error:
> mount: mounting v_tmp on /mnt failed: No such device
> 
> And I can't find the v_tmp neither in /sys/devices/virtio-pci/virtio1/ nor
> in /sys/bus/virtio/drivers/9pnet_virtio/virtio1/

You need to have the libattr dev package installed to get virtio-9p
(virtfs) support enabled when QEMU is built.

-aneesh




Re: [Qemu-devel] virtio-9p is not working

2010-07-21 Thread Venkateswararao Jujjuri (JV)
Cam Macdonell wrote:
> On Wed, Jul 21, 2010 at 2:27 AM, Dallas Lee  wrote:
>> Hi,
>> I have trying to use the virtio-9p for my linux in QEMU, but without
>> success.
>> Here is my option for booting my qemu:
>> i386-softmmu/qemu -kernel bzImage -append "console=ttyS0
>> video=uvesafb:ywrap,overlay:rgb16,480x800...@60 root=/dev/nfs rw
>> nfsroot=10.0.2.2:/root,udp ip=10.0.2.16:eth0:none 5" -net
>>  nic,model=virtio -net user -soundhw all -usb -serial
>> telnet:localhost:1200,server -vga std -m 512 -L ./pc-bios -bios bios.bin
>> -virtfs
>> local,path=/home/dallas/nfs,security_model=passthrough,mount_tag=v_tmp
>>
>> The virtio network is working, I could mount the nfs through virio net.
>> And in the guest linux, I tried to mount v9fs by using following command:
>> mount -t 9p -o trans=virtio -o debug=0x v_tmp /mnt
>> but unfortunately I got the error:
>> mount: mounting v_tmp on /mnt failed: No such device
>> And I can't find the v_tmp neither in /sys/devices/virtio-pci/virtio1/ nor
>> in /sys/bus/virtio/drivers/9pnet_virtio/virtio1/
>> And before building the kernel, I enabled the Plan 9 Ressource Sharing
>> Support under File System/Network File System, I also enabled the following
>> configures:
>> PARAVIRT_GUEST:
>> -> Processor type and features
>> -> Paravirtualized guest support
>> LGUEST_GUEST:
>> -> Processor type and features
>> -> Paravirtualized guest support
>> -> Lguest guest support
>> VIRTIO_PCI:
>> -> Virtualization (VIRTUALIZATION [=y])
>> -> PCI driver for virtio devices
>> VIRTIO_BLK:
>> -> Device Drivers
>> -> Block devices (BLK_DEV [=y])
>> -> Virtio block driver
>> VIRTIO_NET:
>> -> Device Drivers
>> -> Network device support (NETDEVICES [=y])
>> -> Virtio network driver
>> Would you please help me to find out the problem why I couldn't mount the
>> v9fs?
>> Thank you very much!
>> BR,
>> Dallas
> 
> Hi Dallas,
> 
> what does 'lspci -vv' in the guest show?  Is there a device for virtio_9p?
> 
> do you have
> 
> CONFIG_NET_9P_VIRTIO=y
> 
> in your kernel's .config?

Can you please check if /proc/filesystems has 9P in it?

Check if you have '9' in any of the following files

cat /sys/devices/virtio-pci/virtio*/device

Also in that directory you will have a "mount_tag" file.

If any of the above are missing, please make sure that you have the
following options enabled in the guest kernel config.

CONFIG_NET_9P=y
CONFIG_NET_9P_VIRTIO=y
CONFIG_NET_9P_DEBUG=y
CONFIG_9P_FS=y


Thanks,
JV




> 
> Cam
> 





Re: [Qemu-devel] virtio-9p is not working

2010-07-21 Thread Cam Macdonell
On Wed, Jul 21, 2010 at 2:27 AM, Dallas Lee  wrote:
> Hi,
> I have trying to use the virtio-9p for my linux in QEMU, but without
> success.
> Here is my option for booting my qemu:
> i386-softmmu/qemu -kernel bzImage -append "console=ttyS0
> video=uvesafb:ywrap,overlay:rgb16,480x800...@60 root=/dev/nfs rw
> nfsroot=10.0.2.2:/root,udp ip=10.0.2.16:eth0:none 5" -net
>  nic,model=virtio -net user -soundhw all -usb -serial
> telnet:localhost:1200,server -vga std -m 512 -L ./pc-bios -bios bios.bin
> -virtfs
> local,path=/home/dallas/nfs,security_model=passthrough,mount_tag=v_tmp
>
> The virtio network is working, I could mount the nfs through virio net.
> And in the guest linux, I tried to mount v9fs by using following command:
> mount -t 9p -o trans=virtio -o debug=0x v_tmp /mnt
> but unfortunately I got the error:
> mount: mounting v_tmp on /mnt failed: No such device
> And I can't find the v_tmp neither in /sys/devices/virtio-pci/virtio1/ nor
> in /sys/bus/virtio/drivers/9pnet_virtio/virtio1/
> And before building the kernel, I enabled the Plan 9 Ressource Sharing
> Support under File System/Network File System, I also enabled the following
> configures:
> PARAVIRT_GUEST:
>         -> Processor type and features
>                 -> Paravirtualized guest support
> LGUEST_GUEST:
>         -> Processor type and features
>                 -> Paravirtualized guest support
>                         -> Lguest guest support
> VIRTIO_PCI:
>         -> Virtualization (VIRTUALIZATION [=y])
>                 -> PCI driver for virtio devices
> VIRTIO_BLK:
>         -> Device Drivers
>                 -> Block devices (BLK_DEV [=y])
>                         -> Virtio block driver
> VIRTIO_NET:
>         -> Device Drivers
>                 -> Network device support (NETDEVICES [=y])
>                         -> Virtio network driver
> Would you please help me to find out the problem why I couldn't mount the
> v9fs?
> Thank you very much!
> BR,
> Dallas

Hi Dallas,

what does 'lspci -vv' in the guest show?  Is there a device for virtio_9p?

do you have

CONFIG_NET_9P_VIRTIO=y

in your kernel's .config?

Cam