[libvirt] [PATCH] virsh: Parse --mode readonly, shareable in virsh attach-disk.

2014-06-16 Thread Dongsheng Yang
We cannot attach a disk with a mode of both readonly and shareable.
Before:
# virsh attach-disk --domain jeos --source attach.img --target vdb --persistent --mode readonly --mode shareable
error: option --mode already seen
# virsh attach-disk --domain jeos --source attach.img --target vdb --persistent --mode readonly --shareable
error: option --mode already seen
# virsh attach-disk --domain jeos --source attach.img --target vdb --persistent --mode readonly,shareable
error: No support for readonly,shareable in command 'attach-disk'
After:
# virsh attach-disk --domain jeos --source attach.img --target vdb --persistent --mode readonly,shareable
Disk attached successfully

Signed-off-by: Dongsheng Yang <yangds.f...@cn.fujitsu.com>
---
 tools/virsh-domain.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 6b3dd70..62a9824 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -515,6 +515,7 @@ cmdAttachDisk(vshControl *ctl, const vshCmd *cmd)
     bool config = vshCommandOptBool(cmd, "config");
     bool live = vshCommandOptBool(cmd, "live");
     bool persistent = vshCommandOptBool(cmd, "persistent");
+    char **modes = NULL, **tmp;
 
     VSH_EXCLUSIVE_OPTIONS_VAR(persistent, current);
 
@@ -553,12 +554,9 @@ cmdAttachDisk(vshControl *ctl, const vshCmd *cmd)
         goto cleanup;
     }
 
-    if (mode) {
-        if (STRNEQ(mode, "readonly") && STRNEQ(mode, "shareable")) {
-            vshError(ctl, _("No support for %s in command 'attach-disk'"),
-                     mode);
-            goto cleanup;
-        }
+    if (mode && !(modes = virStringSplit(mode, ",", 0))) {
+        vshError(ctl, "%s", _("Cannot parse mode string"));
+        goto cleanup;
     }
 
     if (wwn && !virValidateWWN(wwn))
@@ -591,8 +589,19 @@ cmdAttachDisk(vshControl *ctl, const vshCmd *cmd)
     virBufferAsprintf(&buf, "<source %s='%s'/>\n",
                       isFile ? "file" : "dev", source);
     virBufferAsprintf(&buf, "<target dev='%s'/>\n", target);
-    if (mode)
-        virBufferAsprintf(&buf, "<%s/>\n", mode);
+
+    tmp = modes;
+    while (tmp && *tmp) {
+        mode = *tmp;
+        if (STREQ(mode, "readonly") || STREQ(mode, "shareable")) {
+            virBufferAsprintf(&buf, "<%s/>\n", mode);
+        } else {
+            vshError(ctl, _("Unknown mode %s value, expecting "
+                            "'readonly', 'shareable'"), mode);
+            goto cleanup;
+        }
+        tmp++;
+    }
 
     if (serial)
         virBufferAsprintf(&buf, "<serial>%s</serial>\n", serial);
-- 
1.8.3.1
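For readers who want to try the parsing logic outside of virsh, here is a small self-contained sketch of the same idea. It uses strtok_r instead of libvirt's virStringSplit and plain printf instead of the virBuffer API, so it is an illustration of the approach rather than the actual virsh code:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Split a comma-separated --mode string, validate each token, and emit the
 * corresponding XML child elements, mirroring what the patch does. */
static int append_mode_elements(const char *mode)
{
    char *copy = strdup(mode);
    char *saveptr = NULL;
    char *tok;
    int ret = -1;

    if (!copy)
        return -1;

    for (tok = strtok_r(copy, ",", &saveptr); tok;
         tok = strtok_r(NULL, ",", &saveptr)) {
        if (strcmp(tok, "readonly") != 0 && strcmp(tok, "shareable") != 0) {
            fprintf(stderr,
                    "Unknown mode %s value, expecting 'readonly', 'shareable'\n",
                    tok);
            goto cleanup;
        }
        printf("<%s/>\n", tok);
    }
    ret = 0;

 cleanup:
    free(copy);
    return ret;
}

int main(void)
{
    /* prints <readonly/> and <shareable/> on separate lines */
    return append_mode_elements("readonly,shareable") == 0 ? 0 : 1;
}
```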



[libvirt] [PATCH 2/3] qemu_driver: clamp value for setting sched cpu_shares with --config.

2014-05-15 Thread Dongsheng Yang
As shown in 'man virsh' about schedinfo:

   Note: The cpu_shares parameter has a valid value range of 0-262144;
   Negative values are wrapped to positive, and larger values are capped
   at the maximum. Therefore, -1 is a useful shorthand for 262144.
   On the Linux kernel, the values 0 and 1 are automatically converted
   to a minimal value of 2.
It works well with --live, but not with --config.

Example:
# virsh schedinfo rhel7-ful --set cpu_shares=0 --config
Scheduler  : posix
cpu_shares : 0
vcpu_period: 0
vcpu_quota : 0
emulator_period: 0
emulator_quota : 0
cpu_shares is 0 rather than the expected 2.

What's worse, when we start the domain again, cpu_shares gets the
default value of 1024.

The reason is that when the flag is --live, we write the value into
cgroup/cpu.shares, and the kernel then converts it into the range
[2, 262144]. When the flag is --config, we store the value into vmdef
directly, and 0 means no setting for cpu_shares, so when we start the
VM again libvirt uses the default value (1024).

This patch clamps the cpu_shares value when the flag is --config, so
that virsh schedinfo reports the correct setting and the cgroup gets
the correct value after the next boot of the VM.

Signed-off-by: Dongsheng Yang <yangds.f...@cn.fujitsu.com>
---
 src/qemu/qemu_driver.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 52ca47c..7648865 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -112,6 +112,8 @@ VIR_LOG_INIT("qemu.qemu_driver");
 #define QEMU_SCHED_MAX_PERIOD           1000000LL
 #define QEMU_SCHED_MIN_QUOTA               1000LL
 #define QEMU_SCHED_MAX_QUOTA  18446744073709551LL
+#define QEMU_SCHED_MIN_SHARES                 2LL
+#define QEMU_SCHED_MAX_SHARES            262144LL
 
 #if HAVE_LINUX_KVM_H
 # include <linux/kvm.h>
@@ -9063,7 +9065,9 @@ qemuDomainSetSchedulerParametersFlags(virDomainPtr dom,
             }
 
             if (flags & VIR_DOMAIN_AFFECT_CONFIG) {
-                vmdef->cputune.shares = value_ul;
+                vmdef->cputune.shares = CLAMP(value_ul,
+                                              QEMU_SCHED_MIN_SHARES,
+                                              QEMU_SCHED_MAX_SHARES);
                 vmdef->cputune.sharesSpecified = true;
             }
 
-- 
1.8.2.1
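To make the intended behaviour concrete, the following self-contained sketch (not libvirt code; the constants simply restate the [2, 262144] range from the commit message) shows how clamping treats the values discussed above, including -1 wrapped to an unsigned value:

```c
#include <stdio.h>
#include <stddef.h>

#define SHARES_MIN      2ULL
#define SHARES_MAX 262144ULL

/* plain conditional clamp, equivalent in effect to CLAMP(value, min, max) */
static unsigned long long clamp_shares(unsigned long long v)
{
    if (v < SHARES_MIN)
        return SHARES_MIN;
    if (v > SHARES_MAX)
        return SHARES_MAX;
    return v;
}

int main(void)
{
    unsigned long long inputs[] = {
        0, 1, 1024, 262145,
        (unsigned long long)-1   /* -1 wraps to 2^64 - 1 */
    };
    size_t i;

    for (i = 0; i < sizeof(inputs) / sizeof(inputs[0]); i++)
        printf("%llu -> %llu\n", inputs[i], clamp_shares(inputs[i]));
    /* 0 -> 2, 1 -> 2, 1024 -> 1024, 262145 -> 262144,
     * 18446744073709551615 -> 262144 */
    return 0;
}
```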



[libvirt] [PATCH 3/3] {qemu|test}_driver: Use CLAMP macro to clamp value.

2014-05-15 Thread Dongsheng Yang
As we have the CLAMP macro in util.h, we should use it to replace
the open-coded implementations of it.

Signed-off-by: Dongsheng Yang <yangds.f...@cn.fujitsu.com>
---
 src/qemu/qemu_driver.c | 8 ++------
 src/test/test_driver.c | 7 ++-----
 2 files changed, 4 insertions(+), 11 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 7648865..96f325d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4547,9 +4547,7 @@ qemuDomainGetVcpuPinInfo(virDomainPtr dom,
     if (maxcpu > hostcpus)
         maxcpu = hostcpus;
 
-    /* Clamp to actual number of vcpus */
-    if (ncpumaps > targetDef->vcpus)
-        ncpumaps = targetDef->vcpus;
+    ncpumaps = CLAMP(ncpumaps, ncpumaps, targetDef->vcpus);
 
     if (ncpumaps < 1) {
         goto cleanup;
@@ -4857,9 +4855,7 @@ qemuDomainGetVcpus(virDomainPtr dom,
     if (maxcpu > hostcpus)
         maxcpu = hostcpus;
 
-    /* Clamp to actual number of vcpus */
-    if (maxinfo > priv->nvcpupids)
-        maxinfo = priv->nvcpupids;
+    maxinfo = CLAMP(maxinfo, maxinfo, priv->nvcpupids);
 
     if (maxinfo >= 1) {
         if (info != NULL) {
diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index 37756e7..024f63d 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -2773,12 +2773,9 @@ static int testDomainGetVcpus(virDomainPtr domain,
 
     hostcpus = VIR_NODEINFO_MAXCPUS(privconn->nodeInfo);
     maxcpu = maplen * 8;
-    if (maxcpu > hostcpus)
-        maxcpu = hostcpus;
 
-    /* Clamp to actual number of vcpus */
-    if (maxinfo > privdom->def->vcpus)
-        maxinfo = privdom->def->vcpus;
+    maxcpu = CLAMP(maxcpu, maxcpu, hostcpus);
+    maxinfo = CLAMP(maxinfo, maxinfo, privdom->def->vcpus);
 
     /* Populate virVcpuInfo structures */
     if (info != NULL) {
-- 
1.8.2.1
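As an aside (and as the review below also points out), the lower bound passed to CLAMP here is the variable itself, so that comparison can never change anything and the expression reduces to MIN. A quick standalone check, using the statement-expression CLAMP from patch 1/3:

```c
#include <stdio.h>

/* the statement-expression CLAMP from patch 1/3 (gcc/clang extension) */
#define CLAMP(v, min, max) ({             \
    __typeof__(v) _v = (v);               \
    _v = _v < (min) ? (min) : _v;         \
    _v > (max) ? (max) : _v; })

#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
    int maxinfo = 8, nvcpupids = 4;

    /* the lower-bound check compares maxinfo with itself, so only the
     * upper bound ever takes effect */
    printf("CLAMP = %d, MIN = %d\n",
           CLAMP(maxinfo, maxinfo, nvcpupids),
           MIN(maxinfo, nvcpupids));   /* both print 4 */
    return 0;
}
```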



[libvirt] [PATCH 1/3] virutil: Introduce a new macro named CLAMP.

2014-05-15 Thread Dongsheng Yang
This patch introduces a new macro that returns a value clamped to a
given range.

Signed-off-by: Dongsheng Yang <yangds.f...@cn.fujitsu.com>
---
 src/util/virutil.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/src/util/virutil.h b/src/util/virutil.h
index 2bb74e2..e8536d8 100644
--- a/src/util/virutil.h
+++ b/src/util/virutil.h
@@ -37,6 +37,12 @@
 # ifndef MAX
 #  define MAX(a, b) ((a) > (b) ? (a) : (b))
 # endif
+# ifndef CLAMP
+#  define CLAMP(v, min, max) ({ \
+    typeof(v) _v = v;           \
+    _v = _v < min ? min : _v;   \
+    _v > max ? max : _v; })
+# endif
 
 int virSetBlocking(int fd, bool blocking) ATTRIBUTE_RETURN_CHECK;
 int virSetNonBlock(int fd) ATTRIBUTE_RETURN_CHECK;
-- 
1.8.2.1



Re: [libvirt] [PATCH 3/3] {qemu|test}_driver: Use CLAMP macro to clamp value.

2014-05-15 Thread Dongsheng Yang

On 05/15/2014 08:37 PM, Martin Kletzander wrote:

On Thu, May 15, 2014 at 06:39:51PM +0900, Dongsheng Yang wrote:

As we have the CLAMP macro in util.h, we should use it to replace
the open-coded implementations of it.

Signed-off-by: Dongsheng Yang <yangds.f...@cn.fujitsu.com>
---
src/qemu/qemu_driver.c | 8 ++------
src/test/test_driver.c | 7 ++-----
2 files changed, 4 insertions(+), 11 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 7648865..96f325d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4547,9 +4547,7 @@ qemuDomainGetVcpuPinInfo(virDomainPtr dom,
     if (maxcpu > hostcpus)
         maxcpu = hostcpus;
 
-    /* Clamp to actual number of vcpus */
-    if (ncpumaps > targetDef->vcpus)
-        ncpumaps = targetDef->vcpus;
+    ncpumaps = CLAMP(ncpumaps, ncpumaps, targetDef->vcpus);



Useless comparison to self; I'd suggest using MIN() instead.  This
applies to the patch.


Oops, I did not realize that. I just found the keyword "Clamp" in the
comment and replaced the code with CLAMP.

So it is not really related to CLAMP, and I will send a v2 of this
patch independently.


Thanx.


Martin


     if (ncpumaps < 1) {
         goto cleanup;
@@ -4857,9 +4855,7 @@ qemuDomainGetVcpus(virDomainPtr dom,
     if (maxcpu > hostcpus)
         maxcpu = hostcpus;
 
-    /* Clamp to actual number of vcpus */
-    if (maxinfo > priv->nvcpupids)
-        maxinfo = priv->nvcpupids;
+    maxinfo = CLAMP(maxinfo, maxinfo, priv->nvcpupids);
 
     if (maxinfo >= 1) {
         if (info != NULL) {
diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index 37756e7..024f63d 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -2773,12 +2773,9 @@ static int testDomainGetVcpus(virDomainPtr domain,
 
     hostcpus = VIR_NODEINFO_MAXCPUS(privconn->nodeInfo);
     maxcpu = maplen * 8;
-    if (maxcpu > hostcpus)
-        maxcpu = hostcpus;
 
-    /* Clamp to actual number of vcpus */
-    if (maxinfo > privdom->def->vcpus)
-        maxinfo = privdom->def->vcpus;
+    maxcpu = CLAMP(maxcpu, maxcpu, hostcpus);
+    maxinfo = CLAMP(maxinfo, maxinfo, privdom->def->vcpus);
 
     /* Populate virVcpuInfo structures */
     if (info != NULL) {
--
1.8.2.1



Re: [libvirt] [PATCH 2/3] qemu_driver: clamp value for setting sched cpu_shares with --config.

2014-05-15 Thread Dongsheng Yang

On 05/15/2014 08:56 PM, Martin Kletzander wrote:

On Thu, May 15, 2014 at 06:39:50PM +0900, Dongsheng Yang wrote:

As shown in 'man virsh' about schedinfo:

   Note: The cpu_shares parameter has a valid value range of 0-262144;
   Negative values are wrapped to positive, and larger values are capped
   at the maximum. Therefore, -1 is a useful shorthand for 262144.
   On the Linux kernel, the values 0 and 1 are automatically converted
   to a minimal value of 2.
It works well with --live, but not with --config.

Example:
# virsh schedinfo rhel7-ful --set cpu_shares=0 --config
Scheduler  : posix
cpu_shares : 0
vcpu_period: 0
vcpu_quota : 0
emulator_period: 0
emulator_quota : 0
cpu_shares is 0 rather than the expected 2.

What's worse, when we start the domain again, cpu_shares gets the
default value of 1024.

The reason is that when the flag is --live, we write the value into
cgroup/cpu.shares, and the kernel then converts it into the range
[2, 262144]. When the flag is --config, we store the value into vmdef
directly, and 0 means no setting for cpu_shares, so when we start the
VM again libvirt uses the default value (1024).

This patch clamps the cpu_shares value when the flag is --config, so
that virsh schedinfo reports the correct setting and the cgroup gets
the correct value after the next boot of the VM.



I was under the impression that this was meant to be fixed by commit
97814d8a, what's the difference to that?


Commit 97814d8a makes the output of virsh schedinfo --live show the
real value in the cgroup.

This patch does another thing; it is about --config. When we use
--live to set cpu_shares, as shown in man virsh, we depend on the
conversion done by the cgroup. But when we set it with --config, we
do not actually write the value into the cgroup, so the value is not
converted to the expected value described in the manpage.

And as I described above, when we set cpu_shares=0 with --config, we
get 1024 on the next boot.




Martin


Signed-off-by: Dongsheng Yang <yangds.f...@cn.fujitsu.com>
---
src/qemu/qemu_driver.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 52ca47c..7648865 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -112,6 +112,8 @@ VIR_LOG_INIT("qemu.qemu_driver");
 #define QEMU_SCHED_MAX_PERIOD           1000000LL
 #define QEMU_SCHED_MIN_QUOTA               1000LL
 #define QEMU_SCHED_MAX_QUOTA  18446744073709551LL
+#define QEMU_SCHED_MIN_SHARES                 2LL
+#define QEMU_SCHED_MAX_SHARES            262144LL
 
 #if HAVE_LINUX_KVM_H
 # include <linux/kvm.h>
@@ -9063,7 +9065,9 @@ qemuDomainSetSchedulerParametersFlags(virDomainPtr dom,
             }
 
             if (flags & VIR_DOMAIN_AFFECT_CONFIG) {
-                vmdef->cputune.shares = value_ul;
+                vmdef->cputune.shares = CLAMP(value_ul,
+                                              QEMU_SCHED_MIN_SHARES,
+                                              QEMU_SCHED_MAX_SHARES);
                 vmdef->cputune.sharesSpecified = true;
             }

--
1.8.2.1



Re: [libvirt] [PATCH 1/3] virutil: Introduce a new macro named CLAMP.

2014-05-15 Thread Dongsheng Yang

On 05/15/2014 08:53 PM, Martin Kletzander wrote:

On Thu, May 15, 2014 at 06:39:49PM +0900, Dongsheng Yang wrote:

This patch introduces a new macro that returns a value clamped to a
given range.

Signed-off-by: Dongsheng Yang <yangds.f...@cn.fujitsu.com>
---
src/util/virutil.h | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/src/util/virutil.h b/src/util/virutil.h
index 2bb74e2..e8536d8 100644
--- a/src/util/virutil.h
+++ b/src/util/virutil.h
@@ -37,6 +37,12 @@
 # ifndef MAX
 #  define MAX(a, b) ((a) > (b) ? (a) : (b))
 # endif
+# ifndef CLAMP
+#  define CLAMP(v, min, max) ({ \
+    typeof(v) _v = v;           \
+    _v = _v < min ? min : _v;   \
+    _v > max ? max : _v; })
+# endif



It's just my subjective impression, but wouldn't the following be a
bit more readable and less obfuscated?

#define CLAMP(v, min, max) MAX(MIN(v, max), min)


Neat! I think it works. I will use it in v2.

Thanx


Martin.
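For what it's worth, a quick standalone check of the suggested composition (illustrative only; for a proper range with min <= max it gives the same results as the statement-expression version):

```c
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define MAX(a, b) ((a) > (b) ? (a) : (b))
#define CLAMP(v, min, max) MAX(MIN((v), (max)), (min))

int main(void)
{
    /* values below, inside and above the range [10, 20] */
    printf("%d %d %d\n",
           CLAMP(5, 10, 20),     /* -> 10 */
           CLAMP(15, 10, 20),    /* -> 15 */
           CLAMP(25, 10, 20));   /* -> 20 */
    return 0;
}
```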




[libvirt] [PATCH] cleanup: clamp max info with MIN().

2014-05-15 Thread Dongsheng Yang
Rather than using an open-coded implementation, this patch uses the
MIN macro to clamp information to the allowed maximum.

Signed-off-by: Dongsheng Yang <yangds.f...@cn.fujitsu.com>
---
 src/qemu/qemu_driver.c | 18 ++++++------------
 src/test/test_driver.c | 10 +++-------
 2 files changed, 9 insertions(+), 19 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 52ca47c..b3782dd 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4542,12 +4542,9 @@ qemuDomainGetVcpuPinInfo(virDomainPtr dom,
         goto cleanup;
 
     maxcpu = maplen * 8;
-    if (maxcpu > hostcpus)
-        maxcpu = hostcpus;
 
-    /* Clamp to actual number of vcpus */
-    if (ncpumaps > targetDef->vcpus)
-        ncpumaps = targetDef->vcpus;
+    maxcpu = MIN(maxcpu, hostcpus);
+    ncpumaps = MIN(ncpumaps, targetDef->vcpus);
 
     if (ncpumaps < 1) {
         goto cleanup;
@@ -4786,8 +4783,8 @@ qemuDomainGetEmulatorPinInfo(virDomainPtr dom,
         goto cleanup;
 
     maxcpu = maplen * 8;
-    if (maxcpu > hostcpus)
-        maxcpu = hostcpus;
+
+    maxcpu = MIN(maxcpu, hostcpus);
 
     /* initialize cpumaps */
     memset(cpumaps, 0xff, maplen);
@@ -4852,12 +4849,9 @@ qemuDomainGetVcpus(virDomainPtr dom,
         goto cleanup;
 
     maxcpu = maplen * 8;
-    if (maxcpu > hostcpus)
-        maxcpu = hostcpus;
 
-    /* Clamp to actual number of vcpus */
-    if (maxinfo > priv->nvcpupids)
-        maxinfo = priv->nvcpupids;
+    maxcpu = MIN(maxcpu, hostcpus);
+    maxinfo = MIN(maxinfo, priv->nvcpupids);
 
     if (maxinfo >= 1) {
         if (info != NULL) {
diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index 37756e7..4e591f2 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -2773,12 +2773,9 @@ static int testDomainGetVcpus(virDomainPtr domain,
 
     hostcpus = VIR_NODEINFO_MAXCPUS(privconn->nodeInfo);
     maxcpu = maplen * 8;
-    if (maxcpu > hostcpus)
-        maxcpu = hostcpus;
 
-    /* Clamp to actual number of vcpus */
-    if (maxinfo > privdom->def->vcpus)
-        maxinfo = privdom->def->vcpus;
+    maxcpu = MIN(maxcpu, hostcpus);
+    maxinfo = MIN(maxinfo, privdom->def->vcpus);
 
     /* Populate virVcpuInfo structures */
     if (info != NULL) {
@@ -2858,8 +2855,7 @@ static int testDomainPinVcpu(virDomainPtr domain,
     privmaplen = VIR_CPU_MAPLEN(hostcpus);
 
     maxcpu = maplen * 8;
-    if (maxcpu > hostcpus)
-        maxcpu = hostcpus;
+    maxcpu = MIN(maxcpu, hostcpus);
 
     privcpumap = VIR_GET_CPUMAP(privdomdata->cpumaps, privmaplen, vcpu);
     memset(privcpumap, 0, privmaplen);
-- 
1.8.2.1



Re: [libvirt] [PATCH 2/3] qemu_driver: clamp value for setting sched cpu_shares with --config.

2014-05-15 Thread Dongsheng Yang

On 05/15/2014 09:11 PM, Ján Tomko wrote:

On 05/15/2014 11:39 AM, Dongsheng Yang wrote:

As shown in 'man virsh' about schedinfo:

Note: The cpu_shares parameter has a valid value range of 0-262144;
   Negative values are wrapped to positive, and larger values are capped
   at the maximum. Therefore, -1 is a useful shorthand for 262144.
   On the Linux kernel, the values 0 and 1 are automatically converted
   to a minimal value of 2.
It works well with --live, but not with --config.

Example:
# virsh schedinfo rhel7-ful --set cpu_shares=0 --config
Scheduler  : posix
cpu_shares : 0
vcpu_period: 0
vcpu_quota : 0
emulator_period: 0
emulator_quota : 0
cpu_shares is 0 rather than the expected 2.

What's worse, when we start the domain again, cpu_shares gets the
default value of 1024.

The reason is that when the flag is --live, we write the value into
cgroup/cpu.shares, and the kernel then converts it into the range
[2, 262144]. When the flag is --config, we store the value into vmdef
directly, and 0 means no setting for cpu_shares, so when we start the
VM again libvirt uses the default value (1024).

commit bdffab0d5c52d31bd71422b0b69665efb6650953
Author: Ján Tomko <jto...@redhat.com>
CommitDate: 2014-03-26 10:10:02 +0100

 Treat zero cpu shares as a valid value

changed this behavior.

After this commit, if you set cpu_shares to 0 via schedinfo --config, the 0
will be written in the XML and used on domain startup.


Indeed, it works.


To use the default value, the shares element needs to be removed from the
XML before restarting the domain; this cannot currently be done via 'virsh
schedinfo'.


And yes, my patch here attempts to do similar work in 'virsh schedinfo'.


Also, it makes other invalid input, such as -1 and 262144+1, produce
the expected output as described in the manpage.


Jan






Re: [libvirt] [PATCH 1/3] virutil: Introduce a new macro named CLAMP.

2014-05-15 Thread Dongsheng Yang

On 05/15/2014 09:14 PM, Eric Blake wrote:

On 05/15/2014 03:39 AM, Dongsheng Yang wrote:

This patch introduces a new macro that returns a value clamped to a
given range.

[when sending a series, it's nice to include a cover letter with 'git
send-email --cover-letter' to generate the 0/N message that all other
messages in the series reply to]


Okey, Thanx :)



Signed-off-by: Dongsheng Yang <yangds.f...@cn.fujitsu.com>
---
  src/util/virutil.h | 6 ++++++
  1 file changed, 6 insertions(+)

diff --git a/src/util/virutil.h b/src/util/virutil.h
index 2bb74e2..e8536d8 100644
--- a/src/util/virutil.h
+++ b/src/util/virutil.h
@@ -37,6 +37,12 @@
  # ifndef MAX
  #  define MAX(a, b) ((a) > (b) ? (a) : (b))
  # endif
+# ifndef CLAMP
+#  define CLAMP(v, min, max) ({ \

This is gcc-specific.  I'd rather avoid it, and stick to portable C99
code, if possible - which means doing this as an inline function rather
than a macro.


I prefer an inline function too, but I found MAX and MIN are
implemented as macros, so I appended CLAMP to them.

Okay, I will use an inline function in the next version if this patch
is acceptable.


Thanx
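One possible shape of the inline-function variant suggested here, sketched as plain C99 (the function name and the unsigned long long type are assumptions for illustration, not libvirt's actual API):

```c
#include <stdio.h>

/* portable C99 alternative to the gcc statement-expression macro */
static inline unsigned long long
clamp_ull(unsigned long long val, unsigned long long min,
          unsigned long long max)
{
    if (val < min)
        return min;
    if (val > max)
        return max;
    return val;
}

int main(void)
{
    printf("%llu\n", clamp_ull(0, 2, 262144));        /* 2 */
    printf("%llu\n", clamp_ull(300000, 2, 262144));   /* 262144 */
    return 0;
}
```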






Re: [libvirt] [PATCH 2/3] qemu_driver: clamp value for setting sched cpu_shares with --config.

2014-05-15 Thread Dongsheng Yang

On 05/15/2014 09:17 PM, Eric Blake wrote:

On 05/15/2014 03:39 AM, Dongsheng Yang wrote:

As shown in 'man virsh' about schedinfo:

Note: The cpu_shares parameter has a valid value range of 0-262144;

This note documents historical kernel limits; if the kernel has changed,
this may be out of date.



...



+++ b/src/qemu/qemu_driver.c
@@ -112,6 +112,8 @@ VIR_LOG_INIT("qemu.qemu_driver");
  #define QEMU_SCHED_MAX_PERIOD           1000000LL
  #define QEMU_SCHED_MIN_QUOTA               1000LL
  #define QEMU_SCHED_MAX_QUOTA  18446744073709551LL
+#define QEMU_SCHED_MIN_SHARES                 2LL
+#define QEMU_SCHED_MAX_SHARES            262144LL

I'm a bit reluctant to add these numbers - if the kernel ever changes
its range again (which HAS happened for some cgroup tunables), then we
are needlessly preventing use of the newer range.


Yes, I hate these numbers too. But the range is defined in
kernel/sched/sched.h:

#define MIN_SHARES  (1UL <<  1)
#define MAX_SHARES  (1UL << 18)

and used in the scheduler:

shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));

So we cannot access it from outside the kernel.

As there were already some numbers for period and quota here, I added
the shares numbers as well.






Re: [libvirt] [PATCH 2/3] qemu_driver: clamp value for setting sched cpu_shares with --config.

2014-05-15 Thread Dongsheng Yang

On 05/15/2014 09:20 PM, Eric Blake wrote:

On 05/15/2014 05:18 AM, Dongsheng Yang wrote:


Also, it can make the other invalid input, such as -1 and 262144+1,

Auto-wrapping -1 to maximum makes sense.

Actually, -1 is not the minimum minus 1, because the range is [2, 262144].
The reason -1 is converted to the maximum is that -1 is treated as an
unsigned long, i.e. 2^64 - 1; clamping that to the range gives 262144.

But making other out-of-bounds
values, like 262144+1, be auto-clamped sounds risky, especially if the
kernel ever changes clamping boundaries.


There are two approaches to this issue, I think:
(1) Use SCHED_RANGE_CHECK() for cpu_shares, the same as for period and
quota; if the value is out of the range, raise an error.
(2) Keep the behavior for out-of-bounds values in --config the same as
it is in --live. --live depends on the conversion done by the cgroup,
so we should follow the cgroup's behavior in --config too, right? That
means 262144+1 should be clamped to the maximum.

About the concern you mentioned that the boundaries may be changed in
the future: as I explained in another mail in this thread, the range is
defined in a private zone of the scheduler, so I cannot come up with a
good idea to solve that. :(
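A minimal sketch of what approach (1) could look like, i.e. a plain range check that rejects out-of-range values instead of clamping them (illustrative only, not the actual SCHED_RANGE_CHECK macro):

```c
#include <stdio.h>

/* approach (1): report an error for out-of-range cpu_shares values
 * instead of silently clamping them */
static int check_shares(unsigned long long value_ul)
{
    if (value_ul < 2ULL || value_ul > 262144ULL) {
        fprintf(stderr, "value of cpu_shares is out of range [2, 262144]\n");
        return -1;
    }
    return 0;
}

int main(void)
{
    /* 262145 is rejected, so the program exits with status 1 */
    return check_shares(262145ULL) == 0 ? 0 : 1;
}
```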



[libvirt] [bug] State of "in shutdown".

2013-09-17 Thread Dongsheng Yang

Hi experts:
I got the following description from man virsh:
```
STATES

shutdown
    The domain is in the process of shutting down, i.e. the guest
    operating system has been notified and should be in the process of
    stopping its operations gracefully.
```
But I found that the state of the VM is "running" even when it is in the
process of shutting down.


e.g:
# virsh list
 Id    Name                           State
----------------------------------------------------
 41    virt-tests-vm1                 running

# virsh shutdown virt-tests-vm1; echo $?; virsh domstate virt-tests-vm1 --reason
Domain virt-tests-vm1 is being shutdown

0
running (booted)
# virsh domstate virt-tests-vm1 --reason
running (booted)
... ...
# virsh domstate virt-tests-vm1 --reason
shut off (shutdown)

As shown above, I cannot catch a "shutdown" (or "in shutdown") state for
the VM.

Do you think it is a bug, or something not good enough?

Wish your help!!
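For anyone trying to reproduce this, a small polling sketch against the public libvirt C API can catch the brief "in shutdown" window (it assumes a domain named virt-tests-vm1 on qemu:///system; build with -lvirt):

```c
#include <stdio.h>
#include <unistd.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
    virDomainPtr dom = conn ? virDomainLookupByName(conn, "virt-tests-vm1") : NULL;
    int state, reason;

    if (!dom) {
        fprintf(stderr, "failed to open connection or find domain\n");
        if (conn)
            virConnectClose(conn);
        return 1;
    }

    /* Poll until the domain reaches shut off; the VIR_DOMAIN_SHUTDOWN
     * ("in shutdown") state is only visible for a short window. */
    while (virDomainGetState(dom, &state, &reason, 0) == 0 &&
           state != VIR_DOMAIN_SHUTOFF) {
        if (state == VIR_DOMAIN_SHUTDOWN)
            printf("caught 'in shutdown' (reason %d)\n", reason);
        usleep(100 * 1000);   /* 100 ms */
    }

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}
```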



Re: [libvirt] [bug] State of "in shutdown".

2013-09-17 Thread Dongsheng Yang
Well, I wrote a script to catch the state of the VM going from running
to shut off.

OK, there is a short window before shutoff in which the VM is "in
shutdown".

So, please ignore my report. :)

On 09/17/2013 02:02 PM, Dongsheng Yang wrote:

Hi experts:
I got the following description from man virsh:
```
STATES

shutdown
    The domain is in the process of shutting down, i.e. the guest
    operating system has been notified and should be in the process of
    stopping its operations gracefully.
```
But I found that the state of the VM is "running" even when it is in the
process of shutting down.


e.g:
# virsh list
 Id    Name                           State
----------------------------------------------------
 41    virt-tests-vm1                 running

# virsh shutdown virt-tests-vm1; echo $?; virsh domstate virt-tests-vm1 --reason
Domain virt-tests-vm1 is being shutdown

0
running (booted)
# virsh domstate virt-tests-vm1 --reason
running (booted)
... ...
# virsh domstate virt-tests-vm1 --reason
shut off (shutdown)

As shown above, I cannot catch a "shutdown" (or "in shutdown") state for
the VM.

Do you think it is a bug, or something not good enough?

Wish your help!!





Re: [libvirt] [PATCH] virsh: move command maxvcpus from domain group to host group.

2013-09-15 Thread Dongsheng Yang

ping

On 09/09/2013 10:30 AM, Osier Yang wrote:

On 09/09/13 10:14, yangdongsheng wrote:

Since the maxvcpus command queries the maximum number of virtual
CPUs supported for a guest VM on this connection, it should be
in virsh-host.c, not virsh-domain.c.

Signed-off-by: yangdongsheng <yangds.f...@cn.fujitsu.com>
---
  tools/virsh-domain.c |   44 --------------------------------------------
  tools/virsh-host.c   |   44 ++++++++++++++++++++++++++++++++++++++++++++
  2 files changed, 44 insertions(+), 44 deletions(-)


ACK.


