Re: [Xen-devel] [PATCH v1 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler

2015-05-10 Thread Jan Beulich
>>> On 11.05.15 at 00:04,  wrote:
> On Fri, May 8, 2015 at 2:49 AM, Jan Beulich  wrote:
>> >>> On 07.05.15 at 19:05,  wrote:
>> > @@ -1110,6 +1113,67 @@ rt_dom_cntl(
>> >  }
>> >  spin_unlock_irqrestore(&prv->lock, flags);
>> >  break;
>> > +case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
>> > +op->u.rtds.nr_vcpus = 0;
>> > +spin_lock_irqsave(&prv->lock, flags);
>> > +list_for_each( iter, &sdom->vcpu )
>> > +vcpu_index++;
>> > +spin_unlock_irqrestore(&prv->lock, flags);
>> > +op->u.rtds.nr_vcpus = vcpu_index;
>>
>> Does dropping of the lock here and re-acquiring it below really work
>> race free?
>>
> 
> Here, the lock is used in the same way as the ones in the two cases
> above (XEN_DOMCTL_SCHEDOP_get/putinfo). So I think if race freedom
> is guaranteed in those two cases, the lock in this case works race free
> as well.

No - the difference is that in the {get,put}info cases it is being
acquired just once each.

>> > +vcpu_index = 0;
>> > +spin_lock_irqsave(&prv->lock, flags);
>> > +list_for_each( iter, &sdom->vcpu )
>> > +{
>> > +struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu,
>> sdom_elem);
>> > +
>> > +local_sched[vcpu_index].budget = svc->budget / MICROSECS(1);
>> > +local_sched[vcpu_index].period = svc->period / MICROSECS(1);
>> > +local_sched[vcpu_index].index = vcpu_index;
>>
>> What use is this index to the caller? I think you rather want to tell it
>> the vCPU number. That's especially also taking the use case of a
>> get/set pair into account - unless you tell me that these indexes can
>> never change, the indexes passed back into the set operation would
>> risk to have become stale by the time the hypervisor processes the
>> request.
>>
> 
> I don't quite understand what "stale" means here. The array
> (local_sched[ ]) and the array (in libxc) that local_sched[ ] is copied
> to are both used for this get operation only. When users set per-vcpu
> parameters, there are also dedicated arrays for that set operation.

Just clarify this for me (and maybe yourself): Is the vCPU number
<-> vcpu_index mapping invariable for the lifetime of a domain?
If it isn't, the vCPU for a particular vcpu_index during a "get"
may be different from that for the same vcpu_index during a
subsequent "set".
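(For illustration, a sketch of what reporting the stable vCPU number instead
of a list position might look like; the field names follow the quoted
sched_rt.c code and are my assumption, not part of the submitted patch:)

    list_for_each( iter, &sdom->vcpu )
    {
        struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu, sdom_elem);

        local_sched[vcpu_index].budget = svc->budget / MICROSECS(1);
        local_sched[vcpu_index].period = svc->period / MICROSECS(1);
        /* Report the vCPU number, which stays meaningful across a later "set". */
        local_sched[vcpu_index].index = svc->vcpu->vcpu_id;
        vcpu_index++;
    }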

>> > +if( local_sched == NULL )
>> > +{
>> > +return -ENOMEM;
>> > +}
>> > +copy_from_guest(local_sched, op->u.rtds.vcpus,
>> op->u.rtds.nr_vcpus);
>> > +
>> > +for( i = 0; i < op->u.rtds.nr_vcpus; i++ )
>> > +{
>> > +vcpu_index = 0;
>> > +spin_lock_irqsave(&prv->lock, flags);
>> > +list_for_each( iter, &sdom->vcpu )
>> > +{
>> > +struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu,
>> sdom_elem);
>> > +if ( local_sched[i].index == vcpu_index )
>> > +{
>> > +if ( local_sched[i].period <= 0 ||
>> local_sched[i].budget <= 0 )
>> > + return -EINVAL;
>> > +
>> > +svc->period = MICROSECS(local_sched[i].period);
>> > +svc->budget = MICROSECS(local_sched[i].budget);
>> > +break;
>> > +}
>> > +vcpu_index++;
>> > +}
>> > +spin_unlock_irqrestore(&prv->lock, flags);
>> > +}
>>
>> Considering a maximum size guest, these two nested loops could
>> require a couple of million iterations. That's too much without any
>> preemption checks in the middle.
>>
> 
> The section protected by the lock is only the "list_for_each" loop, whose
> running time is limited by the number of vcpus of a domain (32 at most).

Since when is 32 the limit on the number of vCPU-s in a domain?

> If this does cause problems, I think adding a "hypercall_preempt_check()"
> in the outer "for" loop may help. Is that right?

Yes.
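
(For illustration, a minimal sketch of such a preemption check in the outer
loop; how the continuation is arranged is an assumption here, not part of the
patch under review:)

    for ( i = 0; i < op->u.rtds.nr_vcpus; i++ )
    {
        if ( i && hypercall_preempt_check() )
        {
            /* Assumption: continue via -ERESTART or a hypercall continuation. */
            rc = -ERESTART;
            break;
        }

        spin_lock_irqsave(&prv->lock, flags);
        /* ... locked list_for_each() walk over sdom->vcpu, as in the patch ... */
        spin_unlock_irqrestore(&prv->lock, flags);
    }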

>> > --- a/xen/common/schedule.c
>> > +++ b/xen/common/schedule.c
>> > @@ -1093,7 +1093,9 @@ long sched_adjust(struct domain *d, struct
>> xen_domctl_scheduler_op *op)
>> >
>> >  if ( (op->sched_id != DOM2OP(d)->sched_id) ||
>> >   ((op->cmd != XEN_DOMCTL_SCHEDOP_putinfo) &&
>> > -  (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo)) )
>> > +  (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo) &&
>> > +  (op->cmd != XEN_DOMCTL_SCHEDOP_putvcpuinfo) &&
>> > +  (op->cmd != XEN_DOMCTL_SCHEDOP_getvcpuinfo)) )
>>
>> Imo this should become a switch now.
>>
> 
> Do you mean "switch ( op->cmd )" ? I'm afraid that would make it look more
> complicated.

This may be a matter of taste to a certain degree, but I personally
don't think a series of four almost identical comparisons reads any
better than its switch() replacement. But it being a style issue, the
ultimate decision is with George as the maintainer anyway.
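
(For reference, one possible shape of that switch(), a sketch only:)

    switch ( op->cmd )
    {
    case XEN_DOMCTL_SCHEDOP_putinfo:
    case XEN_DOMCTL_SCHEDOP_getinfo:
    case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
        break;
    default:
        return -EINVAL;
    }

    if ( op->sched_id != DOM2OP(d)->sched_id )
        return -EINVAL;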

Jan


Re: [Xen-devel] [PATCH] IOMMU/x86: avoid pages without GFN in page table creation/updating

2015-05-10 Thread Zhang, Yang Z
Jan Beulich wrote on 2015-05-11:
 On 11.05.15 at 04:53,  wrote:
>> Jan Beulich wrote on 2015-05-07:
>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>> @@ -59,10 +59,17 @@ int arch_iommu_populate_page_table(struc
>>>  if ( has_hvm_container_domain(d) ||
>>>  (page->u.inuse.type_info & PGT_type_mask) ==
>>> PGT_writable_page )
>>>  {
>>> -BUG_ON(SHARED_M2P(mfn_to_gmfn(d, page_to_mfn(page))));
>>> -rc = hd->platform_ops->map_page(
>>> -d, mfn_to_gmfn(d, page_to_mfn(page)), page_to_mfn(page),
>>> -IOMMUF_readable|IOMMUF_writable);
>>> +unsigned long mfn = page_to_mfn(page);
>>> +unsigned long gfn = mfn_to_gmfn(d, mfn);
>>> +
>>> +if ( gfn != INVALID_MFN )
>>> +{
>>> +ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
>>> +BUG_ON(SHARED_M2P(gfn));
>> 
>> It seems ASSERT() is unnecessary. BUG_ON() is enough to cover it.
> 
> The two checks test completely different things, so I don't see how the
> BUG_ON() would help with GFNs that are out of bounds yet not INVALID_MFN.
> Please clarify what you mean here.

You are right. I misread the code as BUG_ON(!SHARED_M2P(gfn)).

For Intel VT-d part:

Acked-by: Yang Zhang 

> 
> Jan


Best regards,
Yang





Re: [Xen-devel] [V3] x86/cpuidle: get accurate C0 value with xenpm tool

2015-05-10 Thread Han, Huaitong

On Fri, 2015-05-08 at 11:11 +0100, Jan Beulich wrote:
> >>> On 08.05.15 at 11:40,  wrote:
> > All comments have been addressed; only the changelog is partly written.
> 
> Certainly not. There are still hard tabs in the patch, and there was
> still a pointless initializer that I had pointed out before. I didn't look
> further.
> 
Yes, there are still hard tabs and pointless initializers to be modified.
Sorry for my mistakes. But which tabs should be used when I modify a
file like "xen/arch/x86/cpu/mwait-idle.c" that uses hard tabs
instead of soft tabs? The source code in the files inside the directory
"xen/arch/x86/cpu/" all uses hard tabs.
Thanks

> Jan
> 
> > On Fri, 2015-05-08 at 09:35 +0100, Jan Beulich wrote:
> >> >>> On 08.05.15 at 10:11,  wrote:
> >> > When checking the ACPI function of C-status, after a 100 second sleep,
> >> > the sampling value of C0 status from the xenpm tool decreases.
> >> > Because C0=NOW()-C1-C2-C3-C4, when the NOW() value is taken during idle
> >> > time, NOW() is bigger than the last C-status update time, and the C0 value
> >> > is also bigger than the true value. If the margin of the second error cannot
> >> > make up for the margin of the first error, the value of C0 would decrease.
> >> > 
> >> > Signed-off-by: Huaitong Han 
> >> 
> >> Please address all comments on the previous iteration before
> >> re-submitting.
> >> 
> >> Jan
> >> 
> 
> 
> 



Re: [Xen-devel] [PATCH v4 2/2] iommu: add rmrr Xen command line option for misc rmrrs

2015-05-10 Thread Jan Beulich
>>> On 08.05.15 at 22:51,  wrote:
> On Thu, May 07, 2015 at 06:47:12AM +0100, Jan Beulich wrote:
>> >>> Elena Ufimtseva  05/06/15 6:50 PM >>>
>> >In your proposed second valid case, segment 0001 should be assumed for the
>> >second device as its own segment is not specified?
>> 
>> Exactly. Or (less optimal for the user, but easier to implement) require the
>> segment number to be repeated when non-zero (i.e. only adjust the
>> documentation part here).
> 
> You mean that when segment is non-zero, just specify it explicitly even
> when duplicating seg number from first sbdf?
> like this:
> rmrr==0001:bb:dd.f,0001:bb:dd.f

Yes.

> We can use a more general approach when adding extra RMRRs.
> I cannot find in the Intel spec whether one RMRR region can be shared
> among multiple devices from multiple segments.

This may not be explicitly stated, but is necessarily implied from
the data organization in the ACPI tables: Just check where
segment numbers get conveyed, and compare with where the
BDFs get specified.

Jan




Re: [Xen-devel] [PATCH Remus v2 00/10] Remus support for Migration-v2

2015-05-10 Thread Hongyang Yang

On 05/09/2015 02:12 AM, Andrew Cooper wrote:

On 08/05/15 10:33, Yang Hongyang wrote:

This patchset implements Remus support for Migration v2, but without
memory compression.

The series can be found on github:
https://github.com/macrosheep/xen/tree/Remus-newmig-v2

PATCH 1-7: Some refactor and prepare work.
PATCH 8-9: The main Remus loop implement.
PATCH 10: Fix for Remus.


I have reviewed the other half of the series now, and have some design
to discuss.  (I was hoping to get this email sent in reply to v1, but
never mind).  This largely concerns patch 7 and onwards.

Migration v2 has substantially more structure than legacy did.  One
issue so far is that your series relies on using more than one END
record, which is not supported in the spec.  (Of course - the spec is
fine to be extended in forward-compatible ways.)


I use the END record as an indication of the end of the stream. I saw
that you added a checkpoint record in your v2 series of Remus related patches;
I can use that record to indicate the end of the checkpointed stream, but
I think the record would better be called end-of-checkpoint?



To fix the qemu layering issues I need to have some explicit negotiation
between libxc and libxl about sharing ownership of the input fd.  This
is going to require a new record in the format, and I am currently drafting
a patch or two which should help in this regard.

My view for the eventual stream looks something like this (time going
downwards):

libxc writes:   libxl writes:

Image Header
Domain Header
start_of_stream()
start_of_checkpoint()




ctx->save.callbacks->suspend()
this callback suspends the primary guest and then calls the Remus devices'
postsuspend callbacks to buffer the network packets etc.




end_of_checkpoint()
Checkpoint record


ctx->save.callbacks->postcopy()
this callback should not be omitted; it does some necessary work before resuming
the primary (such as calling the Remus devices' preresume callbacks to ensure the
disk data is consistent) and then resumes the primary guest. I think this
callback should be renamed to ctx->save.callbacks->resume().


 ctx->save.callbacks->checkpoint()
 libxl qemu record


Maybe we should add another callback to send the qemu record instead of
using the checkpoint callback. We could call it ctx->save.callbacks->save_qemu().
Then in the checkpoint callback, we only call the Remus devices' commit callbacks
(which release the network buffer etc...) and then decide whether we need to
do another checkpoint or quit the checkpointed stream.
With Remus, the checkpoint callback only waits for 200ms (can be specified by -i)
and then returns.
With COLO, the checkpoint callback will ask the COLO proxy whether we need to do a
checkpoint, and returns when the COLO proxy module indicates a checkpoint is needed.


 ...
 libxl end-of-checkpoint record
 ctx->save.callbacks->checkpoint() returns
start_of_checkpoint()


ctx->save.callbacks->suspend()



end_of_checkpoint()
Checkpoint record
etc...

This will eventually allow both libxc and libxl to send checkpoint data
(and by the looks of it, remove the need for postcopy()).  With this
libxc/remus work it is fine to use XG_LIBXL_HVM_COMPAT to cover the
current qemu situation, but I would prefer not to be also retrofitting
libxc checkpoint records when doing the libxl/migv2 work.

Does this look plausible for Remus (and eventually COLO) support?


With comments above, I would suggest the save flow as below:

libxc writes:   libxl writes:

live migration:
Image Header
Domain Header
start_of_stream()
start_of_checkpoint()

ctx->save.callbacks->suspend()

end_of_checkpoint()
if ( checkpointed )
  End of Checkpoint record
  /* If restore side receives this record, input fd should be handed to libxl */
else
  goto end

loop of checkpointed stream:
ctx->save.callbacks->resume()
ctx->save.callbacks->save_qemu()
libxl qemu record
...
libxl end-of-checkpoint record
/* If restore side receives this record, input fd should be handed to libxc */
ctx->save.callbacks->save_qemu() returns
ctx->save.callbacks->checkpoint()
start_of_checkpoint()
ctx->save.callbacks->suspend()

end_of_checkpoint()
End of Checkpoint record
goto 'loop of checkpointed stream'

end:
END record
/* If restore side receives this record, input fd should be handed to libxl */


In order to keep it simple, we can keep the current
ctx->save.callbacks->checkpoint() as it is, which does the save_qemu thing,
calls the Remus devices' commit callbacks and then decides whether we need a
checkpoint. We can also combine ctx->save.callbacks->resume() with
ctx->save.callbacks->checkpoint(); with only one checkpoint() callback, we do
the following things:
 - Call Remus devices preresume callbacks
 - Resume the primary
 - Save qemu records
 - Call Remus devices commit callbacks
 - Decide whether we need a checkpoint

Re: [Xen-devel] [RFC][PATCH 04/13] tools/libxl: detect and avoid conflicts with RDM

2015-05-10 Thread Chen, Tiejun

On 2015/5/8 23:13, Wei Liu wrote:

On Fri, May 08, 2015 at 09:24:56AM +0800, Chen, Tiejun wrote:

Campbell, Jackson, Wei and Stefano,

Any consideration?

I can follow up on Jan's idea, but I need you guys to make sure I'm going to do
this properly.



Look at my earlier reply.


Thanks for your reply. And lets discuss this directly in your reply.

Tiejun



1. This function seems to have a bug.
2. The caller should have allocated enough memory so that this function
doesn't have to.

Looks like we don't need it, at least not in its current form.

Wei.





Re: [Xen-devel] FreeBSD Dom0 IOMMU issues (resent)

2015-05-10 Thread Chen, Tiejun

On 2015/5/8 13:21, Michael Dexter wrote:

On 5/7/15 7:59 PM, Chen, Tiejun wrote:

Are you running IGD passthrough with guest OS?


Only as far as the PVH Xen kernel is passing through all hardware to
Dom0. Roger can elaborate as needed.


What is your CPU? BDW? HSW? And what FreeBSD/Linux version are you running
on the Dom0 side? I just think you could directly try the latest upstream
Linux as Dom0, because I see so many messages indicating a GPU hang.


My output has dmesg information for this purpose. Most output is from:

CPU: Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz (2790.93-MHz K8-class CPU)
Lenovo ThinkPad T420

The second system is:

CPU: Intel(R) Core(TM) i5-2400S CPU @ 2.50GHz (2500.02-MHz K8-class CPU)
Intel DQ67EP Mini-ITX board

Both systems are recent FreeBSD 11 snapshots, the Dom0:


Please dump something by performing `cat /proc/cpuinfo`.



FreeBSD 11.0-CURRENT #0 r282110: Mon Apr 27 20:49:15 UTC 2015

The DomU:

FreeBSD 11.0-CURRENT #0 r280862: Mon Mar 30 20:15:11 UTC 2015

I have not tried GNU/Linux but could if you like. Is there a preferred
distribution for use with Xen?


So maybe you need to upgrade the DRM/i915 driver first.


Any and all help from Intel at keeping the FreeBSD graphics drivers up
to date is appreciated. FreeBSD 11 HEAD represents the latest drivers. I
believe I have tried this under NVidia with similar results. I can run a


Do you mean you can see "... GPU hang" as well?

I just want to narrow down our scope because in any case the GPU shouldn't
constantly hang.


Thanks
Tiejun


test if you like.

All the best,

Michael Dexter







Re: [Xen-devel] [RFC][PATCH 03/13] tools/libxc: Expose new hypercall xc_reserved_device_memory_map

2015-05-10 Thread Chen, Tiejun

On 2015/5/8 21:07, Wei Liu wrote:

On Fri, Apr 10, 2015 at 05:21:54PM +0800, Tiejun Chen wrote:

We will introduce the hypercall wrapper xc_reserved_device_memory_map
to libxc. This helps us get RDM entry info according to
different parameters. If flag == PCI_DEV_RDM_ALL, all entries
should be exposed. Otherwise we just expose the RDM entry specific to
a SBDF.

Signed-off-by: Tiejun Chen 


This patch contains a wrapper to the new hypercall. If the HV guys are
happy with this I'm fine with it too.



Who should be pinged in this case?

Thanks
Tiejun



Re: [Xen-devel] [RFC][PATCH 01/13] tools: introduce some new parameters to set rdm policy

2015-05-10 Thread Chen, Tiejun

On 2015/5/8 21:04, Wei Liu wrote:

Sorry for the late review.



Really thanks for taking your time :)


On Fri, Apr 10, 2015 at 05:21:52PM +0800, Tiejun Chen wrote:

This patch introduces user configurable parameters to specify RDM
resources and corresponding policies,

Global RDM parameter:
 rdm = [ 'host, reserve=force/try' ]
Per-device RDM parameter:
 pci = [ 'sbdf, rdm_reserve=force/try' ]

The global RDM parameter allows the user to specify reserved regions explicitly,
e.g. using 'host' to include all reserved regions reported on this platform,
which is useful for handling the hotplug scenario. In the future this parameter
may be further extended to allow specifying arbitrary regions, e.g. even
those belonging to another platform, as a preparation for live migration
with passthrough devices.

The 'force/try' policy decides how to handle a conflict when reserving RDM
regions in pfn space. If a conflict exists, 'force' means an immediate error
so the VM will be killed, while 'try' allows moving forward with a warning
message thrown out.

Default per-device RDM policy is 'force', while default global RDM policy
is 'try'. When both policies are specified on a given region, 'force' is
always preferred.

Signed-off-by: Tiejun Chen 
---
  docs/man/xl.cfg.pod.5   | 44 +
  docs/misc/vtd.txt   | 34 
  tools/libxl/libxl_create.c  |  5 +++
  tools/libxl/libxl_types.idl | 18 +++
  tools/libxl/libxlu_pci.c| 78 +
  tools/libxl/libxlutil.h |  4 +++
  tools/libxl/xl_cmdimpl.c| 21 +++-
  7 files changed, 203 insertions(+), 1 deletion(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 408653f..9ed3055 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -583,6 +583,36 @@ assigned slave device.

  =back

+=item B
+


Shouldn't this be "TYPE,RDM_RESERVE_STRING" according to your commit
message? If the only available config is just one string, you probably
don't need a list for this?


Yes, based on that design we don't need a list. So

=item B




+(HVM/x86 only) Specifies the information about Reserved Device Memory (RDM),
+which is necessary to enable robust device passthrough usage. One example of
+RDM is reported through ACPI Reserved Memory Region Reporting (RMRR)
+structure on x86 platform.
+Each B has the form C<["TYPE",KEY=VALUE,KEY=VALUE,...> where:
+


RDM_CHECK_STRING?


And here should be corrected like this,

B has the form ...




+=over 4
+
+=item B<"TYPE">
+
+Currently we just have one type. 'host' means all reserved device memory on
+this platform should be reserved in this VM's pfn space.
+


What are other possible types? If there is only one type then we can


Currently we just have one type, and it looks like that design doesn't make this
clear.



simply ignore the type?


I just think we may introduce something else specific to live migration 
in the future... But I'm really not sure right now.





+=item B
+
+Possible Bs are:
+
+=over 4
+
+=item B
+
+Conflict may be detected when reserving reserved device memory in gfn space.
+'force' means an unsolved conflict leads to immediate VM destroy, while


Do you mean "immediate VM crash"?


Yes. So I guess I should replace this.




+'try' allows VM moving forward with a warning message thrown out. 'try'
+is default.


Can you please use double quotes for "force", "try" etc.


Sure. Just note we'd like to use "strict"/"relaxed" to replace
"force"/"try" from the next revision, according to Jan's suggestion.





+
+Note this may be overrided by another sub item, rdm_reserve, in pci device.
+
  =item B

  Specifies the host PCI devices to passthrough to this guest. Each 
B
@@ -645,6 +675,20 @@ dom0 without confirmation.  Please use with care.
  D0-D3hot power management states for the PCI device. False (0) by
  default.

+=item B
+
+(HVM/x86 only) Specifies the information about Reserved Device Memory (RDM),
+which is necessary to enable robust device passthrough usage. One example of
+RDM is reported through ACPI Reserved Memory Region Reporting (RMRR)
+structure on x86 platform.
+
+Conflict may be detected when reserving reserved device memory in gfn space.
+'force' means an unsolved conflict leads to immediate VM destroy, while
+'try' allows VM moving forward with a warning message thrown out. 'force'
+is default.
+
+Note this would override another global item, rdm = [''].
+


Note this would override global B option.


Fixed.




  =back

  =back
diff --git a/docs/misc/vtd.txt b/docs/misc/vtd.txt
index 9af0e99..d7434d6 100644
--- a/docs/misc/vtd.txt
+++ b/docs/misc/vtd.txt
@@ -111,6 +111,40 @@ in the config file:
  To override for a specific device:
pci = [ '01:00.0,msitranslate=0', '03:00.0' ]

+RDM, 'reserved device memory', for PCI Device Passthrough
+-
+
+There are some devices the BIOS controls, for e.g. USB devices to perform
+PS2 emulation. The regions of memory u

Re: [Xen-devel] [edk2] Windows 8 64bit Guest BSOD when using OVMF to boot and *install* from CDROM(when configured with more than 4G memory)

2015-05-10 Thread Wei Liu
Please don't top post.

On Mon, May 11, 2015 at 01:52:38AM +, lidonglin wrote:
> Hi all:
>   Did you encounter this BSOD problem? I tried
>   git://xenbits.xen.org/ovmf.git and other ovmf repos. I found that
>   installing a win8 64bit OS (memory > 4G) with OVMF on the xen
>   hypervisor always fails with a BSOD. I also tried the same
>   thing on kvm, and this problem does not occur there.

We (the open source team at Citrix) don't test Windows 8. We test Linux
with more than 4GB of memory in our test system.

If you find a problem with a Linux setup we can help you with that.  We
would like to make Windows 8 work, but we have neither the time to debug
nor the license required to run Windows 8.

If you can ask more specific questions we can also try to answer them.

Wei.



[Xen-devel] [linux-3.14 test] 54831: regressions - trouble: broken/fail/pass

2015-05-10 Thread osstest service user
flight 54831 linux-3.14 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/54831/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-i386 15 guest-localmigrate.2 fail in 53933 REGR. vs. 
36608

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ovmf-amd64  3 host-install(3)broken pass in 53933
 test-amd64-i386-rumpuserxen-i386 15 
rumpuserxen-demo-xenstorels/xenstorels.repeat fail in 53933 pass in 54831

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt  6 xen-boot  fail REGR. vs. 36608
 test-armhf-armhf-xl-sedf-pin  6 xen-boot  fail REGR. vs. 36608
 test-armhf-armhf-xl-sedf  6 xen-boot  fail REGR. vs. 36608
 test-armhf-armhf-xl-arndale 4 host-ping-check-native fail in 53933 blocked in 
36608
 test-amd64-i386-freebsd10-amd64 13 guest-localmigrate fail in 53933 like 
53148-bisect
 test-amd64-i386-freebsd10-i386 13 guest-localmigratefail like 53128-bisect
 test-armhf-armhf-xl-multivcpu  6 xen-boot   fail like 53493-bisect
 test-armhf-armhf-xl-credit2   6 xen-bootfail like 53717-bisect
 test-armhf-armhf-xl   6 xen-bootfail like 53729-bisect
 test-amd64-i386-freebsd10-amd64 15 guest-localmigrate.2 fail like 53812-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-xsm   11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail never 
pass
 test-amd64-amd64-libvirt-xsm 11 guest-start  fail   never pass
 test-amd64-amd64-xl-xsm  11 guest-start  fail   never pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail never 
pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-libvirt-xsm  11 guest-start  fail   never pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 9 debian-hvm-install fail never 
pass
 test-armhf-armhf-xl-arndale   6 xen-boot fail   never pass
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 9 debian-hvm-install fail never 
pass
 test-armhf-armhf-libvirt-xsm  6 xen-boot fail   never pass
 test-armhf-armhf-xl-xsm   6 xen-boot fail   never pass
 test-armhf-armhf-xl-cubietruck  6 xen-boot fail never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 16 guest-stop fail never pass
 test-amd64-i386-xl-qemut-winxpsp3 16 guest-stopfail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 16 guest-stop   fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 16 guest-stop   fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 16 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3 16 guest-stopfail never pass

version targeted for testing:
 linux                99e64c4a808c55cb173b69dc21d28a4420eb22c5
baseline version:
 linux                8a5f782c33c04ea5c9b3ca6fb32d6039e2e5c0c9


308 people touched revisions under test,
not listing them all


jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  pass
 build-i386-rumpuserxen   pass
 test-amd64-amd64-xl  pass
 test-armhf

Re: [Xen-devel] [PATCH] IOMMU/x86: avoid pages without GFN in page table creation/updating

2015-05-10 Thread Zhang, Yang Z
Jan Beulich wrote on 2015-05-07:
> Handing INVALID_GFN to functions like hd->platform_ops->map_page() just
> can't do any good, and the ioreq server code results in such pages being on 
> the
> list of ones owned by a guest.
> 
> While - as suggested by Tim - we should use get_gfn()/put_gfn() there to
> eliminate races, we really can't due to holding the domain's page alloc lock.
> Ultimately arch_iommu_populate_page_table() may need to be switched to be
> GFN based. Here is what Tim said in this regard:
> "Ideally this loop would be iterating over all gfns in the p2m rather  than 
> over
> all owned MFNs.  As long as needs_iommu gets set first,  such a loop could
> safely be paused and restarted without worrying  about concurrent updates.
> The code sould even stay in this file,  though exposing an iterator from the
> p2m code would be a lot more  efficient."
> 
> Original by Andrew Cooper , using further
> suggestions from Tim Deegan .
> 
> Reported-by: Sander Eikelenboom 
> Signed-off-by: Jan Beulich 
> Tested-by: Sander Eikelenboom 
> Acked-by: Tim Deegan 
> 
> --- a/xen/drivers/passthrough/amd/iommu_map.c
> +++ b/xen/drivers/passthrough/amd/iommu_map.c
> @@ -557,6 +557,10 @@ static int update_paging_mode(struct dom
>  unsigned long old_root_mfn;
>  struct hvm_iommu *hd = domain_hvm_iommu(d);
> 
> +if ( gfn == INVALID_MFN )
> +return -EADDRNOTAVAIL;
> +ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
> +
>  level = hd->arch.paging_mode;
>  old_root = hd->arch.root_table;
>  offset = gfn >> (PTE_PER_TABLE_SHIFT * (level - 1));
> @@ -729,12 +733,15 @@ int amd_iommu_unmap_page(struct domain *
>   * we might need a deeper page table for lager gfn now */
>  if ( is_hvm_domain(d) )
>  {
> -if ( update_paging_mode(d, gfn) )
> +int rc = update_paging_mode(d, gfn);
> +
> +if ( rc )
>  {
>  spin_unlock(&hd->arch.mapping_lock);
>  AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n",
> gfn);
> -domain_crash(d);
> -return -EFAULT;
> +if ( rc != -EADDRNOTAVAIL )
> +domain_crash(d);
> +return rc;
>  }
>  }
> 
> --- a/xen/drivers/passthrough/vtd/iommu.h
> +++ b/xen/drivers/passthrough/vtd/iommu.h
> @@ -482,7 +482,6 @@ struct qinval_entry {
>  #define VTD_PAGE_TABLE_LEVEL_3  3
>  #define VTD_PAGE_TABLE_LEVEL_4  4
> 
> -#define DEFAULT_DOMAIN_ADDRESS_WIDTH 48
>  #define MAX_IOMMU_REGS 0xc0
> 
>  extern struct list_head acpi_drhd_units;
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -59,10 +59,17 @@ int arch_iommu_populate_page_table(struc
>  if ( has_hvm_container_domain(d) ||
>  (page->u.inuse.type_info & PGT_type_mask) ==
> PGT_writable_page )
>  {
> -BUG_ON(SHARED_M2P(mfn_to_gmfn(d, page_to_mfn(page))));
> -rc = hd->platform_ops->map_page(
> -d, mfn_to_gmfn(d, page_to_mfn(page)), page_to_mfn(page),
> -IOMMUF_readable|IOMMUF_writable);
> +unsigned long mfn = page_to_mfn(page);
> +unsigned long gfn = mfn_to_gmfn(d, mfn);
> +
> +if ( gfn != INVALID_MFN )
> +{
> +ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
> +BUG_ON(SHARED_M2P(gfn));

It seems ASSERT() is unnecessary. BUG_ON() is enough to cover it.

> +rc = hd->platform_ops->map_page(d, gfn, mfn,
> +IOMMUF_readable |
> +IOMMUF_writable);
> +}
>  if ( rc )
>  {
>  page_list_add(page, &d->page_list);
> --- a/xen/include/asm-x86/hvm/iommu.h
> +++ b/xen/include/asm-x86/hvm/iommu.h
> @@ -46,6 +46,8 @@ struct g2m_ioport {
>  unsigned int np;
>  };
> 
> +#define DEFAULT_DOMAIN_ADDRESS_WIDTH 48
> +
>  struct arch_hvm_iommu
>  {
>  u64 pgd_maddr; /* io page directory machine
> address */
> --- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
> +++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
> @@ -464,8 +464,6 @@
>  #define IOMMU_CONTROL_DISABLED   0
>  #define IOMMU_CONTROL_ENABLED 1
> 
> -#define DEFAULT_DOMAIN_ADDRESS_WIDTH 48
> -
>  /* interrupt remapping table */
>  #define INT_REMAP_ENTRY_REMAPEN_MASK0x0001
>  #define INT_REMAP_ENTRY_REMAPEN_SHIFT   0
>

Best regards,
Yang





Re: [Xen-devel] [PATCH 0/3] Misc patches to aid migration v2 Remus support

2015-05-10 Thread Hongyang Yang



On 05/08/2015 09:15 PM, Andrew Cooper wrote:

On 08/05/15 13:54, Andrew Cooper wrote:

See individual patches for details.


Git tree available at:
  git://xenbits.xen.org/people/andrewcoop/xen.git remus-migv2-v1

Yang: Please feel free to consume some or all of these patches as
appropriate into your series, rather than both of us attempting to
maintain different versions of the same series to support remus with
migration v2.


Sure, thank you for the effort on Remus support :)



~Andrew



Andrew Cooper (3):
   [RFC] x86/hvm: Permit HVM_PARAM_IDENT_PT to be set more than once
   tools/libxc: Properly quote macro parameters
   libxc/migrationv2: Split {start,end}_of_stream() to make checkpoint
 variants

  tools/libxc/include/xenctrl.h|8 +++
  tools/libxc/xc_sr_common.h   |   25 +++--
  tools/libxc/xc_sr_save.c |8 +++
  tools/libxc/xc_sr_save_x86_hvm.c |   46 --
  tools/libxc/xc_sr_save_x86_pv.c  |   46 +-
  xen/arch/x86/hvm/hvm.c   |   12 +-
  6 files changed, 95 insertions(+), 50 deletions(-)



.



--
Thanks,
Yang.



Re: [Xen-devel] [PATCH 3/3] libxc/migrationv2: Split {start, end}_of_stream() to make checkpoint variants

2015-05-10 Thread Hongyang Yang



On 05/08/2015 09:55 PM, Andrew Cooper wrote:

On 08/05/15 14:50, Ian Campbell wrote:

On Fri, 2015-05-08 at 14:37 +0100, Andrew Cooper wrote:

Does Remus currently function if the sending toolstack suddenly
disappears out of the mix?

I would assume so, that's its entire purpose...


But the signal to resume the domain properly involves setting
last_checkpoint and wandering the receive loop once more?


Please see my previous explanation about Remus checkpoint and failover,
Message-ID: <5550147b.1010...@cn.fujitsu.com>

If the sending toolstack suddenly disappears out of the mix, the secondary
will recover from the last processed records, which include the full
toolstack record.



/me goes and checks the code more carefully.

~Andrew
.



--
Thanks,
Yang.



[Xen-devel] [XenRT] Cache-Aware Real-Time Xen: Partition shared cache for guest domains in Xen via page coloring

2015-05-10 Thread Meng Xu
Hi Dario and George,

I'm working on incorporating the shared-cache interference effect into
the schedulers (both in the VMM and in the VM) to improve the schedulability
of the whole system. To be specific, I'm doing the following things:
(1) Investigating the shared-cache interference on the real-time
performance (such as worst-case execution time) of applications in VMs
that share the last level cache (LLC) on Xen;
(2) Eliminating such shared-cache interference by statically
partitioning the shared cache among VMs via a page-coloring mechanism, and
evaluating the effectiveness of this mechanism (a rough sketch of the
coloring arithmetic follows after this list);
(3) Better utilizing the shared cache by dynamically
increasing/decreasing/changing the cache partitions of a VM online;
(4) Incorporating the cache effect into the scheduling algorithm of the
VMM to improve the schedulability of the whole system.
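
(Referenced from item (2) above: a rough sketch of the arithmetic behind page
coloring. The cache geometry numbers are assumptions for illustration, not the
evaluation platform used in the slides.)

    #include <stdio.h>

    #define PAGE_SIZE   4096u
    #define LLC_SIZE    (8u * 1024 * 1024)   /* assumed 8 MiB last-level cache */
    #define LLC_WAYS    16u                  /* assumed 16-way set-associative */

    /* colors = (cache size / associativity) / page size */
    #define NUM_COLORS  ((LLC_SIZE / LLC_WAYS) / PAGE_SIZE)

    /* Pages of the same color compete for the same group of LLC sets, so
     * giving each VM pages of disjoint colors partitions the shared cache. */
    static unsigned int page_color(unsigned long pfn)
    {
        return pfn % NUM_COLORS;
    }

    int main(void)
    {
        printf("available colors: %u\n", NUM_COLORS);
        printf("color of pfn 0x12345: %u\n", page_color(0x12345UL));
        return 0;
    }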

Right now, I have almost finished the first two steps and have some
preliminary results on the real-time performance of Xen with the static
cache partition mechanism. I made a quick slide deck to summarize the
current work and the future plan.
The slide can be found at:
http://www.cis.upenn.edu/~mengxu/cart-xen/2015-05-01-CARTXen-WiP.pdf

My question is:
Do you have any comments or concerns on the current software-based
cache management work?
I hope to hear your opinions and incorporate them into
my ongoing work instead of diverting too
far away from Xen mainstream ideas. :-)

Thank you very much!

Best regards,

Meng

---
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/



Re: [Xen-devel] [PATCH 3/3] libxc/migrationv2: Split {start, end}_of_stream() to make checkpoint variants

2015-05-10 Thread Hongyang Yang

On 05/08/2015 09:30 PM, Ian Campbell wrote:

On Fri, 2015-05-08 at 13:54 +0100, Andrew Cooper wrote:

This is in preparation for supporting checkpointed streams in migration v2.
  - For PV guests, the VCPU context is moved to end_of_checkpoint().
  - For HVM guests, the HVM context and params are moved to end_of_checkpoint().


[...]

+/**
+ * Send records which need to be at the end of the checkpoint.  This is
+ * called once, or once per checkpoint in a checkpointed stream, and is
+ * after the memory data.
+ */
+int (*end_of_checkpoint)(struct xc_sr_context *ctx);
+
+/**
+ * Send records which need to be at the end of the stream.  This is called
+ * once, before the END record is written.
   */
  int (*end_of_stream)(struct xc_sr_context *ctx);

[...]

+static int x86_hvm_end_of_stream(struct xc_sr_context *ctx)
+{
+int rc;
+
+rc = write_tsc_info(ctx);
  if ( rc )
  return rc;

-/* Write HVM_PARAMS record contains applicable HVM params. */
-rc = write_hvm_params(ctx);
+#ifdef XG_LIBXL_HVM_COMPAT
+rc = write_toolstack(ctx);


I'm not sure about this end_of_stream thing. In a checkpointing-for-fault-tolerance
scenario (Remus or COLO), failover happens when the
sender has died for some reason, and therefore it won't get the chance to
send any end-of-stream stuff.

IOW I think everything in end_of_stream actually needs to be in
end_of_checkpoint unless it is just for informational purposes in a
regular migration or something (which write_toolstack surely isn't)


Yes, all records should be sent at every checkpoint, except those that
only need to be sent once.

checkpoint:
You can see clearly from the patches that a Remus migration explicitly includes
two stages; the first stage is live migration, the second is the checkpointed
stream. The live migration is obvious: after the live migration, both the
primary and secondary are in the same state. The primary will continue
to run until the next checkpoint; at a checkpoint, we sync the secondary
state with the primary, so that both sides are in the same state. So
any record that could be changed while the guest is running should be sent
at a checkpoint.

failover:
The handling of the checkpointed stream on the restore side also includes two
stages: first buffer the records, then process the records. This is because if
the master dies while sending records, the secondary state would be
inconsistent. So we have to make sure all records are received and only then
process the records. If the master dies, the secondary can recover from the
last checkpoint state. Currently Remus failover relies on the migration
channel. If the channel breaks, we presume the master is dead, so we fail
over. The "goto err_buf" is the failover path; with goto err_buf, we discard
the current checkpoint records because they are incomplete, then resume the
guest with the last checkpoint state (the last processed records).
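
(A very rough sketch of the "buffer then process" restore loop described
above; every function and type name below is illustrative only, not the
actual libxc code:)

    for ( ;; )
    {
        struct record *rec = read_record(fd);        /* hypothetical helper */

        if ( !rec )              /* channel broke: presume the master is dead */
            break;               /* fail over to the last processed checkpoint */

        buffer_record(rec);      /* do not touch guest state yet */

        if ( rec->type == REC_TYPE_CHECKPOINT )
        {
            process_buffered_records();   /* the whole checkpoint arrived intact */
            clear_buffer();
        }
    }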



Ian.

.



--
Thanks,
Yang.



Re: [Xen-devel] [edk2] Windows 8 64bit Guest BSOD when using OVMF to boot and *install* from CDROM(when configured with more than 4G memory)

2015-05-10 Thread lidonglin
Hi all:
Did you encounter this BSOD problem? I tried
git://xenbits.xen.org/ovmf.git and other ovmf repos. I found that installing a win8
64bit OS (memory > 4G) with OVMF on the xen hypervisor always fails with a BSOD. I
also tried the same thing on kvm, and this problem does not occur there.
I don't know what the difference is.



> -Original Message-
> From: Ian Campbell [mailto:ian.campb...@citrix.com]
> Sent: 8 May 2015 20:05
> To: Wei Liu
> Cc: Laszlo Ersek; Maoming; wangxin (U); edk2-de...@lists.sourceforge.net;
> lidonglin; xen-devel@lists.xen.org; Fanhenglong; Herongguang (Stephen)
> Subject: Re: [Xen-devel] [edk2] Windows 8 64bit Guest BSOD when using OVMF
> to boot and *install* from CDROM(when configured with more than 4G
> memory)
> 
> On Fri, 2015-05-08 at 13:00 +0100, Wei Liu wrote:
> > On Fri, May 08, 2015 at 10:39:43AM +0100, Ian Campbell wrote:
> > > On Fri, 2015-05-08 at 07:31 +0100, Wei Liu wrote:
> > > > On Thu, May 07, 2015 at 10:29:55AM +0100, Ian Campbell wrote:
> > > > > On Thu, 2015-05-07 at 09:02 +0200, Laszlo Ersek wrote:
> > > > > > (Plus, you are cloning a git repo that may or may not be
> > > > > > identical to the (semi-)official edk2 git repo.)
> > > > >
> > > > > FWIW git://xenbits.xen.org/ovmf.git is a direct descendant of the
> > > >
> > > > The most up to date tested branch is
> > > >
> > > >  git://xenbits.xen.org/osstest/ovmf.git xen-tested-master
> > >
> > > Which branch is the default if you just clone the repo?
> > >
> >
> > The xen-tested-master is the only branch in that tree, so checking out
> > that repo without specifying a branch will leave you in detached head
> > state.
> 
> OK, that's better than some random untested version from whenever the
> tree was created at least!
> 
> Ian
> 



Re: [Xen-devel] [PATCH v7 00/14] enable Cache Allocation Technology (CAT) for VMs

2015-05-10 Thread Chao Peng
On Fri, May 08, 2015 at 03:11:27PM +0100, Ian Campbell wrote:
> On Fri, 2015-05-08 at 14:48 +0100, Jan Beulich wrote:
> > >>> On 08.05.15 at 15:41,  wrote:
> > > On Fri, 2015-05-08 at 16:56 +0800, Chao Peng wrote:
> > >> Changes in v7:
> > > 
> > > I've now acked the last of the tools stuff.
> > > 
> > > Jan, will you pick that up along with the remaining hypervisor stuff
> > > whenever it is ready please?
> > 
> > Sure, I'll try to remember.
> 
> Thanks.
> 
> I'm sure Chao will prod me if you don't ;-)

I will absolutely :)
Thanks for your help on this.

Chao



Re: [Xen-devel] [PATCH v7 02/14] x86: improve psr scheduling code

2015-05-10 Thread Chao Peng
On Fri, May 08, 2015 at 11:20:45AM +0100, Jan Beulich wrote:
> >>> On 08.05.15 at 10:56,  wrote:
> > Switching RMID from previous vcpu to next vcpu only needs to write
> > MSR_IA32_PSR_ASSOC once. Writing it with the value of the next vcpu is enough,
> > no need to write '0' first. The idle domain has RMID set to 0, and because the
> > MSR is already updated lazily, just switch it as it does.
> > 
> > Also move the initialization of per-CPU variable which used for lazy
> > update from context switch to CPU starting.
> > 
> > Signed-off-by: Chao Peng 
> > Reviewed-by: Andrew Cooper 
> > Reviewed-by: Dario Faggioli 
> 
> Please avoid sending again changes that got applied already.
> 
Just noticed it's already been merged. Sorry for the noise, and thanks.
Chao



Re: [Xen-devel] [PATCH Remus v2 04/10] tools/libxc: introduce DECLARE_HYPERCALL_BUFFER_USER_POINTER

2015-05-10 Thread Hongyang Yang



On 05/08/2015 06:16 PM, Andrew Cooper wrote:

On 08/05/15 10:33, Yang Hongyang wrote:

Define a user pointer that can access the hypercall buffer data.
Useful when you only need to access the hypercall buffer data.

Signed-off-by: Yang Hongyang 
---
  tools/libxc/include/xenctrl.h | 8 
  1 file changed, 8 insertions(+)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 6994c51..12e8c36 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -296,6 +296,14 @@ typedef struct xc_hypercall_buffer xc_hypercall_buffer_t;
  }

  /*
+ * Define a user pointer that can access the hypercall buffer data
+ *
+ * Useful when you only need to access the hypercall buffer data
+ */
+#define DECLARE_HYPERCALL_BUFFER_USER_POINTER(_type, _name, _hbuf)  \
+_type *_name = _hbuf->hbuf;


There are some bracketing issues here.  Please refer to my fixup patch
which I will post as soon as I can.
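
(My guess at the kind of bracketing fix meant here, an assumption rather than
Andrew's actual fixup patch: parenthesise the macro argument and leave the
trailing semicolon to the call site.)

    #define DECLARE_HYPERCALL_BUFFER_USER_POINTER(_type, _name, _hbuf) \
        _type *_name = (_hbuf)->hbuf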


I've seen those patches, will review them, thank you.



~Andrew


+
+/*
   * Declare the necessary data structure to allow a hypercall buffer
   * passed as an argument to a function to be used in the normal way.
   */


.



--
Thanks,
Yang.



Re: [Xen-devel] [PATCH Remus v2 03/10] tools/libxc: rename send_some_pages to send_dirty_pages

2015-05-10 Thread Hongyang Yang



On 05/08/2015 06:11 PM, Andrew Cooper wrote:

On 08/05/15 10:33, Yang Hongyang wrote:

rename send_some_pages to send_dirty_pages, no functional change.

Signed-off-by: Yang Hongyang 
---
  tools/libxc/xc_sr_save.c | 6 +++---
  1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
index 2394bc4..9ca4336 100644
--- a/tools/libxc/xc_sr_save.c
+++ b/tools/libxc/xc_sr_save.c
@@ -379,7 +379,7 @@ static int send_all_pages(struct xc_sr_context *ctx)
   *
   * Bitmap is bounded by p2m_size.
   */
-static int send_some_pages(struct xc_sr_context *ctx,
+static int send_dirty_pages(struct xc_sr_context *ctx,
 unsigned long *bitmap,
 unsigned long entries)


Please change the parameter alignment, so the end code is properly aligned.

Also, you can drop the *bitmap parameter, as it will always be the
bitmap found in ctx.  This will simplify some of your later patches.
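
(For illustration, the simplified shape this suggests; an assumption on my
part, not Andrew's wording:)

    /* bitmap now taken from ctx->save.dirty_bitmap_hbuf internally */
    static int send_dirty_pages(struct xc_sr_context *ctx,
                                unsigned long entries);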


ok, thank you!



~Andrew


  {
@@ -515,7 +515,7 @@ static int send_domain_memory_live(struct xc_sr_context 
*ctx)
  if ( rc )
  goto out;

-rc = send_some_pages(ctx, dirty_bitmap, stats.dirty_count);
+rc = send_dirty_pages(ctx, dirty_bitmap, stats.dirty_count);
  if ( rc )
  goto out;
  }
@@ -540,7 +540,7 @@ static int send_domain_memory_live(struct xc_sr_context 
*ctx)

  bitmap_or(dirty_bitmap, ctx->save.deferred_pages, ctx->save.p2m_size);

-rc = send_some_pages(ctx, dirty_bitmap,
+rc = send_dirty_pages(ctx, dirty_bitmap,
   stats.dirty_count + ctx->save.nr_deferred_pages);
  if ( rc )
  goto out;


.



--
Thanks,
Yang.



Re: [Xen-devel] [PATCH Remus v2 02/10] tools/libxc: introduce setup() and cleanup() on save

2015-05-10 Thread Hongyang Yang



On 05/08/2015 06:08 PM, Andrew Cooper wrote:

On 08/05/15 10:59, Hongyang Yang wrote:




In general, a good change, but some comments...


---
   tools/libxc/xc_sr_save.c | 72
+---
   1 file changed, 44 insertions(+), 28 deletions(-)

diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
index cc3e6b1..2394bc4 100644
--- a/tools/libxc/xc_sr_save.c
+++ b/tools/libxc/xc_sr_save.c
@@ -607,13 +607,10 @@ static int send_domain_memory_nonlive(struct
xc_sr_context *ctx)
   return rc;
   }

-/*
- * Save a domain.
- */
-static int save(struct xc_sr_context *ctx, uint16_t guest_type)
+static int setup(struct xc_sr_context *ctx)
   {
   xc_interface *xch = ctx->xch;
-int rc, saved_rc = 0, saved_errno = 0;
+int rc;
   DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
   (&ctx->save.dirty_bitmap_hbuf));

@@ -632,13 +629,51 @@ static int save(struct xc_sr_context *ctx,
uint16_t guest_type)
   goto err;
   }

-IPRINTF("Saving domain %d, type %s",
-ctx->domid, dhdr_type_to_str(guest_type));
-
   rc = ctx->save.ops.setup(ctx);
   if ( rc )
   goto err;

+rc = 0;
+
+ err:
+return rc;
+
+}
+
+static void cleanup(struct xc_sr_context *ctx)
+{
+xc_interface *xch = ctx->xch;
+DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
+(&ctx->save.dirty_bitmap_hbuf));
+
+xc_shadow_control(xch, ctx->domid, XEN_DOMCTL_SHADOW_OP_OFF,
+  NULL, 0, NULL, 0, NULL);
+
+if ( ctx->save.ops.cleanup(ctx) )
+PERROR("Failed to clean up");
+
+if ( dirty_bitmap )
+xc_hypercall_buffer_free_pages(xch, dirty_bitmap,
+   NRPAGES(bitmap_size(ctx->save.p2m_size)));


xc_hypercall_buffer_free_pages() if fine dealing with NULL, just like
free() is.  You can drop the conditional.


Actually this is another trick that I need to deal with those
hypercall macros.
DECLARE_HYPERCALL_BUFFER_SHADOW will define a user pointer "dirty_bitmap"
and a shadow buffer. Although xc_hypercall_buffer_free_pages takes
"dirty_bitmap" as an argument, it is also a MACRO; without the
"if ( dirty_bitmap )", the compiler will report a "dirty_bitmap unused"
error...


Ah, in which case you would be better using
xc__hypercall_buffer_free_pages() and not creating the local shadow in
the first place.
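
(Roughly, as I read the suggestion; treat the exact call below as an
assumption rather than the final code:)

    xc__hypercall_buffer_free_pages(xch, &ctx->save.dirty_bitmap_hbuf,
                                    NRPAGES(bitmap_size(ctx->save.p2m_size)));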


I thought we'd better use those MACROs which are described in the comments...
If it is OK to use xc__hypercall_buffer_free_pages(), I will fix it in
the next version.



~Andrew
.



--
Thanks,
Yang.



[Xen-devel] [qemu-mainline test] 54832: tolerable FAIL - PUSHED

2015-05-10 Thread osstest service user
flight 54832 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/54832/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 9 windows-install fail in 53914 pass in 
54832
 test-amd64-amd64-xl-multivcpu  6 xen-boot   fail pass in 53914
 test-amd64-amd64-xl-qemuu-ovmf-amd64 18 guest-start/debianhvm.repeat fail pass 
in 53914

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-freebsd10-amd64 13 guest-localmigrate  fail like 53811
 test-amd64-i386-freebsd10-i386 13 guest-localmigrate   fail like 53811
 test-armhf-armhf-libvirt 11 guest-start  fail   like 53811

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-xsm  11 guest-start  fail   never pass
 test-amd64-i386-libvirt-xsm  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-xl-xsm   11 guest-start  fail   never pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail never 
pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail never 
pass
 test-amd64-amd64-libvirt-xsm 11 guest-start  fail   never pass
 test-armhf-armhf-xl-xsm   6 xen-boot fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm  6 xen-boot fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 16 guest-stop fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-sedf-pin 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3 16 guest-stopfail never pass
 test-armhf-armhf-xl-sedf 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 16 guest-stop   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail never pass

version targeted for testing:
 qemuu                f8340b360b9bc29d48716ba8aca79df2b9544979
baseline version:
 qemuu                38003aee196a96edccd4d64471beb1b67e9b2b17


People who touched revisions under test:
  Edgar E. Iglesias 
  Juan Quintela 
  Liang Li 
  Michael Chapman 
  Peter Maydell 
  Yang Zhang 


jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmfail
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm fail
 test-amd64-amd64-libvirt-xsm fail
 test-armhf-armhf-libvirt-xsm fail
 test-amd64-i386-libvirt-xsm  fail
 test-amd64-amd64-xl-xsm  fail
 test-armhf-armhf-xl-xsm  fail
 test-amd64-i386-xl-xsm   fail
 test-amd64-amd64-xl-pvh-amd  fail
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64   

Re: [Xen-devel] [PATCH v1 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler

2015-05-10 Thread Chong Li
On Fri, May 8, 2015 at 2:49 AM, Jan Beulich  wrote:

> >>> On 07.05.15 at 19:05,  wrote:
> > --- a/xen/common/sched_rt.c
> > +++ b/xen/common/sched_rt.c
> > @@ -1085,6 +1085,9 @@ rt_dom_cntl(
> >  struct list_head *iter;
> >  unsigned long flags;
> >  int rc = 0;
> > +xen_domctl_sched_rtds_params_t *local_sched;
> > +int vcpu_index=0;
> > +int i;
>
> unsigned int
>
> > @@ -1110,6 +1113,67 @@ rt_dom_cntl(
> >  }
> >  spin_unlock_irqrestore(&prv->lock, flags);
> >  break;
> > +case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
> > +op->u.rtds.nr_vcpus = 0;
> > +spin_lock_irqsave(&prv->lock, flags);
> > +list_for_each( iter, &sdom->vcpu )
> > +vcpu_index++;
> > +spin_unlock_irqrestore(&prv->lock, flags);
> > +op->u.rtds.nr_vcpus = vcpu_index;
>
> Does dropping of the lock here and re-acquiring it below really work
> race free?
>

Here, the lock is used in the same way as the ones in the two cases
above (XEN_DOMCTL_SCHEDOP_get/putinfo). So I think if race freedom
is guaranteed in those two cases, the lock in this case works race free
as well.


> > +local_sched = xzalloc_array(xen_domctl_sched_rtds_params_t,
> > +vcpu_index);
> > +if( local_sched == NULL )
> > +{
> > +return -ENOMEM;
> > +}
>
> Pointless braces.
>
> > +vcpu_index = 0;
> > +spin_lock_irqsave(&prv->lock, flags);
> > +list_for_each( iter, &sdom->vcpu )
> > +{
> > +struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu,
> sdom_elem);
> > +
> > +local_sched[vcpu_index].budget = svc->budget / MICROSECS(1);
> > +local_sched[vcpu_index].period = svc->period / MICROSECS(1);
> > +local_sched[vcpu_index].index = vcpu_index;
>
> What use is this index to the caller? I think you rather want to tell it
> the vCPU number. That's especially also taking the use case of a
> get/set pair into account - unless you tell me that these indexes can
> never change, the indexes passed back into the set operation would
> risk to have become stale by the time the hypervisor processes the
> request.
>

I don't quite understand what "stale" means here. The array
(local_sched[ ]) and the array (in libxc) that local_sched[ ] is copied
to are both used for this get operation only. When users set per-vcpu
parameters, there are also dedicated arrays for that set operation.


>
> > +vcpu_index++;
> > +}
> > +spin_unlock_irqrestore(&prv->lock, flags);
> > +copy_to_guest(op->u.rtds.vcpus, local_sched, vcpu_index);
> > +xfree(local_sched);
> > +rc = 0;
> > +break;
> > +case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
> > +local_sched = xzalloc_array(xen_domctl_sched_rtds_params_t,
> > +op->u.rtds.nr_vcpus);
>
> While above using xzalloc_array() is warranted for security reasons,
> I don't see why you wouldn't be able to use xmalloc_array() here.
>
> > +if( local_sched == NULL )
> > +{
> > +return -ENOMEM;
> > +}
> > +copy_from_guest(local_sched, op->u.rtds.vcpus,
> op->u.rtds.nr_vcpus);
> > +
> > +for( i = 0; i < op->u.rtds.nr_vcpus; i++ )
> > +{
> > +vcpu_index = 0;
> > +spin_lock_irqsave(&prv->lock, flags);
> > +list_for_each( iter, &sdom->vcpu )
> > +{
> > +struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu,
> sdom_elem);
> > +if ( local_sched[i].index == vcpu_index )
> > +{
> > +if ( local_sched[i].period <= 0 ||
> local_sched[i].budget <= 0 )
> > + return -EINVAL;
> > +
> > +svc->period = MICROSECS(local_sched[i].period);
> > +svc->budget = MICROSECS(local_sched[i].budget);
> > +break;
> > +}
> > +vcpu_index++;
> > +}
> > +spin_unlock_irqrestore(&prv->lock, flags);
> > +}
>
> Considering a maximum size guest, these two nested loops could
> require a couple of million iterations. That's too much without any
> preemption checks in the middle.
>

The section protected by the lock is only the "list_for_each" loop, whose
running time is limited by the number of vcpus of a domain (32 at most).
If this does cause problems, I think adding a "hypercall_preempt_check()"
in the outer "for" loop may help. Is that right?


> > --- a/xen/common/schedule.c
> > +++ b/xen/common/schedule.c
> > @@ -1093,7 +1093,9 @@ long sched_adjust(struct domain *d, struct
> xen_domctl_scheduler_op *op)
> >
> >  if ( (op->sched_id != DOM2OP(d)->sched_id) ||
> >   ((op->cmd != XEN_DOMCTL_SCHEDOP_putinfo) &&
> > -  (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo)) )
> > +  (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo) &&
> > +  (op->cmd != XEN_DOMCTL_SCHEDOP_putvcpuinfo) &&

[Xen-devel] [xen-unstable test] 54309: regressions - trouble: blocked/broken/fail/pass

2015-05-10 Thread osstest service user
flight 54309 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/54309/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-amd64 13 guest-localmigrate fail REGR. vs. 50405
 build-amd64-xsm   5 xen-build fail REGR. vs. 50405

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-sedf-pin  3 host-install(3) broken REGR. vs. 50405
 test-amd64-i386-freebsd10-i386 13 guest-localmigrate   fail like 50405
 test-armhf-armhf-libvirt 11 guest-start  fail   like 50405

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-armhf-armhf-libvirt-xsm  6 xen-boot fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-armhf-armhf-xl-xsm   6 xen-boot fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 16 guest-stop fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-sedf 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3 16 guest-stopfail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 16 guest-stop   fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 16 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 16 guest-stop   fail never pass
 test-amd64-i386-xl-qemut-winxpsp3 16 guest-stopfail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail never pass

version targeted for testing:
 xen  a57b1fff48a8e7f3791156f9cfafc174b9f18e2b
baseline version:
 xen  123c7793797502b222300eb710cd3873dcca41ee


People who touched revisions under test:
  Andrew Cooper 
  Boris Ostrovsky 
  Chao Peng 
  Chen Baozi 
  Christoffer Dall 
  Daniel De Graaf 
  Dario Faggioli 
  David Vrabel 
  Don Slutz 
  Edgar E. Iglesias 
  Emil Condrea 
  Eugene Korenevsky 
  Fabio Fantoni 
  George Dunlap 
  Giuseppe Mazzotta 
  Gustau Perez 
  Ian Campbell 
  Ian Campbell 
  Ian Jackson 
  Jan Beulich 
  Jim Fehlig 
  Julien Grall 
  Julien Grall 
  Kai Huang 
  Kevin Tian 
  Konrad Rzeszutek Wilk 
  Liang Li 
  Linda Jacobson 
  Nathan Studer 
  Olaf Hering 
  Paul Durrant 
  Pranavkumar Sawargaonkar 
  Rafał Wojdyła 
  Robert VanVossen 
  Roger Pau Monné 
  Ross Lagerwall 
  Stefano Stabellini 
  Stefano Stabellini 
  Suravee Suthikulpanit 
  Tamas K Lengyel 
  Tamas K Lengyel 
  Tiejun Chen 
  Tim Deegan 
  Vitaly Kuznetsov 
  Wei Liu 
  Zhou Peng 


jobs:
 build-amd64-xsm  fail
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-oldkern  pass
 build-i386-oldkern   pass
 build-amd64-pvops  

[Xen-devel] [linux-linus test] 54095: regressions - FAIL

2015-05-10 Thread osstest service user
flight 54095 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/54095/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-rumpuserxen-i386 15 rumpuserxen-demo-xenstorels/xenstorels.repeat fail REGR. vs. 50329
 test-armhf-armhf-xl-multivcpu  6 xen-boot fail REGR. vs. 50329
 test-armhf-armhf-xl-arndale   6 xen-boot  fail REGR. vs. 50329
 test-armhf-armhf-xl   6 xen-boot  fail REGR. vs. 50329

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-libvirt  6 xen-boot  fail REGR. vs. 50329
 test-armhf-armhf-xl-sedf  6 xen-boot  fail REGR. vs. 50329
 test-armhf-armhf-xl-sedf-pin  6 xen-boot  fail REGR. vs. 50329
 test-amd64-i386-freebsd10-i386  9 freebsd-install  fail like 50329
 test-amd64-i386-freebsd10-amd64  9 freebsd-install fail like 50329

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 9 debian-hvm-install fail never pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 9 debian-hvm-install fail never pass
 test-amd64-i386-libvirt-xsm  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail never pass
 test-amd64-amd64-xl-xsm  11 guest-start  fail   never pass
 test-amd64-i386-xl-xsm   11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-libvirt-xsm 11 guest-start  fail   never pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail never pass
 test-armhf-armhf-libvirt-xsm  6 xen-boot fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 16 guest-stop   fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 16 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 16 guest-stop   fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 16 guest-stop fail never pass
 test-amd64-i386-xl-qemut-winxpsp3 16 guest-stopfail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-xsm   6 xen-boot fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3 16 guest-stopfail never pass

version targeted for testing:
 linux  af6472881a6127ad075adf64e459d2905fbc8a5c
baseline version:
 linux  1cced5015b171415169d938fb179c44fe060dc15


1956 people touched revisions under test,
not listing them all


jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  pass
 build-i386-rumpuserxen   pass
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  fail
 test-amd64-i386-xl   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsmfail
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm fail
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmfail
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 

Re: [Xen-devel] pvgrub regression in xen.git 123c77937975

2015-05-10 Thread Ian Campbell
On Sun, 2015-05-10 at 13:20 +0100, Wei Liu wrote:
> On Sun, May 10, 2015 at 07:19:09AM +0100, Ian Campbell wrote:
> > On Fri, 2015-05-08 at 17:16 +0100, Ian Campbell wrote:
> > > There seems to be a pvgrub regression somewhere in the range
> > > 3a28f760508f..123c77937975
> > 
> > The bisector has fingered:
> > 
> > commit 840837907c6186307c19abbec926852ba448facd
> >   Author: Wei Liu 
> >   Date:   Mon Mar 16 09:52:22 2015 +
> >   
> >   libxc: add p2m_size to xc_dom_image
> > 
> > Full report below. I've put a copy of the graph at:
> > http://xenbits.xen.org/people/ianc/tmp/201505/bisect.pvgrub.html
> > 
> > Wei, you can see an instance of the failure at
> > http://osstest.xs.citrite.net/~osstest/testlogs/logs/37344/
> 
> Unfortunately Citrix access gateway doesn't work for me at the moment so
> I couldn't see the log.

FWIW http://xenbits.xen.org/people/ianc/tmp/201505/37344/ (no build-*
logs, just test-*), but it looks like you don't actually need them.

> But I've looked at the code. PV grub's kexec function doesn't update the
> newly added field. I will try to reproduce on my local test box.

I've seen your patch, will take a proper look tomorrow.

Thanks for the quick turnaround!

Ian.




[Xen-devel] [PATCH] pvgrub: initialise p2m_size

2015-05-10 Thread Wei Liu
In 84083790 ("libxc: add p2m_size to xc_dom_image") a new field was
added. We should initialise this field in pvgrub as well, otherwise
xc_dom_build_image won't work properly.

Signed-off-by: Wei Liu 
Cc: Ian Campbell 
Cc: Ian Jackson 
---
 stubdom/grub/kexec.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/stubdom/grub/kexec.c b/stubdom/grub/kexec.c
index dc8db81..4c33b25 100644
--- a/stubdom/grub/kexec.c
+++ b/stubdom/grub/kexec.c
@@ -276,12 +276,13 @@ void kexec(void *kernel, long kernel_size, void *module, long module_size, char
 dom->total_pages = start_info.nr_pages;
 
 /* equivalent of arch_setup_meminit */
+dom->p2m_size = dom->total_pages;
 
 /* setup initial p2m */
-dom->p2m_host = malloc(sizeof(*dom->p2m_host) * dom->total_pages);
+dom->p2m_host = malloc(sizeof(*dom->p2m_host) * dom->p2m_size);
 
 /* Start with our current P2M */
-for (i = 0; i < dom->total_pages; i++)
+for (i = 0; i < dom->p2m_size; i++)
 dom->p2m_host[i] = pfn_to_mfn(i);
 
 if ( (rc = xc_dom_build_image(dom)) != 0 ) {
-- 
1.9.1




[Xen-devel] [linux-3.4 test] 53959: regressions - FAIL

2015-05-10 Thread osstest service user
flight 53959 linux-3.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/53959/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair15 debian-install/dst_host fail REGR. vs. 32769-bisect
 test-amd64-amd64-xl   9 debian-install fail REGR. vs. 52209-bisect
 test-amd64-amd64-pair   15 debian-install/dst_host fail REGR. vs. 52715-bisect

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-libvirt  9 debian-install fail REGR. vs. 32428-bisect
 test-amd64-amd64-xl-multivcpu  6 xen-boot fail blocked in 53725-bisect
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 6 xen-boot fail blocked in 53725-bisect
 test-amd64-amd64-xl-qemut-winxpsp3  6 xen-bootfail blocked in 53725-bisect
 test-amd64-amd64-libvirt-xsm  6 xen-boot  fail blocked in 53725-bisect
 test-amd64-i386-xl-qemuu-winxpsp3  6 xen-boot fail blocked in 53725-bisect
 test-amd64-amd64-xl-xsm   6 xen-boot  fail blocked in 53725-bisect
 test-amd64-i386-qemut-rhel6hvm-intel  6 xen-boot  fail blocked in 53725-bisect
 test-amd64-i386-freebsd10-amd64  6 xen-boot   fail blocked in 53725-bisect
 test-amd64-i386-xl-qemuu-debianhvm-amd64 6 xen-boot fail blocked in 53725-bisect
 test-amd64-i386-libvirt-xsm   6 xen-boot  fail blocked in 53725-bisect
 test-amd64-amd64-xl-sedf  9 debian-installfail blocked in 53725-bisect
 test-amd64-i386-libvirt   9 debian-installfail blocked in 53725-bisect
 test-amd64-i386-freebsd10-i386 16 guest-localmigrate/x10 fail blocked in 53725-bisect
 test-amd64-amd64-xl-sedf-pin  6 xen-boot  fail blocked in 53725-bisect
 test-amd64-i386-xl-qemut-debianhvm-amd64 6 xen-boot fail blocked in 53725-bisect
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 6 xen-boot fail blocked in 53725-bisect
 test-amd64-i386-qemuu-rhel6hvm-intel  6 xen-boot  fail blocked in 53725-bisect
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 6 xen-boot fail blocked in 53725-bisect
 test-amd64-i386-rumpuserxen-i386  6 xen-boot  fail blocked in 53725-bisect
 test-amd64-i386-xl-qemuu-win7-amd64 9 windows-install fail blocked in 53725-bisect
 test-amd64-amd64-xl-qemuu-ovmf-amd64  6 xen-bootfail like 53709-bisect
 test-amd64-i386-xl6 xen-bootfail like 53725-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail never pass
 test-amd64-amd64-xl-credit2   9 debian-install   fail   never pass
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 9 debian-hvm-install fail never pass
 test-amd64-amd64-xl-pvh-intel  9 debian-install   fail  never pass
 test-amd64-i386-xl-xsm9 debian-install   fail   never pass
 test-amd64-amd64-xl-pvh-amd   9 debian-install   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail never pass
 test-amd64-i386-xl-qemut-winxpsp3 16 guest-stopfail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 16 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 16 guest-stop   fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 16 guest-stop fail never pass

version targeted for testing:
 linux  56b48fcda5076d4070ab00df32ff5ff834e0be86
baseline version:
 linux  bb4a05a0400ed6d2f1e13d1f82f289ff74300a70


370 people touched revisions under test,
not listing them all


jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  pass
 build-i386-rumpuserxen   pass
 test-amd64-amd64-xl  fail
 test-amd64-i386-xl   fail
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsmfail
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm fail
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmfail
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm fail
 test-amd64-amd64-lib

Re: [Xen-devel] pvgrub regression in xen.git 123c77937975

2015-05-10 Thread Wei Liu
On Sun, May 10, 2015 at 07:19:09AM +0100, Ian Campbell wrote:
> On Fri, 2015-05-08 at 17:16 +0100, Ian Campbell wrote:
> > There seems to be a pvgrub regression somewhere in the range
> > 3a28f760508f..123c77937975
> 
> The bisector has fingered:
> 
> commit 840837907c6186307c19abbec926852ba448facd
>   Author: Wei Liu 
>   Date:   Mon Mar 16 09:52:22 2015 +
>   
>   libxc: add p2m_size to xc_dom_image
> 
> Full report below. I've put a copy of the graph at:
> http://xenbits.xen.org/people/ianc/tmp/201505/bisect.pvgrub.html
> 
> Wei, you can see an instance of the failure at
> http://osstest.xs.citrite.net/~osstest/testlogs/logs/37344/

Unfortunately Citrix access gateway doesn't work for me at the moment so
I couldn't see the log.

But I've looked at the code. PV grub's kexec function doesn't update the
newly added field. I will try to reproduce on my local test box.

Wei.



[Xen-devel] [ovmf test] 53940: regressions - FAIL

2015-05-10 Thread osstest service user
flight 53940 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/53940/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-i386 13 guest-localmigrate  fail REGR. vs. 52776

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-freebsd10-amd64 15 guest-localmigrate.2fail like 52776

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 9 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 9 debian-hvm-install fail never pass
 test-amd64-i386-xl-xsm   11 guest-start  fail   never pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt-xsm 11 guest-start  fail   never pass
 test-amd64-amd64-xl-xsm  11 guest-start  fail   never pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail never pass
 test-amd64-i386-libvirt-xsm  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 16 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 16 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 16 guest-stop   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3 16 guest-stopfail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 16 guest-stop   fail never pass
 test-amd64-i386-xl-qemut-winxpsp3 16 guest-stopfail never pass

version targeted for testing:
 ovmf feca17fa4bf15b5994d520969a6756fff8f809fb
baseline version:
 ovmf 6e746540c33bb6a1d9affba24f6acb51e9122f7e


People who touched revisions under test:
  "Ma, Maurice" 
  "Mudusuru, Giri P" 
  "Yao, Jiewen" 
  Ard Biesheuvel 
  Chao Zhang 
  Eric Dong 
  Feng Tian 
  Fu Siyuan 
  Hao Wu 
  Jeff Fan 
  jiaxinwu 
  Laszlo Ersek 
  Liming Gao 
  Ma, Maurice 
  Michael Kinney 
  Mudusuru, Giri P 
  Olivier Martin 
  Qiu Shumin 
  Ronald Cron 
  Ruiyu Ni 
  Shifei Lu 
  Tim He 
  Yao, Jiewen 
  Ye Ting 


jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsmfail
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm fail
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmfail
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm fail
 test-amd64-amd64-libvirt-xsm fail
 test-amd64-i386-libvirt-xsm  fail
 test-amd64-amd64-xl-xsm  fail
 test-amd64-i386-xl-xsm   fail
 test-amd64-amd64-xl-pvh-amd  fail
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64pass
 test-amd64-i386-xl-qemut-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  fail
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass
 test-amd64-amd64-xl-qemut-win7-amd64 fail