will finally make it work as expected!
Aha! Seems to work now! Thanks for all the useful feedback and quick responses!
Regards,
Dagaen Golomb
Ph.D. Student, University of Pennsylvania
so if it's done at compile time this could be the issue. I doubt it's
done at boot, but if so I would presume there is a way to disable it?
Below is the config file grepped for "SMP".
CONFIG_X86_64_SMP=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_SMP=y
# CONFIG_X86_VSMP is not set
# CONFIG_MAXSMP is not set
CONFIG_PM_SLEEP_SMP=y
See anything problematic? It seems PV spinlocks are not set, and SMP is
enabled... or is something else required to prevent stripping the
spinlocks? Also not sure if any of the set SPIN config items could
mess with this. If this is done at boot, a pointer in the direction of
preventing it would be appreciated!
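For reference, a quick way to check is (assuming a standard /boot layout):

    grep CONFIG_PARAVIRT_SPINLOCKS /boot/config-$(uname -r)

If that option is set, the usual boot-time switch is the xen_nopvspin
kernel parameter, which tells a Xen guest not to use the PV spinlock
paths; adding it to the domU kernel command line should help rule the
compile-time question in or out without a rebuild.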
Regards,
Dagaen Golomb
Ph.D. Student, University of Pennsylvania
>> >> will surprise me, though), that will be great. Otherwise, we probably
>> >> have to expose the spin_lock in Xen to Linux?
>>
>> > I'd think this has to be via the hypervisor (or some other third party).
>> > Otherwise what happens if one of the guests dies while holding the lock?
>> > -boris
>>
>> This is a valid point against locking in the guests, but by itself it
>> won't prevent a spinlock implementation from working! We may move in
>> this direction for several reasons, but I am interested in why the
>> above is not working when I've disabled the PV part that sleeps vcpus.
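For reference, the sort of guest-side lock under discussion is just a
test-and-set over a word in the shared page, along the lines of this
minimal sketch (names are illustrative, and per Boris's point a
holder's domain dying leaves everyone else spinning):

#include <stdint.h>

/* Lock word living in the grant-mapped shared page. */
typedef volatile uint32_t shared_lock_t;

static void shared_lock_acquire(shared_lock_t *l)
{
    while (__sync_lock_test_and_set(l, 1))   /* atomic exchange, acquire */
        while (*l)
            ;                                /* spin on plain reads until free */
}

static void shared_lock_release(shared_lock_t *l)
{
    __sync_lock_release(l);                  /* store 0 with release semantics */
}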
Regards,
Dagaen Golomb
On May 16, 2016 09:13, "Jonathan Creekmore" <jonathan.creekm...@gmail.com> wrote:
>
> Dagaen Golomb writes:
>
> > It does, being the custom kernel on version 4.1.0. But Dom0 uses this
> > same exact kernel and reads/writes just fine!
> >>> something similar? I'm using the xen/xenstore.h header
> >>> file for all of my xenstore interactions. I'm running Xen 4.7 so it
> >>> should be in /dev/, and the old kernel is before 3.14 but the new one
> >>> is after, but I would presume the standard header
> On 5/15/16 11:40 AM, Dagaen Golomb wrote:
> > Hi All,
> >
> > I'm having an interesting issue. I am working on a project that
> > requires me to share memory between dom0 and domUs. I have this
> > successfully working using the grant table and the XenStore to
>> However, I do not see why the kernel modification would be the
>> issue as described above. I also have the dom0 running this kernel and
>> it reads and writes the XenStore just dandy. Are there any kernel
>> config issues that could do this?
>
> What if you use the .config of the kernel in the working domU to
> compile the kernel in th
> 88007B2F3390. This is supposed
> to be the key name, I presume (such as gref). Maybe it's an issue with
> the compiled binary, and it ends up watching on a key that doesn't
> exist (nor ever will). I will look into this as it looks promising!
> Thanks!
Update: I checked further up the logs and the other working kernels
produce values like this as well. Also, I am using the same exact
binary between kernel versions, not recompiling.
>
> I don't see any other logs for xenstore; if there are more, please
> point me to them. xenstored.log in the same directory is recognized as
> binary, and when I open it anyway all I see is "Xen Storage Daemon,
> version 1.0" repeatedly.
Regards,
Dagaen Golomb
Ph.D. Student, University of Pennsylvania
itself.
I have an inkling this may be something as simple as a configuration
issue, but I can't seem to find anything. Also, the fact that writes
work fine but reads do not is perplexing me.
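For reference, the minimal libxenstore round trip that should work here
looks roughly like this (a sketch; the key path is illustrative, and
the header may be <xenstore.h> or xen/xenstore.h depending on the
install):

#include <stdio.h>
#include <stdlib.h>
#include <xenstore.h>

int main(void)
{
    struct xs_handle *xs = xs_open(0);  /* uses /dev/xen/xenbus on 3.14+ kernels */
    if (!xs) {
        perror("xs_open");
        return 1;
    }

    const char *path = "/local/domain/0/data/gref";  /* illustrative key */

    if (!xs_write(xs, XBT_NULL, path, "42", 2))
        fprintf(stderr, "write failed\n");

    unsigned int len;
    char *val = xs_read(xs, XBT_NULL, path, &len);   /* caller frees */
    if (val) {
        printf("read back: %.*s\n", (int)len, val);
        free(val);
    } else {
        fprintf(stderr, "read failed\n");
    }

    xs_close(xs);
    return 0;
}

If the same binary writes fine but xs_read returns NULL on only one
kernel, comparing errno there against the working kernel may help
narrow it down.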
Any help would be appreciated!
Regards,
Dagaen Golomb
Ph.D. Student, University of Pennsylvania
the __RTDS_scheduled flag.
I also don't feel we need another list.
Regards,
~Dagaen Golomb
, and moving the running vcpus to the runq will make
the scheduler almost complete as far as functional correctness goes.
Various other features hinted at would be a series on top of this.
Regards,
~Dagaen Golomb
, I was about to make the same comment. :-)
I believe I put that there with the intention of having some
invocations even with an empty queue, but this may be a bug. I don't
think we should need a minimum invocation period, so I may remove this
and the associated constant.
Regards
~Dagaen
down in the code and check; that, IMO, would be best. :-D
I will research it myself. :)
Regards,
~Dagaen Golomb
as well. (Yes, it is different here since we can get more useful
information to tickle the cpu if we put vCPUs into the runq instead of
adding one more queue.) :-)
I think it is straightforward to simply use the runq. I will work on
this implementation.
Regards,
~Dagaen Golomb
. The ordering in this case doesn't seem to cause
any functional/behavior problems, but it will cause rt_schedule to run
twice when it could have run once. So, even as a corner case, it would
seem that it's a performance corner case and not a behavior one.
~Dagaen Golomb
we wanted to avoid with this
restructuring. Additional logic to enforce that a replenishment always
goes first may be more than we would like. I'll have to look more into
the Xen timer behavior with regard to this matter.
~Dagaen Golomb
To do this, we create a new list that holds, for each
vcpu, the earliest future time at which it may need to be
rescheduled. The scheduler chooses the lowest time off of this
list and waits until the specified time instead of running every
1 ms as it did before.
Signed-off-by: Dagaen Golomb
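In code terms, the core of the idea is roughly the following (a sketch
only; the structure and field names are assumptions, not the actual
patch):

/* Pick the earliest time at which any vcpu may need a scheduling
 * decision; rt_schedule() then sleeps until that time instead of
 * ticking every 1 ms. */
static s_time_t rt_next_event(struct rt_private *prv, s_time_t now)
{
    s_time_t next = now + MILLISECS(1000);  /* fallback upper bound */
    struct rt_vcpu *svc;

    list_for_each_entry ( svc, &prv->event_list, event_elem )
        if ( svc->next_sched_needed < next )
            next = svc->next_sched_needed;

    return next;
}

/* In rt_schedule(): ret.time = rt_next_event(prv, now) - now,
 * rather than the old ret.time = MILLISECS(1). */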
if the two timers are armed for the same time. It should be
correct for the common case.
Dario, let me know if this is closer to what you envisioned.
~Dagaen
, 2015-06-18 at 14:07 -0400, Dagaen Golomb wrote:
Anyway, I've zero interest in turning this into a fight over
terminology... If you want to call runq_tickle() the scheduler, go
ahead, it would just make communication a bit more difficult, but I'm up
for the challenge! :-)
Oh, BTW, while
Yes, this is an option. However, I thought this would actually be an
option you would not like.
How so... I've been arguing for this the whole time?!?! :-O
I'm sure I've put down a sketch of what I think the replenishment
function should do in my first or second email in the thread, and
Thanks for the reply, budget enforcement in the scheduler timer makes
sense. I think I have an idea of what he wants done now.
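A rough sketch of what budget enforcement kept inside the scheduler
path could look like (field names are illustrative, not the actual
sched_rt.c code):

static void burn_budget(struct rt_vcpu *svc, s_time_t now)
{
    s_time_t delta = now - svc->last_start;

    if ( delta <= 0 )
        return;

    svc->cur_budget -= delta;
    svc->last_start = now;

    if ( svc->cur_budget <= 0 )
        svc->cur_budget = 0;  /* depleted: deschedule until the
                               * replenishment timer refills it */
}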
~Dagaen
On Jun 17, 2015 1:45 AM, Meng Xu xumengpa...@gmail.com wrote:
Hi Dagaen,
I will just comment on the summary of the scheduler design you
proposed at the end of the
into how the behavior could be implemented correctly and beautifully
using the multiple timer approach. I simply don't see how it can be
done without heavy interaction and information sharing between them,
which really defeats the purpose.
Regards,
~Dagaen
Thanks for this actually... I love discussing these things; it reminds
me of the time when I was doing this stuff myself, and makes me feel
young! :-P
And thank you for the very detailed and well-thought-out response!
Separating the replenishment from the scheduler may be problematic. The
No HTML, please.
Got it, sorry.
And note that, when I say timer, I mean an actual Xen timer, i.e.,
those things that are started, stopped, and with a timer handling
routine being called when they expire. For an example, you can have a
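For reference, that API looks roughly like this (a minimal sketch; the
replenishment handler, its body, and the timing values are
illustrative):

#include <xen/timer.h>

static struct timer repl_timer;

/* Called when the timer expires. */
static void repl_handler(void *data)
{
    /* perform replenishments here, then re-arm for the next one,
     * e.g. set_timer(&repl_timer, next_replenishment_time); */
}

static void repl_timer_setup(unsigned int cpu)
{
    /* bind the handler (plus data and cpu) to the timer ... */
    init_timer(&repl_timer, repl_handler, NULL, cpu);
    /* ... and start it; stop_timer(&repl_timer) cancels it */
    set_timer(&repl_timer, NOW() + MILLISECS(10));
}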
, making sure each
runs a long time to remove biases.
On Mon, 2015-06-08 at 07:46 -0400, Dagaen Golomb wrote:
To do this, we create a new list that holds, for each
vcpu, the earliest future time at which it may need to be
rescheduled.
Ok. Actually, what I really expected to see
...
== Hypervisor ==
[...]
* Improve RTDS scheduler (none)
Change RTDS from quantum driven to event driven
- Dagaen Golomb, Meng Xu, Chong Li
...
Ok.
The patch for this is out:
http://osdir.com/ml/general/2015-06/msg10265.html
Looking forward to comments.
Regards,
Dagaen Golomb
To do this, we create a new list that holds, for each
vcpu, the earliest future time at which it may need to be
rescheduled. The scheduler chooses the lowest time off of this
list and waits until the specified time instead of running every
1 ms as it did before.
Signed-off-by: Dagaen Golomb
All,
I expect to have a patch out soon for the RTDS scheduler improvement.
Regards,
Dagaen Golomb
On Thu, Mar 12, 2015 at 12:01 PM, Olaf Hering o...@aepfle.de wrote:
On Thu, Mar 12, Ian Campbell wrote:
dist/install/var/xen/dump
which all seems proper and correct to me.
Except
are received.
This improvement will only require changes to the RTDS scheduler file
(sched_rt.c) and will not require changes to any other Xen subsystems.
Discussion, comments, and suggestions are welcome.
Regards,
Dagaen Golomb