fork

2016-04-17 Thread Nitin Varyani
Linux Kernel Development by Robert Love describes the fork path as

fork() -> clone() -> do_fork() -> copy_process()

I am unable to find the clone() system call in Linux 3.13.
Can someone explain the proper flow of the fork() system call as initiated
by the user?

Where can I find the libc implementation of fork()? I want the code for all
the functions involved in fork.
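
For reference, a minimal user-space sketch (an illustration only, not the
actual glibc source) of the idea that fork() typically boils down to the
clone syscall with SIGCHLD as the child-exit signal, which is what enters
the kernel's do_fork()/copy_process() path:

/* Hedged illustration: roughly what fork() does on x86-64 -- no CLONE_*
 * sharing flags, so the child gets a copy-on-write copy of the parent's
 * address space and stack. */
#define _GNU_SOURCE
#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/wait.h>

int main(void)
{
    long child = syscall(SYS_clone, SIGCHLD, NULL, NULL, NULL, 0UL);
    if (child == 0) {
        /* use the raw syscall for getpid, since glibc caching is bypassed */
        printf("child: pid %ld\n", (long)syscall(SYS_getpid));
        _exit(0);
    }
    waitpid((pid_t)child, NULL, 0);
    printf("parent: created child %ld\n", child);
    return 0;
}
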
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: system call

2016-04-09 Thread Nitin Varyani
I am using Ubuntu.

On Sat, Apr 9, 2016 at 8:04 PM, Pranay Srivastava  wrote:

> On Sat, Apr 9, 2016 at 7:51 PM, Nitin Varyani 
> wrote:
> > I have a 64 bit machine
> >
>
> Before changing the source try to build, install and boot your kernel.
> I'm sure there are some extra steps you might need to perform to boot your
> compiled kernel. Which distro are you using? I used OpenSuse for this work.
>
>
> > I am changing linux 3.5.4 source tree.
> >
> > I modified syscall_64.tbl
> > I had put your code in linux_3.5.4/arch/x86/pks_first/pks_first_call.c
> >
> > Then, I created pks_first/Makefile
> >
> > Modified the arch/x86/Kbuild
> >
> > Modified include/linux/syscalls.h
> >
> >
> > I ran "make menuconfig" and then simply exit.
> >
> > Then I ran
> >
> > "make"
> >
> > I saw that pks_first_call.o was created
> >
> > I then ran
> >
> > "make modules_install"
> > "make install"
> >
> > After then I restarted my system and booted linux 3.5.4. But it was stuck
> > half way.
> >
> > On Sat, Apr 9, 2016 at 7:39 PM, Pranay Srivastava 
> wrote:
> >>
> >> Hi Nitin
> >>
> >> On Sat, Apr 9, 2016 at 5:03 PM, Nitin Varyani  >
> >> wrote:
> >> > Neither of the solution is working.
> >> > @ Pranay: kernel is not booting after making the changes you have
> >> > mentioned.
> >> > somethings like
> >> > "dropping to shell
> >> > initramfs:"
> >> > is displayed on booting.
> >>
> >> I don't think this is related to the changes you made. I would advise
> >> you just build the sources for your
> >> distro and try to get to boot the kernel you compiled. Perhaps some
> >> steps you might have missed specific to your
> >> distro?
> >>
> >> >
> >> >
> >> > On Thu, Apr 7, 2016 at 1:08 PM, Pranay Srivastava 
> >> > wrote:
> >> >>
> >> >> Nitin
> >> >>
> >> >>
> >> >> On Thu, Apr 7, 2016 at 11:53 AM, Nitin Varyani
> >> >> 
> >> >> wrote:
> >> >> >
> >> >> > Hi,
> >> >> >   I want to implement a system call as explained in Linux
> kernel
> >> >> > development by Robert Love.
> >> >> >
> >> >> > He does three things
> >> >> >  adding entry to entry.S
> >> >> > adding entry to asm/unistd.h
> >> >> > and adding the system call code to sched.c
> >> >> >
> >> >> >
> >> >> > and then make + make install
> >> >> >
> >> >> > I do not want to implement for all architectures but only for my PC
> >> >> > which is 64 bit. I am not able to locate files entry. S and
> unistd.h
> >> >> > which
> >> >> > he is telling in his tutorial.
> >> >> > Please help me out to figure out the exact steps. Please also
> mention
> >> >> > the linux kernel version I should use.
> >> >> >
> >> >>
> >> >> Please refer this. I wrote this quite a while back but should be good
> >> >> to
> >> >> go.
> >> >>
> >> >>
> >> >>
> >> >>
> http://codewithkernel.blogspot.my/2014/06/adding-new-system-call-in-linux-x86-and.html
> >> >>
> >> >> > Nitin
> >> >> >
> >> >> >
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> ---P.K.S
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> ---P.K.S
> >
> >
>
>
>
> --
> ---P.K.S
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: system call

2016-04-09 Thread Nitin Varyani
I have a 64-bit machine.

I am modifying the Linux 3.5.4 source tree.

I modified syscall_64.tbl and put your code in
linux_3.5.4/arch/x86/pks_first/pks_first_call.c.

Then, I created pks_first/Makefile

Modified the arch/x86/Kbuild

Modified include/linux/syscalls.h


I ran "make menuconfig" and then simply exit.

Then I ran

"make"

I saw that pks_first_call.o was created

I then ran

"make modules_install"
"make install"

After that, I restarted my system and booted Linux 3.5.4, but it got stuck
halfway through booting.
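
For anyone following along, the pieces described above would look roughly
like the sketch below; the syscall number, paths and names are illustrative
assumptions, not the exact contents of the linked blog post:

/*
 * Illustrative sketch only -- the number 313 and the names are assumptions.
 *
 * arch/x86/syscalls/syscall_64.tbl gains one line (shown here as a comment):
 *   313   common   pks_first_call   sys_pks_first_call
 *
 * arch/x86/pks_first/pks_first_call.c provides the implementation, and
 * include/linux/syscalls.h gets a matching
 * "asmlinkage long sys_pks_first_call(void);" declaration.
 */
#include <linux/kernel.h>
#include <linux/syscalls.h>

SYSCALL_DEFINE0(pks_first_call)
{
        printk(KERN_INFO "pks_first_call: hello from a custom syscall\n");
        return 0;
}
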

On Sat, Apr 9, 2016 at 7:39 PM, Pranay Srivastava  wrote:

> Hi Nitin
>
> On Sat, Apr 9, 2016 at 5:03 PM, Nitin Varyani 
> wrote:
> > Neither of the solution is working.
> > @ Pranay: kernel is not booting after making the changes you have
> mentioned.
> > somethings like
> > "dropping to shell
> > initramfs:"
> > is displayed on booting.
>
> I don't think this is related to the changes you made. I would advise
> you just build the sources for your
> distro and try to get to boot the kernel you compiled. Perhaps some
> steps you might have missed specific to your
> distro?
>
> >
> >
> > On Thu, Apr 7, 2016 at 1:08 PM, Pranay Srivastava 
> wrote:
> >>
> >> Nitin
> >>
> >>
> >> On Thu, Apr 7, 2016 at 11:53 AM, Nitin Varyani <
> varyani.nit...@gmail.com>
> >> wrote:
> >> >
> >> > Hi,
> >> >   I want to implement a system call as explained in Linux kernel
> >> > development by Robert Love.
> >> >
> >> > He does three things
> >> >  adding entry to entry.S
> >> > adding entry to asm/unistd.h
> >> > and adding the system call code to sched.c
> >> >
> >> >
> >> > and then make + make install
> >> >
> >> > I do not want to implement for all architectures but only for my PC
> >> > which is 64 bit. I am not able to locate files entry. S and unistd.h
> which
> >> > he is telling in his tutorial.
> >> > Please help me out to figure out the exact steps. Please also mention
> >> > the linux kernel version I should use.
> >> >
> >>
> >> Please refer this. I wrote this quite a while back but should be good to
> >> go.
> >>
> >>
> >>
> http://codewithkernel.blogspot.my/2014/06/adding-new-system-call-in-linux-x86-and.html
> >>
> >> > Nitin
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> ---P.K.S
> >
> >
>
>
>
> --
> ---P.K.S
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: system call

2016-04-09 Thread Nitin Varyani
Neither of the solutions is working.
@Pranay: the kernel is not booting after making the changes you mentioned.
Something like
"dropping to shell
initramfs:"
is displayed on boot.


On Thu, Apr 7, 2016 at 1:08 PM, Pranay Srivastava  wrote:

> Nitin
>
>
> On Thu, Apr 7, 2016 at 11:53 AM, Nitin Varyani 
> wrote:
> >
> > Hi,
> >   I want to implement a system call as explained in Linux kernel
> development by Robert Love.
> >
> > He does three things
> >  adding entry to entry.S
> > adding entry to asm/unistd.h
> > and adding the system call code to sched.c
> >
> >
> > and then make + make install
> >
> > I do not want to implement for all architectures but only for my PC
> which is 64 bit. I am not able to locate files entry. S and unistd.h which
> he is telling in his tutorial.
> > Please help me out to figure out the exact steps. Please also mention
> the linux kernel version I should use.
> >
>
> Please refer this. I wrote this quite a while back but should be good to
> go.
>
>
> http://codewithkernel.blogspot.my/2014/06/adding-new-system-call-in-linux-x86-and.html
>
> > Nitin
> >
> >
>
>
>
> --
> ---P.K.S
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


system call

2016-04-06 Thread Nitin Varyani
Hi,
  I want to implement a system call as explained in Linux Kernel
Development by Robert Love.

He does three things:
- adding an entry to entry.S
- adding an entry to asm/unistd.h
- adding the system call code to sched.c

and then make + make install.

I do not want to implement it for all architectures but only for my PC,
which is 64-bit. I am not able to locate the files entry.S and unistd.h
which he refers to in his book.
Please help me figure out the exact steps, and please also mention which
Linux kernel version I should use.

Nitin
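
Once such a syscall is wired up and the kernel boots, a quick user-space
test can invoke it by number via syscall(2); the number 313 below is an
assumption and must match whatever was added to the syscall table:

/* Hypothetical test program; 313 is an assumed syscall number. */
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    long ret = syscall(313);
    printf("custom syscall returned %ld (check dmesg for its printk)\n", ret);
    return 0;
}
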
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Attach my own pid

2016-03-27 Thread Nitin Varyani
Rather, this will also suffice:

if (*pid == NULL*) {
        retval = -ENOMEM;
        pid = alloc_pid(p->nsproxy->pid_ns);
        if (!pid)
                goto bad_fork_cleanup_io;
}

p->pid = pid_nr(pid);

On Sun, Mar 27, 2016 at 4:57 PM, Nitin Varyani 
wrote:

> If I do the following thing:
>
> struct pid remote_struct_pid;
> remote_struct_pid.numbers[0].nr=*my_pid*;
> p = copy_process(clone_flags, stack_start, stack_size, child_tidptr,
> *remote_struct_pid*, trace, tls);
>
> and modify the copy_process function little bit (marked in BOLD), it may
> serve my objective.
>
> if (pid != &init_struct_pid *&& pid == NULL*) {
>  retval = -ENOMEM;
>  pid = alloc_pid(p->nsproxy->pid_ns);
>  if (!pid)
>  goto bad_fork_cleanup_io;
>  }
>
>  p->pid = pid_nr(pid);
>
> The pids by kernel are allocated in the range (RESERVED_PIDS,
> PID_MAX_DEFAULT) and I will choose *my_pid* outside this range.
> I will have to modify system calls/kernel to cater to such processes.
>
>
> On Tue, Mar 22, 2016 at 3:55 PM, Bernd Petrovitsch <
> be...@petrovitsch.priv.at> wrote:
>
>> On Die, 2016-03-22 at 01:26 -0400, valdis.kletni...@vt.edu wrote:
>> > On Mon, 21 Mar 2016 16:01:41 +0530, Nitin Varyani said:
>> >
>> > > I am running a master user-level process at Computer 1 which sends a
>> > > process context like code, data, registers, PC, etc as well as
>> *"pid"* to
>> > > slave processes running at other computers. The responsibility of the
>> slave
>> > > process is to fork a new process on order of master process and
>> attach *"pid"
>> > > *given by the master to the new process it has forked. Any system
>> call on
>> > > slave nodes will have an initial check of " Whether the process
>> belongs to
>> > > local node or to the master node?". That is, if kernel at Computer 2
>> pid of
>> > > the process is 1500
>> >
>> > None of that requires actually controlling the PID of the child.
>>
>> Well, I think that the OP wants to map the PIDs with a fixed offset per
>> host. So e.g. the local PID == 14 becomes 20014 on all other nodes.
>> At least for debugging it's easier than some random mappings;-)
>>
>> As for top post: TTBOMK there is no SysCall for doing that.
>> * Perhaps one can achieve something similar with containers - one
>>   container per remote host or so (but I never used containers actively
>>   myself) or (ab)use KVM (does vServer still live?) for local
>>   "pseudo-VMs" (and use there the original PIDs - or so).
>> * The manual page of clone(2) doesn't reveal to me if it's possible to
>>   wish for a PID.
>> * You could clone (pun not intended;-) the fork() syscall and add an
>>   parameter - the PID - to it (and e.g. return -1 if it's already used).
>>
>> BTW I don't know how the rest of the kernel reacts to such artifical
>> PIDs (but you will see;-) outside the "official range".
>>
>> MfG,
>> Bernd
>> --
>> "What happens when you read some doc and either it doesn't answer your
>> question or is demonstrably wrong? In Linux, you say "Linux sucks" and
>> go read the code. In Windows/Oracle/etc you say "Windows sucks" and
>> start banging your head against the wall."- Denis Vlasenko on lkml
>>
>>
>>
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Attach my own pid

2016-03-27 Thread Nitin Varyani
If I do the following:

struct pid remote_struct_pid;
remote_struct_pid.numbers[0].nr = *my_pid*;
p = copy_process(clone_flags, stack_start, stack_size, child_tidptr,
                 *&remote_struct_pid*, trace, tls);

and modify the copy_process() function a little (the change is marked in
BOLD), it may serve my objective:

if (pid != &init_struct_pid *&& pid == NULL*) {
        retval = -ENOMEM;
        pid = alloc_pid(p->nsproxy->pid_ns);
        if (!pid)
                goto bad_fork_cleanup_io;
}

p->pid = pid_nr(pid);

The pids allocated by the kernel are in the range (RESERVED_PIDS,
PID_MAX_DEFAULT), and I will choose *my_pid* outside this range.
I will have to modify system calls/the kernel to cater to such processes.
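
For reference, the reason setting numbers[0].nr appears sufficient here is
that pid_nr() only reads that field; in kernels of this era it is
approximately:

/* From include/linux/pid.h (approximately, ~3.x kernels): */
static inline pid_t pid_nr(struct pid *pid)
{
        pid_t nr = 0;

        if (pid)
                nr = pid->numbers[0].nr;
        return nr;
}

Note, though, that alloc_pid() also hashes the struct pid into the pid hash
and sets up one struct upid per namespace level, so a hand-built struct pid
that bypasses it will not be resolvable through find_pid_ns() and friends.
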


On Tue, Mar 22, 2016 at 3:55 PM, Bernd Petrovitsch <
be...@petrovitsch.priv.at> wrote:

> On Die, 2016-03-22 at 01:26 -0400, valdis.kletni...@vt.edu wrote:
> > On Mon, 21 Mar 2016 16:01:41 +0530, Nitin Varyani said:
> >
> > > I am running a master user-level process at Computer 1 which sends a
> > > process context like code, data, registers, PC, etc as well as *"pid"*
> to
> > > slave processes running at other computers. The responsibility of the
> slave
> > > process is to fork a new process on order of master process and attach
> *"pid"
> > > *given by the master to the new process it has forked. Any system call
> on
> > > slave nodes will have an initial check of " Whether the process
> belongs to
> > > local node or to the master node?". That is, if kernel at Computer 2
> pid of
> > > the process is 1500
> >
> > None of that requires actually controlling the PID of the child.
>
> Well, I think that the OP wants to map the PIDs with a fixed offset per
> host. So e.g. the local PID == 14 becomes 20014 on all other nodes.
> At least for debugging it's easier than some random mappings;-)
>
> As for top post: TTBOMK there is no SysCall for doing that.
> * Perhaps one can achieve something similar with containers - one
>   container per remote host or so (but I never used containers actively
>   myself) or (ab)use KVM (does vServer still live?) for local
>   "pseudo-VMs" (and use there the original PIDs - or so).
> * The manual page of clone(2) doesn't reveal to me if it's possible to
>   wish for a PID.
> * You could clone (pun not intended;-) the fork() syscall and add an
>   parameter - the PID - to it (and e.g. return -1 if it's already used).
>
> BTW I don't know how the rest of the kernel reacts to such artifical
> PIDs (but you will see;-) outside the "official range".
>
> MfG,
> Bernd
> --
> "What happens when you read some doc and either it doesn't answer your
> question or is demonstrably wrong? In Linux, you say "Linux sucks" and
> go read the code. In Windows/Oracle/etc you say "Windows sucks" and
> start banging your head against the wall."- Denis Vlasenko on lkml
>
>
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Attach my own pid

2016-03-21 Thread Nitin Varyani
struct task_struct {
        volatile long state;
        void *stack;
        ...
        *pid_t pid;*
        ...
};
Do you mean to say that just mapping *pid_t pid* will do the job? Does the
Linux kernel not store the pid somewhere else while forking a child?
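
For context: besides task->pid, the task is also linked to a struct pid by
attach_pid() during copy_process(), and the pid hash resolves numbers
through that structure. A hedged sketch of that other path, using APIs as
they exist in ~3.x kernels:

#include <linux/pid.h>
#include <linux/sched.h>
#include <linux/rcupdate.h>

/* Resolve a numeric pid back to its task via the struct pid hash. */
static struct task_struct *lookup_by_nr(pid_t nr)
{
        struct task_struct *task;

        rcu_read_lock();
        task = pid_task(find_vpid(nr), PIDTYPE_PID);  /* NULL-safe */
        rcu_read_unlock();
        return task;
}
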

On Mon, Mar 21, 2016 at 4:18 PM, Pranay Srivastava 
wrote:

> Nitin,
>
>
> On Mon, Mar 21, 2016 at 4:03 PM, Nitin Varyani 
> wrote:
> > .Continued That is, if kernel at Computer 2 finds that pid of a
> > process requesting a system call is 1500, the request is forwarded to
> slave
> > daemon which in turn contacts with the master daemon. Master daemon
> requests
> > the kernel for the system call and sends the result back to slave daemon.
>
> I don't think doing this by pid is better. It might suit you currently
> but in the long run?
> If you are able to send the whole context, why not map that pid to
> your context internally instead of relying
> on pid which is also visible outside your context.
>
> >
> > On Mon, Mar 21, 2016 at 4:01 PM, Nitin Varyani  >
> > wrote:
> >>
> >> I am trying to create a distributed pid space.
> >>
> >> 0 to 2000 Computer 1
> >> 2001 to 4000 Computer 2
> >> 4001 to 6000 Computer 3
> >>
>
> your pid 2000 shouldn't have to be same pid 2000 on another node. You
> just need the context right?
>
> >> and so on...
> >>
> >> I am running a master user-level process at Computer 1 which sends a
> >> process context like code, data, registers, PC, etc as well as "pid" to
> >> slave processes running at other computers. The responsibility of the
> slave
> >> process is to fork a new process on order of master process and attach
> "pid"
> >> given by the master to the new process it has forked. Any system call on
> >> slave nodes will have an initial check of " Whether the process belongs
> to
> >> local node or to the master node?". That is, if kernel at Computer 2
> pid of
> >> the process is 1500
> >>
> >>
> >>
> >> On Mon, Mar 21, 2016 at 12:23 PM,  wrote:
> >>>
> >>> On Mon, 21 Mar 2016 10:33:44 +0530, Nitin Varyani said:
> >>>
> >>> > Sub-task 1: Until now, parent process cannot control the pid of the
> >>> > forked
> >>> > child. A pid gets assigned as a sequential number by the kernel at
> the
> >>> > time
> >>> > the process is forked . I want to modify kernel in such a way that
> >>> > parent
> >>> > process can control the pid of the forked child.
> >>>
> >>> What does controlling the pid gain you?  To what purpose?
> >>>
> >>> > Sub-task 2: On Linux, you can find the maximum PID value for your
> >>> > system
> >>> > with the following command:
> >>> >
> >>> > $ cat /proc/sys/kernel/pid_max
> >>> >
> >>> > Suppose pid_max=2000 for a system. I want that the parent process
> >>> > should be
> >>> > able to assign a pid which is greater that 2000 to the forked child.
> >>>
> >>> Again, why would you want to do that?
> >>>
> >>> Anyhow...
> >>>
> >>> echo 3000 > /proc/sys/kernel/pid_max
> >>> fork a process that gets a pid over 2000.
> >>>
> >>> Done.
> >>>
> >>> Note that on 32 bit systems, using a pid_max of over 32768 will cause
> >>> various things in /proc to blow up.
> >>>
> >>> I suspect that you need to think harder about what problem you're
> >>> actually
> >>> trying to solve here - what will you do with a controlled child PID?
> Why
> >>> does
> >>> it even matter?
> >>
> >>
> >
> >
> >
>
>
>
> --
> ---P.K.S
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Attach my own pid

2016-03-21 Thread Nitin Varyani
A representative process, that is, a process without any user stack,
register values, PC, etc., with pid *"pid"*, is maintained at the master
node. Now suppose the process which was migrated to a remote node (in this
example Computer 2), and which has process id *"pid"*, decides to fork().
fork() is a system call and is forwarded in the same way to the master
process. The master forwards the request to the representative process with
process id *"pid"*. The representative process forks, leading to a new
representative process with process id *"cpid"*. This *"cpid"* is forwarded
to the master process, which forwards it to the slave process. The slave
process forwards *"cpid"* to the remote process with process id *"pid"*.
The remote process with process id *"pid"* now forks a child and attaches
*"cpid"* to that child.

This is an overview of what I want to achieve. A small correction to my
last mail:

...Continued: That is, if the kernel at Computer 2 finds that the pid of a
process requesting a system call is 1500, the request is forwarded to the
slave daemon, which in turn contacts the master daemon. The master daemon
forwards this information to the corresponding representative process,
which requests the system call from the kernel and sends the result back to
the slave daemon.

On Mon, Mar 21, 2016 at 4:03 PM, Nitin Varyani 
wrote:

> .Continued That is, if kernel at Computer 2 finds that pid of a
> process requesting a system call is 1500, the request is forwarded to slave
> daemon which in turn contacts with the master daemon. Master daemon
> requests the kernel for the system call and sends the result back to slave
> daemon.
>
> On Mon, Mar 21, 2016 at 4:01 PM, Nitin Varyani 
> wrote:
>
>> I am trying to create a distributed pid space.
>>
>> 0 to 2000 Computer 1
>> 2001 to 4000 Computer 2
>> 4001 to 6000 Computer 3
>>
>> and so on...
>>
>> I am running a master user-level process at Computer 1 which sends a
>> process context like code, data, registers, PC, etc as well as *"pid"*
>> to slave processes running at other computers. The responsibility of the
>> slave process is to fork a new process on order of master process and
>> attach *"pid" *given by the master to the new process it has forked. Any
>> system call on slave nodes will have an initial check of " Whether the
>> process belongs to local node or to the master node?". That is, if kernel
>> at Computer 2 pid of the process is 1500
>>
>>
>>
>> On Mon, Mar 21, 2016 at 12:23 PM,  wrote:
>>
>>> On Mon, 21 Mar 2016 10:33:44 +0530, Nitin Varyani said:
>>>
>>> > Sub-task 1: Until now, parent process cannot control the pid of the
>>> forked
>>> > child. A pid gets assigned as a sequential number by the kernel at the
>>> time
>>> > the process is forked . I want to modify kernel in such a way that
>>> parent
>>> > process can control the pid of the forked child.
>>>
>>> What does controlling the pid gain you?  To what purpose?
>>>
>>> > Sub-task 2: On Linux, you can find the maximum PID value for your
>>> system
>>> > with the following command:
>>> >
>>> > $ cat /proc/sys/kernel/pid_max
>>> >
>>> > Suppose pid_max=2000 for a system. I want that the parent process
>>> should be
>>> > able to assign a pid which is greater that 2000 to the forked child.
>>>
>>> Again, why would you want to do that?
>>>
>>> Anyhow...
>>>
>>> echo 3000 > /proc/sys/kernel/pid_max
>>> fork a process that gets a pid over 2000.
>>>
>>> Done.
>>>
>>> Note that on 32 bit systems, using a pid_max of over 32768 will cause
>>> various things in /proc to blow up.
>>>
>>> I suspect that you need to think harder about what problem you're
>>> actually
>>> trying to solve here - what will you do with a controlled child PID? Why
>>> does
>>> it even matter?
>>>
>>
>>
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Attach my own pid

2016-03-21 Thread Nitin Varyani
...Continued: That is, if the kernel at Computer 2 finds that the pid of a
process requesting a system call is 1500, the request is forwarded to the
slave daemon, which in turn contacts the master daemon. The master daemon
requests the system call from the kernel and sends the result back to the
slave daemon.

On Mon, Mar 21, 2016 at 4:01 PM, Nitin Varyani 
wrote:

> I am trying to create a distributed pid space.
>
> 0 to 2000 Computer 1
> 2001 to 4000 Computer 2
> 4001 to 6000 Computer 3
>
> and so on...
>
> I am running a master user-level process at Computer 1 which sends a
> process context like code, data, registers, PC, etc as well as *"pid"* to
> slave processes running at other computers. The responsibility of the slave
> process is to fork a new process on order of master process and attach *"pid"
> *given by the master to the new process it has forked. Any system call on
> slave nodes will have an initial check of " Whether the process belongs to
> local node or to the master node?". That is, if kernel at Computer 2 pid of
> the process is 1500
>
>
>
> On Mon, Mar 21, 2016 at 12:23 PM,  wrote:
>
>> On Mon, 21 Mar 2016 10:33:44 +0530, Nitin Varyani said:
>>
>> > Sub-task 1: Until now, parent process cannot control the pid of the
>> forked
>> > child. A pid gets assigned as a sequential number by the kernel at the
>> time
>> > the process is forked . I want to modify kernel in such a way that
>> parent
>> > process can control the pid of the forked child.
>>
>> What does controlling the pid gain you?  To what purpose?
>>
>> > Sub-task 2: On Linux, you can find the maximum PID value for your system
>> > with the following command:
>> >
>> > $ cat /proc/sys/kernel/pid_max
>> >
>> > Suppose pid_max=2000 for a system. I want that the parent process
>> should be
>> > able to assign a pid which is greater that 2000 to the forked child.
>>
>> Again, why would you want to do that?
>>
>> Anyhow...
>>
>> echo 3000 > /proc/sys/kernel/pid_max
>> fork a process that gets a pid over 2000.
>>
>> Done.
>>
>> Note that on 32 bit systems, using a pid_max of over 32768 will cause
>> various things in /proc to blow up.
>>
>> I suspect that you need to think harder about what problem you're actually
>> trying to solve here - what will you do with a controlled child PID? Why
>> does
>> it even matter?
>>
>
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Attach my own pid

2016-03-21 Thread Nitin Varyani
I am trying to create a distributed pid space.

0 to 2000 Computer 1
2001 to 4000 Computer 2
4001 to 6000 Computer 3

and so on...
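
A hypothetical helper for that static partitioning (PIDS_PER_NODE and the
node numbering are assumptions, not existing kernel code):

#define PIDS_PER_NODE 2000

/* pid 1500 -> node 1, pid 2500 -> node 2, pid 4100 -> node 3, ... */
static inline int pid_owner_node(int pid)
{
        return (pid - 1) / PIDS_PER_NODE + 1;
}
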

I am running a master user-level process at Computer 1 which sends a
process context (code, data, registers, PC, etc.) as well as a *"pid"* to
slave processes running at other computers. The responsibility of the slave
process is to fork a new process on the order of the master process and
attach the *"pid"* given by the master to the new process it has forked.
Any system call on the slave nodes will have an initial check of "does the
process belong to the local node or to the master node?". That is, if the
kernel at Computer 2 finds that the pid of the process is 1500...



On Mon, Mar 21, 2016 at 12:23 PM,  wrote:

> On Mon, 21 Mar 2016 10:33:44 +0530, Nitin Varyani said:
>
> > Sub-task 1: Until now, parent process cannot control the pid of the
> forked
> > child. A pid gets assigned as a sequential number by the kernel at the
> time
> > the process is forked . I want to modify kernel in such a way that parent
> > process can control the pid of the forked child.
>
> What does controlling the pid gain you?  To what purpose?
>
> > Sub-task 2: On Linux, you can find the maximum PID value for your system
> > with the following command:
> >
> > $ cat /proc/sys/kernel/pid_max
> >
> > Suppose pid_max=2000 for a system. I want that the parent process should
> be
> > able to assign a pid which is greater that 2000 to the forked child.
>
> Again, why would you want to do that?
>
> Anyhow...
>
> echo 3000 > /proc/sys/kernel/pid_max
> fork a process that gets a pid over 2000.
>
> Done.
>
> Note that on 32 bit systems, using a pid_max of over 32768 will cause
> various things in /proc to blow up.
>
> I suspect that you need to think harder about what problem you're actually
> trying to solve here - what will you do with a controlled child PID? Why
> does
> it even matter?
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Attach my own pid

2016-03-20 Thread Nitin Varyani
I am reframing my question:
Sub-task 1: Currently, the parent process cannot control the pid of the
forked child; a pid gets assigned as a sequential number by the kernel at
the time the process is forked. I want to modify the kernel in such a way
that the parent process can control the pid of the forked child.

Sub-task 2: On Linux, you can find the maximum PID value for your system
with the following command:

$ cat /proc/sys/kernel/pid_max

Suppose pid_max=2000 on a system. I want the parent process to be able to
assign a pid greater than 2000 to the forked child.
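
For what it's worth, pid_max can already be read and changed from user
space (the quoted reply below suggests the same thing with echo); a small C
illustration, assuming root privileges for the write:

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/kernel/pid_max", "r+");
    long max;

    if (!f) {
        perror("pid_max");
        return 1;
    }
    if (fscanf(f, "%ld", &max) == 1)
        printf("current pid_max: %ld\n", max);
    rewind(f);
    fprintf(f, "%d\n", 3000);   /* set the cap, as in "echo 3000 > ..." below */
    fclose(f);
    return 0;
}
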

On Mon, Mar 21, 2016 at 12:03 AM,  wrote:

> On Sun, 20 Mar 2016 02:07:29 -0700, Nitin Varyani said:
>
> >  The linux kernel attaches a pid to newly forked process. I want to
> > create a facility by which a process has the option of attaching a new
> pid
> > to its child which is not in the pid space.
>
> Not at all sure what you mean by "not in the pid space", or what you're
> trying to achieve by doing it.
>
> But "pid namespaces" may be what you're looking for.
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Attach my own pid

2016-03-20 Thread Nitin Varyani
Hi,
 The Linux kernel attaches a pid to a newly forked process. I want to
create a facility by which a process has the option of attaching to its
child a new pid which is not in the pid space.
  Any suggestions on how this can be achieved?
Nitin
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


stack pointer

2016-03-06 Thread Nitin Varyani
struct task_struct {
        volatile long state;    /* -1 unrunnable, 0 runnable, >0 stopped */
        *void *stack;*
        atomic_t usage;
        unsigned int flags;     /* per process flags, defined below */
        unsigned int ptrace;
        ...

What does the field void *stack indicate here?
Is it the pointer to the kernel stack of the process?

Where is the stack pointer for the current process stored in Linux?
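
As far as I understand it (hedged, based on ~3.x x86 sources): ->stack
points to the base of the task's kernel stack (the region that also holds
thread_info), and the saved kernel stack pointer of a task that is not
currently running lives in task->thread.sp. A tiny kernel-side sketch:

#include <linux/sched.h>
#include <linux/printk.h>

/* Print the two "stack" notions for a task (x86, ~3.x kernels). */
static void show_stack_fields(struct task_struct *p)
{
        pr_info("%s/%d: kernel stack base=%p, saved sp=%lx\n",
                p->comm, p->pid, p->stack, p->thread.sp);
}
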
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: remote system call

2016-03-05 Thread Nitin Varyani
The code bases are huge and the documentation is negligible. How can I
separate what I want to achieve from such a big code base?

On Thu, Mar 3, 2016 at 10:07 PM, Mulyadi Santosa 
wrote:

>
>
> On Thu, Mar 3, 2016 at 6:12 PM, Nitin Varyani 
> wrote:
>
>> Hi,
>>   I want to migrate user context of a process to a remote machine
>> (i.e. registers, code, data, virtual memory and program counter) and when
>> it makes a system call or file i/o, I want to send that request to its home
>> node.
>>
>> That is, the user process executing at remote node will copy desired
>> system call number to %eax of home node and will execute 'int 0x80'. This
>> will generate interrupt 0x80 which should be sent to home node and an
>> interrupt service routine at home node will be called. This routine will
>> execute in ring 0 of home node.
>>
>> A portion of process context which is system dependent has to be kept at
>> the home node.
>>
>> That is, link to open files and link to kernel stack.
>>
>> For eg: the following portion of the task_struct has to be kept at home
>> node
>> /* filesystem information */
>> struct fs_struct *fs;
>> /* open file information */
>> struct files_struct *files;
>>
>>
>>
>> Is it feasible? Can someone show some more light into it?
>>
>> Nitin
>>
>>
>>
> Feasible, yes.
>
> Try to check the source code of MOSIX/OpenMosix or OpenSSI.
>
> Kerrighed is another project which done similar thing too.
>
>
> --
> regards,
>
> Mulyadi Santosa
> Freelance Linux trainer and consultant
>
> blog: the-hydra.blogspot.com
> training: mulyaditraining.blogspot.com
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


remote system call

2016-03-03 Thread Nitin Varyani
Hi,
  I want to migrate the user context of a process to a remote machine (i.e.
registers, code, data, virtual memory and program counter), and when it
makes a system call or does file I/O, I want to send that request to its
home node.

That is, the user process executing at the remote node will copy the
desired system call number into %eax of the home node and execute 'int
0x80'. This will generate interrupt 0x80, which should be sent to the home
node, where an interrupt service routine will be called. This routine will
execute in ring 0 on the home node.
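
For reference, the legacy entry path being described looks like this from
user space when run locally (20 is __NR_getpid in the i386 table; on a
64-bit kernel this needs 32-bit emulation support enabled):

#include <stdio.h>

int main(void)
{
    long ret;

    /* syscall number goes in %eax, "int $0x80" traps into the kernel,
     * and the result comes back in %eax. */
    __asm__ volatile ("int $0x80"
                      : "=a" (ret)
                      : "a" (20L));   /* __NR_getpid, i386 ABI */
    printf("getpid via int 0x80: %ld\n", ret);
    return 0;
}
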

A portion of the process context which is system dependent has to be kept
at the home node, that is, the link to open files and the link to the
kernel stack.

For example, the following portion of the task_struct has to be kept at the
home node:
/* filesystem information */
struct fs_struct *fs;
/* open file information */
struct files_struct *files;



Is it feasible? Can someone shed some more light on this?

Nitin
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Distributed Process Scheduling Algorithm

2016-02-17 Thread Nitin Varyani
@Greg: Since I am very new to the field, with a huge task in hand and a
short time span of 3 months for this project, I am looking for specific
directions from the Linux experts to work on. As far as effort is
concerned, I am putting many hours into researching this area, and I do not
mind telling my professor this. Still, I am always looking to improve. I
will try to put in more effort and seek as little help as possible. I hope
you will not mind my reply.
Thanks.

On Wed, Feb 17, 2016 at 9:02 PM, Greg KH  wrote:
> On Wed, Feb 17, 2016 at 04:05:17PM +0530, Nitin Varyani wrote:
>> Rather than trying to go blind folded in getting practical experience
>> of linux programming, I want to gain experience only in relation to my
>> task of creating a distributed process scheduler. What all things
>> should I try to work with to understand the kernel CFS scheduler well?
>> Please provide sufficient literature for the practical work.
>> Also what is the best place to learn about implementing linux containers?
>
> Why are you asking other people to do your research work for you?
> That's pretty rude, does your professor know this is what you are doing?
>
> greg k-h

___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Distributed Process Scheduling Algorithm

2016-02-17 Thread Nitin Varyani
Having got some clarity on what I have to do, I now want to proceed with
step-by-step development. All I know about the Linux kernel is a
theoretical understanding of its various components (from Robert Love's
book), but as far as practical work is concerned, I know the following
things:
1) Linking modules to the kernel dynamically at run time (outside the
source tree and inside the source tree)
2) Adding system calls

Rather than going blindfolded into getting practical experience of Linux
programming, I want to gain experience only in relation to my task of
creating a distributed process scheduler. What should I work with to
understand the kernel CFS scheduler well? Please provide sufficient
literature for the practical work.
Also, what is the best place to learn about implementing Linux containers?


On Wed, Feb 17, 2016 at 11:40 AM,   wrote:
> On Wed, 17 Feb 2016 10:21:35 +0530, Nitin Varyani said:
>
>> Actually it is a master's thesis research project as of now. I am ready to
>> boil down to the most basic implementation of distributed linux kernel.
>> Assume there is no network connection and no open files. We can drop even
>> more assumptions if it becomes complicated. Once this basic implementation
>> is successful, we can go ahead with a more complicated version. The next
>> task is to integrate the migration code in the linux kernel. What is the
>> most easy way of implementing it.
>
> If you get it to where you can migrate a process on command controlled by
> a userspace process, the scheduler part will be trivial.
>
> And note that the choice of which process to migrate where is sufficiently
> "policy" that it belongs in userspace - see how cgroups and containers are
> kernel mechanisms that are controlled by userspace.  You want to follow that
> model if you intend for this to be upstreamed rather than just another dead
> master's thesis.

___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Process scheduling

2016-02-16 Thread Nitin Varyani
It is similar to openMosix but still quite different. openMosix is built on
top of existing Linux kernels; the scheduling is taken care of by the
existing Linux kernels, and openMosix is responsible for workload
distribution. This project is the first of its kind.

On Wed, Feb 17, 2016 at 11:40 AM, Mulyadi Santosa  wrote:

>
>
> On Mon, Feb 15, 2016 at 7:37 PM, Nitin Varyani 
> wrote:
>
>>
>>
>> On Mon, Feb 15, 2016 at 6:06 PM, Nitin Varyani 
>> wrote:
>>
>>> Hi
>>> I have studied LInux kernel CFS scheduling algorithm - the
>>> vruntime, weights, nice value, etc. I am able to understand the code.
>>>  Actually the task given to me is really very huge. I am told to design
>>> a distributed process scheduling algorithm. A very simple implementation of
>>> it will be sufficient for me. Current distributed OS are patch work over
>>> the linux kernels, that is, they are responsible for load balancing through
>>> process migration but the scheduling is taken care by the single machine
>>> linux kernels. My task is to make the scheduling algorithm itself as
>>> distributed. That is a scheduler makes a decision whether to migrate a task
>>> or to keep the task in the current system.  I need some design aspects of
>>> how to achieve it. Another thing which I want to know is that whether this
>>> job is possible for a kernel newbie like me.
>>>
>>> On Sat, Feb 13, 2016 at 3:12 PM, Nitin Varyani >> > wrote:
>>>
>>>> thanks
>>>>
>>>> On Sat, Feb 13, 2016 at 2:19 PM, Henrik Austad 
>>>> wrote:
>>>>
>>>>> On Sat, Feb 13, 2016 at 11:42:57AM +0530, Nitin Varyani wrote:
>>>>> > Hello,
>>>>>
>>>>> Hi Nitin,
>>>>>
>>>>> >  I want to understand the flow of code of process scheduler
>>>>> of
>>>>> > linux kernel. What I have understood is that
>>>>> > The task marks itself as sleeping,
>>>>> > puts itself on a wait queue,
>>>>> > removes itself from the red-black tree of runnable, and
>>>>> > calls schedule() to select a new process to execute.
>>>>> >
>>>>> > for Waking back up
>>>>> > The task is set as runnable,
>>>>> > removed from the wait queue,
>>>>> > and added back to the red-black tree.
>>>>> >
>>>>> > Can I get the details of which function does what? in sched/core.c
>>>>> and in
>>>>> > sched/fair.c
>>>>> > I am concerned only with fair scheduler. There are so many functions
>>>>> in
>>>>> > these two files that I am totally confused.
>>>>>
>>>>> Then core.c and fair.c is the best bet.
>>>>>
>>>>> You could also pick up a copy of Linux kernel development (By Love), it
>>>>> gives a nice introduction to the overall flow of .. well mostly
>>>>> everything.
>>>>> :)
>>>>>
>>>>> In kernel/sched/sched.h you have a struct called 'struct sched_class"
>>>>> which
>>>>> is a set of function-points. This is used by the core machinery to call
>>>>> into scheduling-class specific code. At the bottom of fair.c, you see
>>>>> said
>>>>> struct being populated.
>>>>>
>>>>> Also, if you want to see what really happens, try enabling
>>>>> function-tracing, but limit it to sched-functions only (and
>>>>> sched-events,
>>>>> those are also useful to see what triggers things)
>>>>>
>>>>> mount -t debugfs nodev /sys/kernel/debug
>>>>> cd /sys/kernel/debug/tracing
>>>>> echo 0 > tracing_on
>>>>> echo function > current_tracer
>>>>> echo "sched*" > set_ftrace_filter
>>>>> echo 1 > events/sched/enable
>>>>> echo 1 > tracing_on
>>>>> ... wait for a few secs
>>>>> echo 0 > tracing_on
>>>>>
>>>>> cat trace > /tmp/trace.txt
>>>>>
>>>>> Now, look at trace.txt and correlate it to the scheduler code :)
>>>>>
>>>>> Good luck!
>>>>>
>>>>> --
>>>>> Henrik Austad
>>>>>
>>>>
>>>>
>>>
>>
>>
>
> Please don't top post :) Use bottom post .
>
> Sounds like what you're going to do is highly similar to openMosix. check
> their source code.
>
> Please note that openmosix is patch against 2.4.x linux kernel. When
> they're about to made it compatible to 2.6.x, the project stalls. See Linux
> IPMI project and see if you can help them out
>
> --
> regards,
>
> Mulyadi Santosa
> Freelance Linux trainer and consultant
>
> blog: the-hydra.blogspot.com
> training: mulyaditraining.blogspot.com
>
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Distributed Process Scheduling Algorithm

2016-02-16 Thread Nitin Varyani
> if you say "no network connections" and "no open files",
> the problem gets a lot easier - but also quickly devolving into a
> master's thesis research project rather than anything useful

Actually, it is a master's thesis research project as of now. I am ready to
boil it down to the most basic implementation of a distributed Linux
kernel. Assume there are no network connections and no open files; we can
drop even more assumptions if it becomes complicated. Once this basic
implementation is successful, we can go ahead with a more complicated
version. The next task is to integrate the migration code into the Linux
kernel. What is the easiest way of implementing it?

On Tue, Feb 16, 2016 at 10:05 PM,  wrote:

> On Tue, 16 Feb 2016 09:42:52 +0100, Dominik Dingel said:
>
> > I wouldn't see things that dark. Also this is an interesting puzzle.
>
> Just pointing out *very real* issues that will require solution, unless
> you add strict bounds like "cannot be using network connections".
>
> Heck, even open files get interesting.  How do you ensure that the
> file descriptor returned by mkstemp() remains valid? (The *really*
> ugly case is programs that do a mkstemp() and then unlink() the result,
> confident that the kernel will clean up when the process exits, as
> there is no longer a file system object to reference
>
> Of course, if you say "no network connections" and "no open files",
> the problem gets a lot easier - but also quickly devolving into a
> master's thesis research project rather than anything useful
>
> Bottom line:  Don't even *think* about changing the scheduler etc
> until you have a functional way to actually move the process.  Doesn't
> matter if you use a kvm approach, or containers, or whatever - if
> you can't do the migrate, you can't even *test* your code that decides
> which process to migrate.
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Distributed Process Scheduling Algorithm

2016-02-16 Thread Nitin Varyani
The essence of the discussion is this:

We can run each process in a container and migrate the container itself.
Migration can be done based on work stealing. As far as communication
between processes in different containers is concerned, can't we use
sockets?

On Tue, Feb 16, 2016 at 3:16 PM, Nitin Varyani 
wrote:

> According to my project requirement, I need a distributed algorithm so
> mesos will not work. But work stealing is the best bargain. It will save
> communication costs. Thankyou. Can you please elaborate on the last part of
> your reply?
>
> On Tue, Feb 16, 2016 at 2:12 PM, Dominik Dingel  > wrote:
>
>> On Tue, 16 Feb 2016 00:13:34 -0500
>> valdis.kletni...@vt.edu wrote:
>>
>> > On Tue, 16 Feb 2016 10:18:26 +0530, Nitin Varyani said:
>> >
>> > > 1) Sending process context via network
>> >
>> > Note that this is a non-trivial issue by itself.  At a *minimum*,
>> > you'll need all the checkpoint-restart code.  Plus, if the process
>> > has any open TCP connections, *those* have to be migrated without
>> > causing a security problem.  Good luck on figuring out how to properly
>> > route packets in this case - consider 4 nodes 10.0.0.1 through 10.0.0.4,
>> > you migrate a process from 10.0.0.1 to 10.0.0.3,  How do you make sure
>> > *that process*'s packets go to 0.3 while all other packets still go to
>> > 0.1.  Also, consider the impact this may have on iptables, if there is
>> > a state=RELATED,CONNECTED on 0.1 - that info needs to be relayed to 0.3
>> > as well.
>> >
>> > For bonus points, what's the most efficient way to transfer a large
>> > process image (say 500M, or even a bloated Firefox at 3.5G), without
>> > causing timeouts while copying the image?
>> >
>> > I hope your research project is *really* well funded - you're going
>> > to need a *lot* of people (Hint - find out how many people work on
>> > VMWare - that should give you a rough idea)
>>
>> I wouldn't see things that dark. Also this is an interesting puzzle.
>>
>> To migrate processes I would pick an already existing solution.
>> Like there is for container. So every process should be, if possible, in
>> a container.
>> To migrate them efficiently without having some distributed shared memory,
>> you might want to look at userfaultfd.
>>
>> So now back to the scheduling, I do not think that every node should keep
>> track
>> of every process on every other node, as this would mean a massive need
>> for
>> communication and hurt scalability. So either you would implement
>> something like work stealing or go for a central entity like mesos. Which
>> could do process/job/container scheduling for you.
>>
>> There are now two pitfalls which are hard enough on their own:
>> - interprocess communication between two process with something different
>> than a socket
>>   in such an case you would probably need to merge the two distinct
>> containers
>>
>> - dedicated hardware
>>
>> Dominik
>>
>>
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Distributed Process Scheduling Algorithm

2016-02-16 Thread Nitin Varyani
According to my project requirements, I need a distributed algorithm, so
Mesos will not work. But work stealing is the best bargain: it will save
communication costs. Thank you. Can you please elaborate on the last part
of your reply?

On Tue, Feb 16, 2016 at 2:12 PM, Dominik Dingel 
wrote:

> On Tue, 16 Feb 2016 00:13:34 -0500
> valdis.kletni...@vt.edu wrote:
>
> > On Tue, 16 Feb 2016 10:18:26 +0530, Nitin Varyani said:
> >
> > > 1) Sending process context via network
> >
> > Note that this is a non-trivial issue by itself.  At a *minimum*,
> > you'll need all the checkpoint-restart code.  Plus, if the process
> > has any open TCP connections, *those* have to be migrated without
> > causing a security problem.  Good luck on figuring out how to properly
> > route packets in this case - consider 4 nodes 10.0.0.1 through 10.0.0.4,
> > you migrate a process from 10.0.0.1 to 10.0.0.3,  How do you make sure
> > *that process*'s packets go to 0.3 while all other packets still go to
> > 0.1.  Also, consider the impact this may have on iptables, if there is
> > a state=RELATED,CONNECTED on 0.1 - that info needs to be relayed to 0.3
> > as well.
> >
> > For bonus points, what's the most efficient way to transfer a large
> > process image (say 500M, or even a bloated Firefox at 3.5G), without
> > causing timeouts while copying the image?
> >
> > I hope your research project is *really* well funded - you're going
> > to need a *lot* of people (Hint - find out how many people work on
> > VMWare - that should give you a rough idea)
>
> I wouldn't see things that dark. Also this is an interesting puzzle.
>
> To migrate processes I would pick an already existing solution.
> Like there is for container. So every process should be, if possible, in a
> container.
> To migrate them efficiently without having some distributed shared memory,
> you might want to look at userfaultfd.
>
> So now back to the scheduling, I do not think that every node should keep
> track
> of every process on every other node, as this would mean a massive need for
> communication and hurt scalability. So either you would implement
> something like work stealing or go for a central entity like mesos. Which
> could do process/job/container scheduling for you.
>
> There are now two pitfalls which are hard enough on their own:
> - interprocess communication between two process with something different
> than a socket
>   in such an case you would probably need to merge the two distinct
> containers
>
> - dedicated hardware
>
> Dominik
>
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Distributed Process Scheduling Algorithm

2016-02-15 Thread Nitin Varyani
No doubt it is really interesting. It is a research project related to HPC
clusters. As of now I am planning only to make the process scheduling
algorithm distributed. Linux has already implemented SMP scheduling using
the Completely Fair Scheduler, and I was thinking of extending it for
distributed systems. Two things need to be added to it (see the sketch
after this list):
1) Sending the process context over the network
2) Maintaining a table at each node which stores the load of each remote
node; this table will be used to decide whether or not to send a process
context over the network.
Thanks for your kind help.
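
A hypothetical sketch of that per-node load table and the migrate-or-keep
decision (all names, fields and the threshold policy here are assumptions,
not existing kernel code):

#include <linux/spinlock.h>
#include <linux/ktime.h>

#define MAX_NODES 64

struct remote_node_load {
        int node_id;                    /* cluster-wide node identifier */
        unsigned long runnable_tasks;   /* last reported run-queue length */
        ktime_t last_update;            /* when the report was received */
};

static struct remote_node_load load_table[MAX_NODES];
static DEFINE_SPINLOCK(load_table_lock);

/* Pick a migration target only if some remote node is clearly less
 * loaded than we are; -1 means keep the task on the local node. */
static int pick_migration_target(unsigned long local_load)
{
        int i, target = -1;

        spin_lock(&load_table_lock);
        for (i = 0; i < MAX_NODES; i++)
                if (load_table[i].runnable_tasks + 2 < local_load)
                        target = load_table[i].node_id;
        spin_unlock(&load_table_lock);
        return target;
}
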


On Mon, Feb 15, 2016 at 10:22 PM, Henrik Austad  wrote:

> On Mon, Feb 15, 2016 at 09:35:28PM +0530, Nitin Varyani wrote:
> >  Hi,
>
> Hi Nitin,
>
> > I am given a task to design a distributed process scheduling algorithm.
> > Current distributed OS are patch work over the linux kernels, that is,
> they
> > are responsible for load balancing through process migration but the
> > scheduling is taken care by the single machine linux kernels.
>
> Hmm, are you talking about HPC clusters or other large machines here? I'm
> not familiar with this, so a few references to existing designs would be
> appreciated.
>
> > My task is to make the scheduling algorithm itself as distributed.
>
> Apart from my comment below, it sounds like a really interesting project.
> Is this a research-project or something commercial?
>
> > That is a scheduler itself makes a decision whether to migrate a task or
> > to keep the task in the current system.  I need some design aspects of
> > how to achieve it. Another thing which I want to know is that whether
> > this job is possible for a kernel newbie like me. Need urgent help. Nitin
>
> Uhm, ok. I think this is _way_ outside the scope of Kernelnewbies, and it
> is definitely not a newbie project.
>
> If you are really serious about this, I'd start with listing all the
> different elements you need to share and then an initial idea as to how to
> share those between individual systems. I have an inkling that you'll find
> out quite fast as to why the current kernel does not support this out of
> the box.
>
> --
> Henrik Austad
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Distributed Process Scheduling Algorithm

2016-02-15 Thread Nitin Varyani
 Hi,
I have been given the task of designing a distributed process scheduling
algorithm. Current distributed OSes are patchwork on top of the Linux
kernel; that is, they are responsible for load balancing through process
migration, but the scheduling is handled by the single-machine Linux
kernels. My task is to make the scheduling algorithm itself distributed:
the scheduler itself makes the decision whether to migrate a task or to
keep it on the current system. I need some design guidance on how to
achieve this. Another thing I want to know is whether this job is possible
for a kernel newbie like me. I need urgent help.
Nitin
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Process scheduling

2016-02-15 Thread Nitin Varyani
On Mon, Feb 15, 2016 at 6:06 PM, Nitin Varyani 
wrote:

> Hi
> I have studied LInux kernel CFS scheduling algorithm - the
> vruntime, weights, nice value, etc. I am able to understand the code.
>  Actually the task given to me is really very huge. I am told to design a
> distributed process scheduling algorithm. A very simple implementation of
> it will be sufficient for me. Current distributed OS are patch work over
> the linux kernels, that is, they are responsible for load balancing through
> process migration but the scheduling is taken care by the single machine
> linux kernels. My task is to make the scheduling algorithm itself as
> distributed. That is a scheduler makes a decision whether to migrate a task
> or to keep the task in the current system.  I need some design aspects of
> how to achieve it. Another thing which I want to know is that whether this
> job is possible for a kernel newbie like me.
>
> On Sat, Feb 13, 2016 at 3:12 PM, Nitin Varyani 
> wrote:
>
>> thanks
>>
>> On Sat, Feb 13, 2016 at 2:19 PM, Henrik Austad  wrote:
>>
>>> On Sat, Feb 13, 2016 at 11:42:57AM +0530, Nitin Varyani wrote:
>>> > Hello,
>>>
>>> Hi Nitin,
>>>
>>> >  I want to understand the flow of code of process scheduler of
>>> > linux kernel. What I have understood is that
>>> > The task marks itself as sleeping,
>>> > puts itself on a wait queue,
>>> > removes itself from the red-black tree of runnable, and
>>> > calls schedule() to select a new process to execute.
>>> >
>>> > for Waking back up
>>> > The task is set as runnable,
>>> > removed from the wait queue,
>>> > and added back to the red-black tree.
>>> >
>>> > Can I get the details of which function does what? in sched/core.c and
>>> in
>>> > sched/fair.c
>>> > I am concerned only with fair scheduler. There are so many functions in
>>> > these two files that I am totally confused.
>>>
>>> Then core.c and fair.c is the best bet.
>>>
>>> You could also pick up a copy of Linux kernel development (By Love), it
>>> gives a nice introduction to the overall flow of .. well mostly
>>> everything.
>>> :)
>>>
>>> In kernel/sched/sched.h you have a struct called 'struct sched_class"
>>> which
>>> is a set of function-points. This is used by the core machinery to call
>>> into scheduling-class specific code. At the bottom of fair.c, you see
>>> said
>>> struct being populated.
>>>
>>> Also, if you want to see what really happens, try enabling
>>> function-tracing, but limit it to sched-functions only (and sched-events,
>>> those are also useful to see what triggers things)
>>>
>>> mount -t debugfs nodev /sys/kernel/debug
>>> cd /sys/kernel/debug/tracing
>>> echo 0 > tracing_on
>>> echo function > current_tracer
>>> echo "sched*" > set_ftrace_filter
>>> echo 1 > events/sched/enable
>>> echo 1 > tracing_on
>>> ... wait for a few secs
>>> echo 0 > tracing_on
>>>
>>> cat trace > /tmp/trace.txt
>>>
>>> Now, look at trace.txt and correlate it to the scheduler code :)
>>>
>>> Good luck!
>>>
>>> --
>>> Henrik Austad
>>>
>>
>>
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Process scheduling

2016-02-13 Thread Nitin Varyani
thanks

On Sat, Feb 13, 2016 at 2:19 PM, Henrik Austad  wrote:

> On Sat, Feb 13, 2016 at 11:42:57AM +0530, Nitin Varyani wrote:
> > Hello,
>
> Hi Nitin,
>
> >  I want to understand the flow of code of process scheduler of
> > linux kernel. What I have understood is that
> > The task marks itself as sleeping,
> > puts itself on a wait queue,
> > removes itself from the red-black tree of runnable, and
> > calls schedule() to select a new process to execute.
> >
> > for Waking back up
> > The task is set as runnable,
> > removed from the wait queue,
> > and added back to the red-black tree.
> >
> > Can I get the details of which function does what? in sched/core.c and in
> > sched/fair.c
> > I am concerned only with fair scheduler. There are so many functions in
> > these two files that I am totally confused.
>
> Then core.c and fair.c is the best bet.
>
> You could also pick up a copy of Linux kernel development (By Love), it
> gives a nice introduction to the overall flow of .. well mostly everything.
> :)
>
> In kernel/sched/sched.h you have a struct called 'struct sched_class" which
> is a set of function-points. This is used by the core machinery to call
> into scheduling-class specific code. At the bottom of fair.c, you see said
> struct being populated.
>
> Also, if you want to see what really happens, try enabling
> function-tracing, but limit it to sched-functions only (and sched-events,
> those are also useful to see what triggers things)
>
> mount -t debugfs nodev /sys/kernel/debug
> cd /sys/kernel/debug/tracing
> echo 0 > tracing_on
> echo function > current_tracer
> echo "sched*" > set_ftrace_filter
> echo 1 > events/sched/enable
> echo 1 > tracing_on
> ... wait for a few secs
> echo 0 > tracing_on
>
> cat trace > /tmp/trace.txt
>
> Now, look at trace.txt and correlate it to the scheduler code :)
>
> Good luck!
>
> --
> Henrik Austad
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Process scheduling

2016-02-12 Thread Nitin Varyani
Hello,
 I want to understand the flow of the process scheduler code in the
Linux kernel. What I have understood is that, to sleep:
- the task marks itself as sleeping,
- puts itself on a wait queue,
- removes itself from the red-black tree of runnable tasks, and
- calls schedule() to select a new process to execute.

For waking back up:
- the task is set as runnable,
- removed from the wait queue,
- and added back to the red-black tree.

Can I get the details of which function does what in sched/core.c and
sched/fair.c? I am concerned only with the fair scheduler. There are so
many functions in these two files that I am totally confused.
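
The flow described above corresponds to the classic wait-queue pattern; a
hedged sketch (the queue and condition names are illustrative) that can be
correlated with prepare_to_wait()/finish_wait() and the scheduler core:

#include <linux/wait.h>
#include <linux/sched.h>

static DECLARE_WAIT_QUEUE_HEAD(demo_wq);
static int condition_flag;

static void demo_sleep(void)
{
        DEFINE_WAIT(wait);

        /* mark ourselves sleeping and enqueue on the wait queue
         * (real code would loop and re-check the condition) */
        prepare_to_wait(&demo_wq, &wait, TASK_INTERRUPTIBLE);
        if (!condition_flag)
                schedule();     /* leaves the runnable rb-tree; another task runs */
        finish_wait(&demo_wq, &wait);   /* back to TASK_RUNNING, off the queue */
}

static void demo_wake(void)
{
        condition_flag = 1;
        wake_up(&demo_wq);      /* sets sleepers runnable, back into the rb-tree */
}
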
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Process Scheduling

2016-02-08 Thread Nitin Varyani
Hi,
  I am new to the kernel source. I want to plug in a new process scheduling
algorithm. Can someone elaborate on the steps to do it?
Nitin
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies