Re: Linux Kernel Networking document (free, 178-page doc)

2013-02-02 Thread Shubham Sharma
Hi,

I understand that ext2 and ext3 are kind of obsolete now. But AFAIK, there
is not much difference between ext3 and ext4.

Moreover, for a newbie it is better to start with ext3. What do you think?

Regards
Shubham

On Fri, Feb 1, 2013 at 2:15 AM, Rami Rosen roszenr...@gmail.com wrote:

 Hi,
 Have you considered starting with ext4?
 It seems that ext3 and ext2 are a bit out of fashion.

 Regards,
 Rami Rosen
 http://ramirose.wix.com/ramirosen


 On Thu, Jan 31, 2013 at 8:58 PM, shubham kernel.shub...@gmail.com wrote:
  Thanks Rami,
 
  I am also trying to understand ext3 and write a document about it.
 
  Regards
  Shubham
 
 
  On 31-Jan-13 12:51 AM, Rami Rosen wrote:
 
  Hi,
  I will try to write something for Linux filesystems (and maybe for
  other subsystems), but this will probably take a lot of time.
 
  Regards,
  Rami Rosen
  http://ramirose.wix.com/ramirosen
 
 
  On Wed, Jan 30, 2013 at 5:44 PM, shubham kernel.shub...@gmail.com wrote:
 
  Thanks for sharing the document.
 
  I hope we could have such documents for other subsystems as well.
 
  Regards
  Shubham
 
 
  On 28-Jan-13 10:23 PM, Rami Rosen wrote:
 
  Hi everyone,
  You can find here an up-to-date and detailed PDF document (178
  pages) about Linux Kernel Networking, going deep into design and
  implementation details as well as the theory behind it:
  http://media.wix.com/ugd//295986_931b8bcf34d93419d46e05b5aa5d0216.pdf
 
  I believe that developers/sysadmins/researchers/students may find it
  helpful.
 
 
  regards,
  Rami Rosen
 
  http://ramirose.wix.com/ramirosen
 

___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


kernel stack memory

2012-09-13 Thread shubham sharma
Hi,

As far as I know, the size of the stack allocated in kernel space is
8 KB for each process. But if I use more than 8 KB of memory from
the stack, what will happen? I think that in that case the system
would crash because I am accessing an illegal memory area. I wrote a
kernel module in which I defined an integer array of size 8000.
But it still did not crash my system. Why?

The module i wrote was as follows:

#include <linux/kernel.h>
#include <linux/module.h>

int __init init_my_module(void)
{
	int arr[8000];
	printk("%s:%d\tmodule initialized\n", __func__, __LINE__);
	arr[1] = 1;
	arr[4000] = 1;
	arr[7999] = 1;
	printk("%s:%d\tarr[1]:%d, arr[4000]:%d, arr[7999]:%d\n", __func__,
		__LINE__, arr[1], arr[4000], arr[7999]);
	return 0;
}

void __exit cleanup_my_module(void)
{
	printk("exiting\n");
	return;
}

module_init(init_my_module);
module_exit(cleanup_my_module);

MODULE_LICENSE("GPL");
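
For scale, 8000 ints is roughly 32 KB, well beyond the usual 8 KB kernel
stack. If the goal is simply to hold a large buffer in the kernel, one
common alternative is to take it off the stack with kmalloc(); a minimal,
untested sketch of the same module written that way (reusing the names
above) would be:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>		/* kmalloc()/kfree() */

int __init init_my_module(void)
{
	/* 8000 * sizeof(int) is about 32 KB, far more than the 8 KB
	 * kernel stack, so allocate the buffer from the heap instead. */
	int *arr = kmalloc(8000 * sizeof(*arr), GFP_KERNEL);

	if (!arr)
		return -ENOMEM;

	printk("%s:%d\tmodule initialized\n", __func__, __LINE__);
	arr[1] = 1;
	arr[4000] = 1;
	arr[7999] = 1;
	printk("%s:%d\tarr[1]:%d, arr[4000]:%d, arr[7999]:%d\n", __func__,
		__LINE__, arr[1], arr[4000], arr[7999]);

	kfree(arr);
	return 0;
}

void __exit cleanup_my_module(void)
{
	printk("exiting\n");
}

module_init(init_my_module);
module_exit(cleanup_my_module);

MODULE_LICENSE("GPL");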

___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


query regarding inode pages

2011-07-13 Thread shubham sharma
I am trying to write a memory-based file system. The file system is
intended to create files/directories and write their contents only to
pages. For this I have used the grab_cache_page() function to get a
new locked page in case the page does not exist in the radix tree of
the inode.

As the filesystem is memory-based and all the data exists only in
memory, I don't release the lock on the page, as I fear that the
pdflush daemon could swap out the pages to which I have written data
and I might never see that data again. I unlock all the pages of all
the inodes of the file system in the kill_sb function once the file
system is unmounted. But the problem I am facing is that if I open a
file to which I have already written something (and whose pages are
locked), the open system call in turn calls the __lock_page() function,
which waits for the pages belonging to the inode to be unlocked.
Hence the system call stalls indefinitely. I want to know whether there
is a mechanism by which I can prevent the pdflush daemon from swapping
out the pages that my filesystem refers to.

From the limited knowledge I have of the pdflush daemon, I guess that
it searches for dirty inodes in the list maintained per superblock. So
if I remove all the inodes from this list (or never add the inodes I
create to it), will I be able to prevent the daemon from flushing data
from my file system?
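
One pattern worth comparing against is how ramfs-style filesystems handle
this: the page lock is held only while the page is being set up, and the
data survives simply because the page stays in the page cache, up to date
and dirty, with no writeback path behind it. A minimal, untested sketch
for a 2.6.18-era kernel (my_fill_page and its arguments are hypothetical
names, not taken from this post; it assumes len <= PAGE_CACHE_SIZE):

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/highmem.h>
#include <linux/string.h>

static int my_fill_page(struct inode *inode, unsigned long index,
			const char *data, size_t len)
{
	struct address_space *mapping = inode->i_mapping;
	struct page *page;
	void *kaddr;

	page = grab_cache_page(mapping, index);	/* returned locked */
	if (!page)
		return -ENOMEM;

	kaddr = kmap(page);
	memcpy(kaddr, data, len);
	kunmap(page);

	SetPageUptodate(page);
	set_page_dirty(page);		/* data is kept in the page cache */
	unlock_page(page);		/* do not hold the lock across open() */
	page_cache_release(page);	/* drop grab_cache_page()'s reference */
	return 0;
}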

___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: query regarding inode pages

2011-07-13 Thread shubham sharma
Hi Joel,

On Wed, Jul 13, 2011 at 9:28 PM, Joel A Fernandes agnel.j...@gmail.com wrote:
 [CC'ing list]

 Hi Shubham,

 I am not very familiar with the code for pdflush. But wasn't it
 superseded by bdflush (or similar) in recent kernels?

I don't know about the bdflush daemon, but I guess that the pdflush
daemon has been superseded in recent kernels. I am working on the 2.6.18
kernel.


 On Wed, Jul 13, 2011 at 10:45 AM, shubham sharma shubham20...@gmail.com wrote:
 I am trying to write a memory-based file system. The file system is
 intended to create files/directories and write their contents only to
 pages. For this I have used the grab_cache_page() function to get a
 new locked page in case the page does not exist in the radix tree of
 the inode.

 As the filesystem is memory-based and all the data exists only in
 memory, I don't release the lock on the page, as I fear that the
 pdflush daemon could swap out the pages to which I have written data
 and I might never see that data again. I unlock all the pages of all
 the inodes of the file system in the kill_sb function once the file
 system is unmounted. But the problem I am facing is that if I open a
 file to which I have already written something (and whose pages are
 locked), the open system call in turn calls the __lock_page() function,
 which waits for the pages belonging to the inode to be unlocked.
 Hence the system call stalls indefinitely. I want to know whether there
 is a mechanism by which I can prevent the pdflush daemon from swapping
 out the pages that my filesystem refers to.

 I'm not sure if pdflush is what swaps pages? Isn't that the role of
 kswapd? pdflush AFAIK just writes dirty pages back to the backing
 device.

 I think what you're referring to is a certain page replacement
 behavior in low memory conditions that you want to protect your
 filesystem against.

Yes, you got it right. I want to protect my filesystem against
low-memory conditions.

 I will be interested in responses from others
 about this, and will dig into the code outside office hours.

 Maybe tmpfs is a good reference for your work?

Thanks for the lead. I will dig into that now.

 Thanks,
 Joel

Thanks,
Shubham
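
For what it's worth, the way ramfs-style filesystems keep writeback away
from their pages in 2.6-era kernels is to mark the inode mapping's
backing_dev_info as having no writeback capability, so pdflush never tries
to write those pages out. A rough, untested sketch modeled on ramfs (the
my_ names are hypothetical; flag names are assumed from the 2.6.18
backing-dev API):

#include <linux/fs.h>
#include <linux/backing-dev.h>

static struct backing_dev_info my_backing_dev_info = {
	.ra_pages	= 0,	/* no readahead */
	.capabilities	= BDI_CAP_NO_ACCT_DIRTY | BDI_CAP_NO_WRITEBACK,
};

/* Called when a new inode is set up, e.g. from the fill_super path. */
static void my_setup_mapping(struct inode *inode)
{
	inode->i_mapping->backing_dev_info = &my_backing_dev_info;
	/* the address_space operations would also be assigned here */
}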

___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


query regarding kernel daemon

2011-04-04 Thread shubham sharma
I was trying some experiments with a kernel daemon. The experiment works
as follows:



A daemon sleeps in the background. A user can enter a string through the
proc interface. Whenever a string is entered, the daemon is woken. The
daemon keeps a copy of the last entered string in a variable, which is
initially NULL. When the daemon wakes, it checks whether the entered
string is the same as the previous one or a new one. If either the new or
the old string is NULL, or the entered string is the same as the old
string, the daemon goes back to sleep (with the help of the function
interruptible_sleep_on()).



The problem I am facing is that when I enter a string the second time, the
system stalls. I added some sleeps in the code and figured out that the
system stalls when the proc function wakes up the daemon.



I have attached the module code with the mail. Any suggestion about the
error would be a great help.
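
One general note: interruptible_sleep_on() is inherently racy, because a
wake-up that arrives between checking the condition and calling the
function is simply lost, and the caller may then sleep forever. The usual
replacement is wait_event_interruptible() on an explicit condition. A
minimal, untested sketch of that sleep/wake handshake (all demo_* names
are hypothetical, and this is not a diagnosis of the exact stall below):

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/wait.h>
#include <linux/sched.h>
#include <linux/err.h>

static DECLARE_WAIT_QUEUE_HEAD(demo_wq);
static int demo_new_string;		/* set by the proc write handler */
static struct task_struct *demo_task;

static int demo_thread_fn(void *unused)
{
	while (!kthread_should_stop()) {
		/* Sleep until the writer sets demo_new_string; the condition
		 * is re-checked after every wake-up, so a wake that races
		 * with this call is not lost. */
		wait_event_interruptible(demo_wq,
				demo_new_string || kthread_should_stop());
		if (kthread_should_stop())
			break;
		demo_new_string = 0;
		printk(KERN_INFO "demo: new string arrived\n");
	}
	return 0;
}

/* To be called from the proc write handler once the string is copied in. */
void demo_notify_new_string(void)
{
	demo_new_string = 1;
	wake_up_interruptible(&demo_wq);
}

static int __init demo_init(void)
{
	demo_task = kthread_run(demo_thread_fn, NULL, "demo_thread");
	return IS_ERR(demo_task) ? PTR_ERR(demo_task) : 0;
}

static void __exit demo_exit(void)
{
	kthread_stop(demo_task);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");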



Shubham
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/proc_fs.h>
#include <linux/stat.h>
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/wait.h>
#include <asm/uaccess.h>
#include <linux/delay.h>

typedef struct {
	char			*old_file_name;
	int			old_file_name_length;
	char			*new_file_name;
	int			new_file_name_length;
	pid_t			thread_pid;
	int			thread_present;
	wait_queue_head_t	thread_sv;	/* to make the thread sleep on a certain condition */
//	wait_queue_head_t	multi_thread_sv[1]; /* in case of multi-threading, the second thread will sleep until the first is awake */
		/* but will this be possible? a module can be loaded only once!!! */
} thread_struct;

#define prc_file_name "test_proc_file_name"

thread_struct *ts = NULL;

char *file_name = NULL;
int file_name_size = 0;

int my_thread_fun(void *arg)
{
	int ret;
	thread_struct *my_thread = (thread_struct *) arg;

	if (!try_module_get(THIS_MODULE)) {
		printk(KERN_INFO "%s:\tunable to get the module\n", __FUNCTION__);
		ret = -EINVAL;
		goto out;
	}
	daemonize("test_thread");
	allow_signal(SIGKILL);
	my_thread->thread_present = 1;
	printk(KERN_INFO "%s:\tthread id = %d, name of the old file = %s, name of the new file = %s\n",
		__FUNCTION__, my_thread->thread_pid, my_thread->new_file_name, my_thread->old_file_name);
	for (;;) {
		if (signal_pending(current)) {
			my_thread->thread_pid = 0;
			printk(KERN_INFO "%s:\tgot a signal to commit suicide\n", __FUNCTION__);
			ret = -EINTR;
			goto out;
		}

		if (my_thread->new_file_name == NULL || my_thread->old_file_name == NULL) {
			printk(KERN_INFO "%s:\ttime to sleep as nothing to compare\n", __FUNCTION__);
			interruptible_sleep_on(&my_thread->thread_sv);
		}

		printk(KERN_INFO "%s:\tthread id = %d, name of the old file = %s, name of the new file = %s\n",
			__FUNCTION__, my_thread->thread_pid, my_thread->new_file_name, my_thread->old_file_name);

		/* adding this continue because the thread may
		 * sleep on the above condition and when it wakes
		 * it needs to check again whether the old and new file
		 * names are NULL or not
		 */
		continue;

		if (my_thread->new_file_name != NULL) {
			printk(KERN_INFO "%s:\tname of the new file is %s\n", __FUNCTION__, my_thread->new_file_name);
		}

		if (my_thread->old_file_name != NULL) {
			printk(KERN_INFO "%s:\tname of the old file is %s\n", __FUNCTION__, my_thread->old_file_name);
		}

		printk(KERN_INFO "%s:\told file (%s), new file (%s), going for comparison\n",
			__FUNCTION__, my_thread->old_file_name, my_thread->new_file_name);
		ssleep(2);
//		schedule_timeout(HZ);

		if (!memcmp(my_thread->old_file_name, my_thread->new_file_name, my_thread->new_file_name_length)) {
//			(my_thread->old_file_name_length < my_thread->new_file_name_length ?
//			 my_thread->old_file_name_length : my_thread->new_file_name_length))) {
			ssleep(2);
			printk(KERN_INFO "%s:\tname of the new file (%s) is the same as the old file (%s), sleeping\n",
				__FUNCTION__, my_thread->new_file_name, my_thread->old_file_name);
			interruptible_sleep_on(&my_thread->thread_sv);
		}
		ssleep(2);
		printk(KERN_INFO "%s:\tnew file (%s)\n", __FUNCTION__, my_thread->new_file_name);
//		schedule_timeout(HZ);
		my_thread->old_file_name = my_thread->new_file_name;
		my_thread->old_file_name_length = my_thread->new_file_name_length;
		ssleep(2);
		printk(KERN_INFO "%s:\trounding off\n", __FUNCTION__);
	}
out:
	my_thread->thread_present = 0;
	module_put(THIS_MODULE);
	return ret;
}

int read_proc_file(
	char *buf,
	char **buf_loc,
	off_t offset,
	int buf_len,
	int *eof,
	void *data)
{
	int ret;

	printk(KERN_INFO "%s: reading from the proc file %s\n", __FUNCTION__, prc_file_name);
	if (offset > 0) {
		return 0;
	} else {
		memcpy(buf, file_name, file_name_size);
		ret = file_name_size;
	}
	return ret;
}

int write_proc_file(
	struct file *filp,
	const char __user *buf,
	unsigned long len,
	void *data)
{
	printk(KERN_INFO "%s:\tlength of file name = %ld\n", __FUNCTION__, len);
	printk(KERN_INFO "%s:\twriting into the proc file\n", __FUNCTION__);
	file_name_size = len;

	file_name = (char *)