Re: FWD: 3 NIC cards problem

2001-03-28 Thread Lee Chin

Hi,
Thanks!!! That worked.  Now I have one more problem... I am using
non-blocking sockets (set via fcntl).

And I am using select (with a 20 second timeout) to see when data is
available on the socket.  I have 600 clients hitting my web server.

Quite frequently, what happens is that some of the sockets that I am waiting
on in the select (read or write) just don't have any activity on them for
more than 20 seconds or so; it's as if the client never sent any data over
or is still waiting to connect.

What could I be doing wrong (what are the common mistakes)?

Thanks
Lee
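Since the question asks about common mistakes, here is a minimal sketch of the two pieces involved (the helper names make_nonblocking and wait_readable are invented for illustration). Two frequent bugs it avoids: clobbering the existing fcntl flags when setting O_NONBLOCK, and reusing one struct timeval across calls even though Linux's select() modifies it in place:

```c
#include <assert.h>
#include <fcntl.h>
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* Set O_NONBLOCK while preserving the flags already on the fd
 * (passing O_NONBLOCK alone to F_SETFL clobbers the others). */
int make_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Wait up to 'secs' seconds for fd to become readable.
 * Returns 1 if readable, 0 on timeout, -1 on error.  The timeval
 * is rebuilt on every call because Linux select() decrements it
 * in place; reusing a stale one silently shortens the timeout. */
int wait_readable(int fd, int secs)
{
    fd_set rfds;
    struct timeval tv;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    tv.tv_sec = secs;
    tv.tv_usec = 0;
    return select(fd + 1, &rfds, NULL, NULL, &tv);
}
```

With 600 concurrent clients, a too-small listen() backlog is another common cause of connections that appear to hang before ever sending data.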

--Original Message--
From: William T Wilson <[EMAIL PROTECTED]>
To: Lee Chin <[EMAIL PROTECTED]>
Sent: March 28, 2001 9:58:35 PM GMT
Subject: Re: FWD: 3 NIC cards problem


On Wed, 28 Mar 2001, Lee Chin wrote:

> I have a program listening for socket connections on 192.168.1.1, port 80.
>
> What I want to do is have incoming connection requests for IP 192.168.2.1
> and 192.168.3.1 on port 80 also be handled by my server running on
> 192.168.1.1:80
>
> How do I do this in Linux?

If you use INADDR_ANY in your sockaddr struct that you pass to bind,
instead of your IP address, it should listen on all network interfaces.
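Concretely, the suggestion amounts to something like the following sketch (listen_any is an invented helper name; error handling kept minimal):

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a TCP socket that listens on the given port on ALL local
 * interfaces at once, by binding to INADDR_ANY instead of one
 * specific address.  Returns the listening fd, or -1 on error. */
int listen_any(unsigned short port)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY); /* the key line */
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 128) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

Bound to 0.0.0.0 this way, a single listening socket accepts connections arriving on any of the three interfaces' addresses.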


__
FREE Personalized Email at Mail.com
Sign up at http://www.mail.com/?sr=signup
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



FWD: 3 NIC cards problem

2001-03-28 Thread Lee Chin

I am posting this mail here because I have tried posting on many news
groups, but no one seems to know the answer.  Also, I did read the
documentation but could not figure out how to do this with the Linux 2.4
kernel.

--Original Message--
From: Lee Chin <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: 3 NIC cards problem


Hi,
I have a system with 3 NIC cards, each on a separate subnet, with IP
addresses 192.168.1.1, 192.168.2.1 and 192.168.3.1.

I have a program listening for socket connections on 192.168.1.1, port 80.

What I want to do is have incoming connection requests for IP 192.168.2.1
and 192.168.3.1 on port 80 also be handled by my server running on
192.168.1.1:80

How do I do this in Linux?

Thanks
Lee





socket close problems

2001-03-20 Thread Lee Chin

Hi,
On linux I have the following problem:
I accept connections from client sockets, read the request and send data
back and close the socket.

After a while, I run out of file descriptors... and when I run netstat, all
my connections to the clients are in state CLOSING, even though I know the
client has received all the data and disconnected too.

What could I be doing wrong?  The socket is obtained via the accept
system call.  I set the socket to non-blocking via fcntl and set
SO_REUSEADDR via setsockopt... and after using the socket for read and
write, I simply call shutdown followed by a close.

Thanks
Lee
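One sequence that avoids descriptors lingering in half-closed states is to shut down only the sending side first, then drain whatever the peer still has in flight before calling close(). A hedged sketch (graceful_close is an invented name, and the drain loop assumes the peer closes reasonably promptly):

```c
#include <assert.h>
#include <sys/socket.h>
#include <unistd.h>

/* Graceful close: send our FIN with shutdown(), then drain until
 * the peer closes its side, and only then release the descriptor.
 * Assumes the peer does eventually close; on a blocking socket a
 * hostile peer could stall the drain loop. */
int graceful_close(int fd)
{
    char buf[4096];
    ssize_t n;

    if (shutdown(fd, SHUT_WR) < 0)
        return close(fd);   /* connection already dead; just free fd */

    while ((n = read(fd, buf, sizeof(buf))) > 0)
        ;                   /* discard trailing data until EOF */
    return close(fd);
}
```

Note also that SO_REUSEADDR is only meaningful on the listening socket; setting it on sockets returned by accept() has no practical effect on a descriptor leak.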





Re: Linux's implementation of poll() not scalable?

2000-10-24 Thread Lee Chin

There is only one thing I don't understand about this... why can't we
re-implement the poll() implementation of Linux instead of introducing
another system call?

If I understood Linus correctly, what he is saying is that the bind_event
system call is needed to give the kernel a hint that the application is
interested in a certain event associated with a file descriptor.

If the kernel kept such an event queue per process anyway (as soon as the
process opened the file/socket)... then the poll implementation would be
exactly like the get_events system call.  What is wrong with this?

Thanks
Lee

On Mon, 23 Oct 2000, Linus Torvalds wrote:
>
> > What is your favourite interface then ?
>
> I suspect a good interface that can easily be done efficiently would
> basically be something where the user _does_ do the equivalent of a
> read-only mmap() of poll entries - and explicit and controlled
> "add_entry()" and "remove_entry()" controls, so that the kernel can
> maintain the cache without playing tricks.

Actually, forget the mmap, it's not needed.

Here's a suggested "good" interface that would certainly be easy to
implement, and very easy to use, with none of the scalability issues that
many interfaces have.

First, let's see what is so nice about "select()" and "poll()". They do
have one _huge_ advantage, which is why you want to fall back on poll()
once the RT signal interface stops working. What is that?

Basically, RT signals or any kind of event queue has a major fundamental
queuing theory problem: if you have events happening really quickly, the
events pile up, and queuing theory tells you that as you start having
queueing problems, your latency increases, which in turn tends to mean
that later events are even more likely to queue up, and you end up in a
nasty meltdown scenario where your queues get longer and longer.

This is why RT signals suck so badly as a generic interface - clearly we
cannot keep sending RT signals forever, because we'd run out of memory
just keeping the signal queue information around.

Neither poll() nor select() have this problem: they don't get more
expensive as you have more and more events - their expense is the number
of file descriptors, not the number of events per se. In fact, both poll()
and select() tend to perform _better_ when you have pending events, as
they are both amenable to optimizations when there is no need for waiting,
and scanning the arrays can use early-out semantics.

So sticky arrays of events are good, while queues are bad. Let's take that
as one of the fundamentals.

So why do people still like RT signals? They do have one advantage, which
is that you do NOT have that silly array traversal when there is nothing
to do. Basically, the RT signals kind of approach is really good for the
cases where select() and poll() suck: no need to traverse mostly empty and
non-changing arrays all the time.

It boils down to one very simple rule: dense arrays of sticky status
information are good. So let's design a good interface for a dense array.

Basically, the perfect interface for events would be

struct event {
        unsigned long id;       /* file descriptor ID the event is on */
        unsigned long event;    /* bitmask of active events */
};

int get_events(struct event * event_array, int maxnr,
               struct timeval *tmout);

where you say "I want an array of pending events, and I have an array you
can fill with up to 'maxnr' events - and if you have no events for me,
please sleep until you get one, or until 'tmout'".

The above looks like a _really_ simple interface to me. Much simpler than
either select() or poll(). Agreed?

Now, we still need to inform the kernel of what kind of events we want, ie
the "binding" of events. The most straightforward way to do that is to
just do a simple "bind_event()" system call:

int bind_event(int fd, struct event *event);

which basically says: I'm interested in the events in "event" on the file
descriptor "fd". The way to stop being interested in events is to just set
the event bitmask to zero.

Now, the perfect interface would be the above. Nothing more. Nothing
fancy, nothing complicated. Only really simple stuff. Remember the old
rule: "keep it simple, stupid".
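This interface never went into the kernel in the form sketched here (the idea resurfaced later as epoll), but its intended usage can be illustrated by emulating both calls in user space on top of poll(). Everything below is illustrative only: the timeout is simplified to milliseconds and the binding table is a fixed-size array:

```c
#include <assert.h>
#include <poll.h>
#include <stddef.h>
#include <unistd.h>

struct event {
    unsigned long id;     /* file descriptor the event is on */
    unsigned long event;  /* bitmask of active events (POLL* bits) */
};

#define MAX_BOUND 64
static struct pollfd bound[MAX_BOUND];
static int nbound;

/* Register interest in 'ev->event' on fd; a zero bitmask means
 * "stop being interested", as in the proposal above. */
int bind_event(int fd, struct event *ev)
{
    int i;
    for (i = 0; i < nbound; i++) {
        if (bound[i].fd == fd) {
            bound[i].events = (short)ev->event;
            return 0;
        }
    }
    if (nbound >= MAX_BOUND)
        return -1;
    bound[nbound].fd = fd;
    bound[nbound].events = (short)ev->event;
    nbound++;
    return 0;
}

/* Fill event_array with up to maxnr pending events, sleeping up to
 * timeout_ms if none are ready.  Returns the number reported. */
int get_events(struct event *event_array, int maxnr, int timeout_ms)
{
    int i, n, out = 0;

    n = poll(bound, nbound, timeout_ms);
    if (n <= 0)
        return n;
    for (i = 0; i < nbound && out < maxnr; i++) {
        if (bound[i].revents) {
            event_array[out].id = (unsigned long)bound[i].fd;
            event_array[out].event = (unsigned long)bound[i].revents;
            out++;
        }
    }
    return out;
}
```

An event loop then just calls get_events() repeatedly and dispatches on each returned id, never rescanning descriptors that had no activity.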

The really nice part of the above is that it's trivial to implement. It's
about 50 lines of code, plus some simple logic to various drivers etc to
actually inform about the events. The way to do this simply is to limit it
in very clear ways, the most notable one being simply that there is only
one event queue per process (or rather, per "struct files_struct" - so
threads would automatically share the event queue). This keeps the
implementation simple, but it's also what keeps the interfaces simple: no
queue ID's to pass around etc.

Implementation is straightforward: the event queue basically consists of

- a queue head in "struct files_struct", initially empty.

- doing a "bind_event()" basically adds a fasync entry to the file
structure, but rather than cause a signal, it just looks a

get_empty_filp

2000-10-02 Thread Lee Chin

Hello All,
I am seeing a bug in get_empty_filp (fs/file_table.c) where
files_stat.nr_free_files is out of sync with respect to the actual number of
elements in free_list.

More precisely, for some reason, free_list became empty (free_list.next and
free_list.prev pointed back to free_list) but files_stat.nr_free_files was
180.  So the code list_entry(free_list.next...) returned a bad pointer (in
this case a pointer to free_list) and the memset in the get_empty_filp
overwrote the files_lock.

As far as I can see, one way this can happen is if in _fput, the list_del
and list_add routines took the *file off of the free_list and put it back on
the free_list, causing the statement files_stat.nr_free_files++ to be out of
sync.

My question is... can anyone call _fput where the *file parameter is already
on the free_list?

Thanks
Lee
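The suspected sequence can be modeled in miniature outside the kernel. The sketch below uses a toy copy of the kernel's circular-list primitives (this is not the real fs/file_table.c code, and fput_tail is an invented stand-in for the tail of _fput) to show how a second _fput on an entry already on free_list leaves the list unchanged while the counter keeps growing:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal circular doubly-linked list in the style of list.h. */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *n, struct list_head *h)
{
    n->next = h->next;
    n->prev = h;
    h->next->prev = n;
    h->next = n;
}

static void list_del(struct list_head *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
}

static int list_len(struct list_head *h)
{
    int len = 0;
    struct list_head *p;
    for (p = h->next; p != h; p = p->next)
        len++;
    return len;
}

/* Toy model of _fput's tail: move the entry onto free_list and
 * bump the free-file counter.  Called twice on the same entry,
 * the del/add pair is a no-op on the list length... */
static int nr_free_files;
static void fput_tail(struct list_head *filp, struct list_head *free_list)
{
    list_del(filp);
    list_add(filp, free_list);
    nr_free_files++;   /* ...but the counter still grows */
}
```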





Re: maximum number of sockets

2000-09-20 Thread Lee Chin

Hello,
Yes, I know this can be done in older kernels; however, in 2.4.0-test8 I DO
NOT see a /proc/sys/fs/inode-max!

I also do not see any changes listed in the Documentation.

Thanks,
Lee

--Original Message--
From: Dan Kegel <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED], [EMAIL PROTECTED]
Sent: September 20, 2000 7:00:22 AM GMT
Subject: Re: maximum number of sockets


Lee Chin ([EMAIL PROTECTED]) wrote:
> How do I increase the maximum number of socket connections I can have open
> in the 2.4 series kernel?

See http://www.kegel.com/c10k.html#limits.filehandles

- Dan
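As far as I can tell, the old inode-max knob was dropped in 2.4 (the inode cache is pruned dynamically there), so the two limits that usually matter are the system-wide /proc/sys/fs/file-max and the per-process RLIMIT_NOFILE, the one whose exhaustion produces EMFILE ("Too many open files"). A sketch of raising the per-process limit (raise_nofile_limit is an invented helper name):

```c
#include <assert.h>
#include <sys/resource.h>

/* Raise this process's open-descriptor limit (the limit behind
 * EMFILE).  The system-wide ceiling, /proc/sys/fs/file-max, is
 * separate and not touched here. */
int raise_nofile_limit(rlim_t want)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) < 0)
        return -1;
    if (want > rl.rlim_max)
        want = rl.rlim_max;   /* unprivileged: capped at hard limit */
    rl.rlim_cur = want;
    return setrlimit(RLIMIT_NOFILE, &rl);
}
```

Raising the hard limit itself, or file-max, requires root.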





inode_max in 2.4

2000-09-19 Thread Lee Chin

Hello,
I searched the Documentation and couldn't find what /proc/sys/fs/inode_max
has been changed to... because after 800 simultaneous open socket
connections I get a "Too many open files" (EMFILE) error.

Thanks,
lee





maximum number of sockets

2000-09-19 Thread Lee Chin

Hello,
How do I increase the maximum number of socket connections I can have open
in the 2.4 series kernel?

Please let me know which list to post these types of questions to.

Thanks,
Lee





[BUG] network problems in 2.4 series

2000-09-19 Thread Lee Chin

Hello,
I have a program that makes HTTP requests in a loop to a box running Linux.
It goes through another Linux box, which is using proxy ARP and is connected
to the client and the web server using a crossover cable:
[CLIENT][PROXY][WEBSERVER]

When the proxy machine uses 2.2 series kernel, I see a certain bit rate,
call it X.  X stays constant always, no matter how many times I run my
workload.

When I upgraded to 2.4-test8, I see the following behavior:
On the initial run (fresh boot) I see a speed greater than X (about 4X);
on subsequent runs the bit rate keeps dropping and falls to 1/3 X!  The
only way to get back the first bit rate is to reboot!

Could someone please tell me whether this issue will be resolved?

Thanks
Lee





socket connect problems in latest kernel

2000-09-17 Thread Lee Chin

Hello,

I have a heavy workload to benchmark a proxy server and it generates over
1000 simultaneous sessions requesting files of different sizes.
I have two separate problems that I would like clarified:

1. On the newer versions of the kernel (2.4.x) I see far too many connect
requests timing out when the load (simultaneous connections and time) is
increased... even when other connections finish, the ones that were hanging
on the connect still continue to hang.  My questions are:
a. Why does this happen more in the 2.4 series?
b. Should application programs really take care of connect timeouts
themselves?

2. I am using IP masquerading and proxy ARP.  When going through the proxy
ARP machine, once in a while a connect gets completely lost and the proxy
machine never sees the request at all!  This does not happen in the 2.2
kernel series, no matter how bad the load is.

Thanks
Lee
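On question 1b: applications that want their own deterministic connect timeout conventionally implement it with a non-blocking connect. A sketch of the standard pattern (portable user-space code, not specific to 2.4; connect_timeout is an invented name):

```c
#include <arpa/inet.h>
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* Connect with an application-level timeout: start a non-blocking
 * connect, wait for writability with select(), then read SO_ERROR
 * for the final status.  Returns 0 on success, -1 on error or
 * timeout.  The fd is left in non-blocking mode. */
int connect_timeout(int fd, const struct sockaddr *addr,
                    socklen_t len, int secs)
{
    int err = 0;
    socklen_t elen = sizeof(err);
    fd_set wfds;
    struct timeval tv;
    int flags = fcntl(fd, F_GETFL, 0);

    fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    if (connect(fd, addr, len) == 0)
        return 0;                     /* connected immediately */
    if (errno != EINPROGRESS)
        return -1;

    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    tv.tv_sec = secs;
    tv.tv_usec = 0;
    if (select(fd + 1, NULL, &wfds, NULL, &tv) != 1)
        return -1;                    /* timed out or select error */

    /* Writable: fetch the final status of the connect. */
    if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &elen) < 0 || err)
        return -1;
    return 0;
}
```

On timeout the caller should close the fd rather than retry the same connect on it.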






Linux connect problems

2000-09-13 Thread Lee Chin

Hello,

I have a heavy workload to benchmark a proxy server and it generates over
1000 simultaneous sessions requesting files of different sizes.
I have two separate problems that I would like clarified:

1. On the newer versions of the kernel (2.4.x) I see far too many connect
requests timing out when the load (simultaneous connections and time) is
increased... even when other connections finish, the ones that were hanging
on the connect still continue to hang.  My questions are:
a. Why does this happen more in the 2.4 series?
b. Should application programs really take care of connect timeouts
themselves?

2. I am using IP masquerading and proxy ARP.  When going through the proxy
ARP machine, once in a while a connect gets completely lost and the proxy
machine never sees the request at all!  This does not happen in the 2.2
kernel series, no matter how bad the load is.

Thanks
Lee









write_space in kernel

2000-09-11 Thread Lee Chin

Hello,
I have a callback registered on write_space in the kernel, so when I do an
asynchronous sock_sendmsg in the kernel, I get notified.  However, I want to
know how much data was sent on that socket, so I can free the socket after
all data has been sent.  How do I check for this condition?

Thanks
Lee




