[EMAIL PROTECTED] said:
> > But in the case of an application which fits in main memory, and
> > has been running for a while (so all pages are present and
> > dirty), all you'd really have to do is verify the page tables are
> > in the proper state and skip the TLB flush,
> a. When a user app wants to receive some data, it allocates
> memory (using malloc) and waits for the hw to do a zero-copy read. The kernel
> does not allocate physical page frames for the entire memory region
> allocated. We need to lock the memory (and locking is expensive due to
>
> But in the case of an application which fits in main memory, and
> has been running for a while (so all pages are present and
> dirty), all you'd really have to do is verify the page tables are
> in the proper state and skip the TLB flush, right?
We reall
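The zero-copy receive point above turns on the buffer's pages being resident (and dirty) before the hardware touches them, and on the locking cost being paid once rather than per transfer. A minimal user-space sketch of that idea, assuming a plain malloc()ed buffer; mlock() stands in here for whatever registration/pinning call the real hardware interface would require, and the function name and sizes are illustrative, not from the thread:

#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Fault in and pin a malloc()ed receive buffer so every page is
 * present (and dirty) before the NIC starts DMAing into it.  The
 * expensive locking happens once, up front, not per transfer. */
static void *alloc_pinned_recv_buf(size_t len)
{
        void *buf = malloc(len);
        if (!buf)
                return NULL;

        memset(buf, 0, len);            /* touch every page */

        if (mlock(buf, len) != 0) {     /* pin; may hit RLIMIT_MEMLOCK */
                free(buf);
                return NULL;
        }
        return buf;
}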
[EMAIL PROTECTED] said:
> > A couple of concerns I have:
> > * How to pin or pagelock the application buffer without
> > making a kernel transition.
>
> You need to pin them in advance. And pinning pages is _expensive_, so you don't
> want to keep pinning/unpinning pages.
I can't convince myself w
> A couple of concerns I have:
> * How to pin or pagelock the application buffer without
> making a kernel transition.
You need to pin them in advance. And pinning pages is _expensive_, so you don't
want to keep pinning/unpinning pages.
> * Assuming the memory can be locked down, how can a list
> That's exactly my point: we need to define a new protocol family to
> support it. This means that all applications using PF_INET need to be
> changed and recompiled. My basic argument goes like this: if hardware can
Thanks to the magic of shared libraries and LD_PRELOAD, a library hook can
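To make the LD_PRELOAD remark concrete, here is a hedged sketch of such a hook: a shared library that intercepts socket(2) and steers ordinary PF_INET stream sockets onto a direct-sockets family, so existing binaries need no change or recompile. PF_INFINIBAND and its numeric value are hypothetical (the family is only proposed in this thread); the dlsym/RTLD_NEXT interposition itself is standard glibc machinery.

#define _GNU_SOURCE
#include <dlfcn.h>
#include <sys/socket.h>

#define PF_INFINIBAND 27        /* hypothetical value, illustration only */

int socket(int domain, int type, int protocol)
{
        static int (*real_socket)(int, int, int);

        if (!real_socket)
                real_socket = (int (*)(int, int, int))dlsym(RTLD_NEXT, "socket");

        /* Steer ordinary TCP sockets onto the direct-sockets family;
         * everything else goes to the normal stack untouched. */
        if (domain == PF_INET && type == SOCK_STREAM) {
                int fd = real_socket(PF_INFINIBAND, SOCK_STREAM, 0);
                if (fd >= 0)
                        return fd;
                /* fabric transport unavailable: fall through */
        }
        return real_socket(domain, type, protocol);
}

Built with something like "gcc -shared -fPIC -o libdsock.so dsock.c -ldl" and run with LD_PRELOAD=./libdsock.so, this sits between an unmodified PF_INET application and whichever family actually carries the traffic.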
>> technology is Infiniband. In Infiniband, the hardware supports IPv6. For
>> this type of device there is no need for software TCP/IP. But for
>> networking applications, which mostly use sockets, there is a performance
>> penalty with using software TCP/IP over this hardware.
> For the case where the routing will be external. That's conveniently something
> you can deduce in advance. In theory nothing stops you implementing this.
> Conventionally you would do that with BSD sockets by implementing a new
> socket family PF_INFINIBAND. You might then choose to make the s
> different topology subnets. Fabrics like Infiniband provide security in
> hardware, so there is no need to worry about it. The simple point is this:
> if hw supports TCP/IP, then why do we need a software TCP/IP over it?
For the case where the routing will be external. That's conveniently something
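A small sketch of the "deduce it in advance" step: before picking a transport, check whether the destination lies on the local fabric subnet, i.e. whether any routing would be external. The subnet, mask, and function name below are made-up illustration values, not anything from the thread.

#include <stdint.h>
#include <arpa/inet.h>
#include <netinet/in.h>

/* Hypothetical fabric subnet 10.1.0.0/16: destinations inside it can
 * take the direct path; anything else needs external routing and so
 * goes via the ordinary software TCP/IP stack. */
static const uint32_t FABRIC_NET  = 0x0a010000;
static const uint32_t FABRIC_MASK = 0xffff0000;

static int reachable_over_fabric(const struct sockaddr_in *dst)
{
        uint32_t addr = ntohl(dst->sin_addr.s_addr);
        return (addr & FABRIC_MASK) == FABRIC_NET;
}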
>
>
> > Doesn't this bypass all of the network security controls? Granted - it is
> > completely reasonable in a dedicated environment, but I would think the
> > security loss would prevent it from being used for most usage.
> technology is Infiniband. In Infiniband, the hardware supports IPv6. For
> this type of device there is no need for software TCP/IP. But for
> networking applications, which mostly use sockets, there is a performance
> penalty with using software TCP/IP over this hardware.
IPv6 is only the
> Doesn't this bypass all of the network security controls? Granted - it is
> completely reasonable in a dedicated environment, but I would think the
> security loss would prevent it from being used for most usage.
Direct Sockets makes sense only in clustering (se
> > Define 'direct sockets' firstly.
> Direct Sockets is the ability by which the application (using sockets)
> can use the hardware's features to provide connection, flow control,
> etc., instead of the TCP and IP software module. A typical hardware
> technology is Infiniband. In Infiniband,
> Define 'direct sockets' firstly.
Direct Sockets is the ability by which the application (using sockets)
can use the hardware's features to provide connection, flow control,
etc., instead of the TCP and IP software module. A typical hardware
technology is Infiniband. In Infiniband,
> With the advent of VI and Infiniband, there is a growing need to support
> Sockets over such new technologies. I studied a recent performance
> analysis of sockets vs direct sockets and found that there is a 250%
> performance hike and a 30% decrease in latency. Also CPU bandwidth is
>