Hi Marc, Orion.
These changes are meant to help debug file descriptor leaks in
Passenger; we have run into several such leaks recently. The 'lsof'
tool lets us see that a file descriptor has leaked, but the
On Aug 29, 2014 1:30 PM, "Marc Lehmann" wrote:
>
> On Fri, Aug 29, 2014 at 12:07:57AM +0200, Hongli Lai wrote:
> > With "lost" I do not mean that libeio closes it. I meant that the file
> > descriptor still exists, but nobody knows what its number is.
On Thu, Aug 28, 2014 at 5:41 AM, Marc Lehmann wrote:
>> Here's how libeio loses the fd:
>
> libeio can't lose the fd, as libeio doesn't own it. the caller creates the
> fd and needs to take care of it. if you want it closed, you can close it
> in the destroy callback.
>
>> 7. ETP_FINISH sees that
On Tue, Aug 19, 2014 at 1:06 AM, Marc Lehmann wrote:
> On Mon, Aug 18, 2014 at 01:28:55PM +0200, Hongli Lai wrote:
>> 1. If I cancel eio_open(), and the system call has already finished,
>> then if the callback isn't called, the file descriptor becomes lost
>>
On Mon, Aug 18, 2014 at 8:33 PM, james wrote:
> Why not call it yourself, when you cancel?
Mainly because of reason 1. If the open() system call has already
finished, then libeio is the only thing that knows what the file
descriptor is. The calling code has no way of knowing what the file
descriptor is.
On Mon, Aug 18, 2014 at 6:34 AM, Marc Lehmann wrote:
> I think it is the eio documentation which is in error here (after
> all, it's still unfinished and uncorrected), and plan to fix this by
> correcting eio.pod, unless you can present an overriding reason to change
> the behaviour - I can't see
According to the libeio documentation, the callback will still be
called even if the request has been cancelled. However, the libeio
code checks for the cancel flag and does not call the callback if it
is set, even when the system call has already finished.
The attached patch fixes the problem.
According to the documentation:
"at the time of this writing, [kqueue] was broken on all BSDs except
NetBSD (usually [kqueue] doesn't work reliably with anything but
sockets and pipes, except on Darwin, where of course it's completely
useless)"
But I have no idea whether this information is up to date.
On Mon, Jan 2, 2012 at 11:49 PM, Colin McCabe wrote:
> The problem is that there's no way for the programmer to distinguish
> "the data that really needs to be shared" from the data that shouldn't
> be shared between threads. Even in C/C++, all you can do is insert
> padding and hope for the best
On Mon, Jan 2, 2012 at 10:29 AM, Yaroslav wrote:
> (2.1) At what moments exactly do these synchronizations occur? Is it on every
> assembler instruction, or on every write to memory (i.e. on most variable
> assignments, all memcpy's, etc.), or is it only happening when two threads
> simultaneously work
Thanks for the reply Marc. :)
I think I get the gist of what you're saying. You're advocating a few
processes (up to the number of CPU cores) with user-space threads to
avoid kernel context switch overhead. I already agree with that.
The benchmark tool in the Samba presentation also surprised me.
On Wed, Dec 28, 2011 at 6:08 PM, Chris Brody wrote:
> Hongli, maybe this answer can help you. Yes, libeio is using globals, but
> libeio is USING LOCKS (mutexes) and condition variables so AFAIK it would
> be safe to use libeio from multiple event-loop threads.
>
> BTW libeio uses an interesting strategy to
On Thu, Dec 22, 2011 at 8:05 AM, Hongli Lai wrote:
> According to that same Wikipedia article, some CPUs have multiple
> register files in order to reduce thread switching time, i.e. what you
> describe as extra support for multiple contexts. According to the
> article, that idea came
On Thu, Dec 22, 2011 at 3:54 PM, Brandon Black wrote:
> Right, so either way an argument based on 2 threads per core is irrelevant,
> which is the argument you made in point (2) earlier. It doesn't make sense
> to argue about the benefits of threads under a layout that's known to be
> suboptimal i
faster thanks to MMU stuff and I'm asking for clarification.
Sent from my Android phone.
On 22 Dec 2011 14:44, "Brandon Black" wrote:
>
> On Thu, Dec 22, 2011 at 1:05 AM, Hongli Lai wrote:
>
>> 2. Suppose the system has two cores and N = 4, so two proce
On Thu, Dec 22, 2011 at 3:02 AM, Marc Lehmann wrote:
> With threads, you can avoid swapping some registers, most notably the MMU
> registers, which are often very costly to swap, to the extent that a number
> of cpus even have extra support for multiple "contexts".
>
>> Are you talking about hardw
On Wed, Dec 21, 2011 at 1:46 AM, Marc Lehmann wrote:
> Well, threads were originally invented because single cpus only had a single
> set of registers, and swapping these can be costly (especially with vm
> state).
I agree with your assertion that single CPUs had a single set of
registers and tha
On Wed, Dec 21, 2011 at 1:12 AM, Marc Lehmann wrote:
> libeio actually makes no assumptions about the existence of an event loop,
> or there being only one.
> ...
> well, I pointed out a way to you how to work with multiple event loops, so
> I am not sure why you write that: it is not true.
> ...
On Tue, Dec 20, 2011 at 5:06 PM, Marc Lehmann wrote:
> Threads were meant as an optimisation for single cpu systems though, and
> processes are meant for multiple cpus (or cores), and use the available
> hardware more efficiently.
I would like to know more about this claim. It's not that I don't
On Tue, Dec 20, 2011 at 4:17 PM, Marc Lehmann wrote:
> global variables are entirely fine with threads (libeio itself uses
> threads).
I know, but that's not what I mean. I'm talking about reentrancy.
Right now the libeio API assumes that there is one event loop. The
want_poll callback assumes th
On Tue, Dec 20, 2011 at 3:04 PM, Paddy Byers wrote:
> This has been asked for before and rejected.
>
> This is what I'm doing, which works well:
>
> http://lists.schmorp.de/pipermail/libev/2011q4/001584.html
(Replying to libev mailing list so that Marc can see my reasons)
I see. My case is a lit
I'm writing a multithreaded evented server in which I have N threads
(N=number of CPU cores) and one libev event loop per thread. I want to
use libeio but it looks like libeio depends on global variables so
this isn't going to work. I'd like to request the ability to use
libeio with multiple event loops.
On Fri, Dec 9, 2011 at 12:48 PM, Marc Lehmann wrote:
> none of which seem to influence correctness in a bad way, unlike your
> proposed changes.
>
> you see, we have basically these choices:
>
> a) possibly break code in unexpected ways on some far away box, but have
> fewer warnings on an obsol
Here's a new patch that should be less invasive than the last one. The new
approach is as follows:
- Introduced an ECB_REAL_GCC macro which tells us whether the compiler
is real GCC or another implementation merely claiming to be GCC.
llvm-gcc is not considered real.
- Introduced an ECB_APPLE_LLVM_GC
On Thu, Dec 8, 2011 at 11:26 PM, Brandon Black wrote:
> I doubt they're actually fully implemented in OSX's llvm-gcc 4.2.1. Do you
> have any supporting documentation or research on that?
All I know is that they compile and that they seem to have an effect. Are
you saying that the compiler might ha
On Thu, Dec 8, 2011 at 9:54 PM, Marc Lehmann wrote:
> hmm, I am curious, what kind of warnings are these?
ecb.h:126: warning: ‘ecb_mf_lock’ defined but not used
ecb.h:252: warning: ‘ecb_ctz64’ defined but not used
ecb.h:285: warning: ‘ecb_ld64’ defined but not used
ecb.h:299: warning: ‘ecb_popcou
ecb.h currently generates tons of warnings on llvm-gcc 4.2.1 (OS X
10.6 with Xcode 4) because the ECB_GCC_VERSION macro blacklists llvm.
This causes ecb.h to think that __attribute__ and other gcc extension
keywords are not supported when they in fact are. The attached patch
fixes this problem and
On Thu, Aug 18, 2011 at 9:55 AM, Abdul Aziz wrote:
> I'm facing more issues installing passenger with nginx, related to libev it
> seems, as passenger tries to build libev on my system. When I do the
> command: passenger-install-nginx-module.
>
> ..
> config.status: executing depfiles comm
On Sun, Jun 5, 2011 at 3:40 PM, Aaron Boxer wrote:
> Hello!
>
> Is it possible to use libev to have non-blocking access
> to a blocking system call? In my server, I want to call posix_fadvise()
> on a file, and want to receive a callback when the call returns.
You need to run your system call in a separate thread and notify the
event loop when it completes.
On Wed, Dec 29, 2010 at 3:21 AM, Charles Kerr wrote:
> Oh come on... "A is faster than B" is *clearly* not the same as "No
> practical difference between A and B." Saying they're equivalent just
> because they're both true doesn't even pass the laugh test. :)
I say there clearly is a difference.
On Tue, Dec 28, 2010 at 10:23 PM, Charles Kerr wrote:
> In each statement: faster, faster, faster, faster. In one case, even
> "much faster." You never said anything like "faster, but not enough
> to make any practical difference." :)
I don't think he needs to. He just claims it's faster; whethe
Use ev::dynamic_loop or ev::default_loop.
On Thu, Dec 9, 2010 at 10:28 AM, Praveen Baratam wrote:
> Hello All,
> I am an absolute newcomer to libev and trying to use its C++ binding inside
> my code.
> There are very few examples related to the C++ binding.
> All I can find was -
>
> class myclass
>
On Fri, Aug 13, 2010 at 4:17 PM, Alexey Khmara wrote:
> Hello.
> I'm trying to use libev in WebSockets server, and was very surprised when
> found that for every fd I need to create distinct watcher. Most of my fds
> share callback and event mask, and, I think, it's very common scenario -
> when y
Well, I see that you're using the same event loop in all worker
threads. That's obviously not going to work.
What you could do for example is to accept() connections from the main
thread, and for each accepted connection spawn a worker thread that
handles that connection only. Each worker thread must
According to the libev documentation, epoll has a higher overhead than
select and poll. Zed Shaw recently confirmed these findings for epoll
and poll. He states that poll tends to be faster than epoll as long as
the active/total fd ratio is higher than 0.6.
http://sheddingbikes.com/posts/1280829388