On Thu, Mar 29, 2012 at 12:53 PM, Claudio Jeker
<cje...@diehard.n-r-g.com> wrote:
> On Thu, Mar 29, 2012 at 10:54:48AM -0430, Andres Perera wrote:
>> On Thu, Mar 29, 2012 at 10:38 AM, Paul de Weerd <we...@weirdnet.nl> wrote:
>> > On Thu, Mar 29, 2012 at 10:24:27AM -0430, Andres Perera wrote:
>> > | > Instead, you'll crank your file limits to... let me guess, unlimited?
>> > | >
>> > | > And when you hit the system-wide limit, then what happens?
>> > | >
>> > | > Then it is our system's problem, isn't it.
>> > | >
>> > |
>> > | i am not sure if you're suggesting that each program do getrlimit
>> > | and acquire resources based on that, because it's a pita
>> >
>> > Gee whiz, writing programs is hard! Let's go shopping!
>> >
>> > | what they could do is offer a reliable estimate (e.g. 5 open files per
>> > | tab required)
>> >
>> > Or just try to open a file, *CHECK THE RETURNED ERROR CODE* and (if
>> > any) *DEAL WITH IT*
>>
>> but we're only talking about one resource and one error condition
>
> OMG. System calls can fail. I'm shocked. How can anything work?!
>
>> write wrappers for open, malloc, etc
>
> Why wrappers? Just check the freaking return value and design your program
> to behave in case something goes wrong.

guess what: if you do this more than once in your program, you have a
wrapper candidate
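
not that the checking takes much effort. a minimal sketch of the sort of
wrapper meant here, in C; the name try_open() and the error policy are
made up for illustration, not taken from any real program:

/*
 * hypothetical wrapper: open(2) with the return value checked in one
 * place, so every call site doesn't repeat the same dance
 */
#include <err.h>
#include <errno.h>
#include <fcntl.h>

static int
try_open(const char *path, int flags)
{
	int fd = open(path, flags);

	if (fd == -1) {
		if (errno == EMFILE || errno == ENFILE)
			/* per-process or system-wide descriptor limit hit;
			 * a real program would release something and retry
			 * instead of falling over */
			warn("out of descriptors opening %s", path);
		else
			warn("open %s", path);
	}
	return fd;	/* -1 on failure, the caller decides what to do */
}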

>
>> avoiding errors regarding stack limits is not as easy
>
> Yes, so embrace them, design with failure in mind.
>
>> obviously there's no reason for: a. every application replicating
>> these wrappers (how many xmallocs have you seen, honest?) and b. the
>> system not providing a consistent api
>
> xmalloc is a dumb interface, since it terminates the process as soon as
> the first malloc fails. Sure, it is the right thing for processes with
> limited memory needs, but browsers are such pigs today that you should
> do better than just showing an "Oops, something went wrong" page on next
> startup.
>
>> after you're done writing all the wrappers for your crappy browser,
>> what do you do? notify the user that no resources can be allocated,
>> try pushing the soft limit first, whatever. they still have to re-exec
>> with higher limits
>
> Maybe you could also close some of those 999 keep-alive sessions and
> pre-load sessions you have open and retry. Seriously, why does a
> web browser need 1024 file descriptors to be open at the same time?
> Are you concurrently reading 500 homepages?

you are not expected to read 500 homepages at the same time, but you
*are* expected to be able to switch to any tab at any time, and the cost
of the system calls needed to reopen the relevant file descriptors is
unacceptable
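
for what it's worth, the "try pushing the soft limit first" part above is
only a handful of lines. a sketch, nothing more; it only raises the soft
limit toward the hard limit, and raising the hard limit itself is still a
login.conf or root matter, which is the actual argument here:

/* sketch: report RLIMIT_NOFILE and push the soft limit up to the hard
 * limit; nothing here touches the hard limit itself */
#include <sys/resource.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
		err(1, "getrlimit");

	printf("soft %llu hard %llu\n",
	    (unsigned long long)rl.rlim_cur,
	    (unsigned long long)rl.rlim_max);

	rl.rlim_cur = rl.rlim_max;
	if (setrlimit(RLIMIT_NOFILE, &rl) == -1)
		warn("setrlimit");

	return 0;
}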

>
>> why even bother?
>
> because modern browsers suck. They suck big time. They assume complete
> ownership of the system and think that consuming all resources just to
> show the latest animated gif from 4chan is the right thing.
>
>>
>> >
>> >
>> > Note that on a busy system, the ulimit is not the only thing holding
>> > you back. You may actually run into the maximum number of files the
>> > system can have open at any given time (sure, that's also tweakable).
>> > Just doing getrlimit isn't going to be sufficient...
>>
>> doesn't matter
>
> your attitude is the reason why we need multi-core laptops with 8GB of ram
> to play one game of tic-tac-toe.

until now it's been about the interface. glad that someone decided to
be honest and admit a bias towards the default low limits (and towards
fitting OSes on floppy disks, etc.)

:)
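
and since the system-wide ceiling came up as well: on OpenBSD that would
be kern.maxfiles, which a program can at least read via sysctl(3). again
only a sketch; bumping the value is a sysctl.conf / root matter, not
something an application gets to decide for itself:

/* sketch: read kern.maxfiles, the system-wide open-file ceiling */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	int mib[2] = { CTL_KERN, KERN_MAXFILES };
	int maxfiles;
	size_t len = sizeof(maxfiles);

	if (sysctl(mib, 2, &maxfiles, &len, NULL, 0) == -1)
		err(1, "sysctl kern.maxfiles");

	printf("kern.maxfiles = %d\n", maxfiles);
	return 0;
}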

>
> --
> :wq Claudio
