Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?

2009-05-21 Thread Jonathan McKeown
On Thursday 21 May 2009 23:37:20 Nate Eldredge wrote:
> Of course all these problems are solved, under any policy, by having more
> memory or swap.  But overcommit allows you to do more with less.

Or to put it another way, 90% of the problems that could be solved by having 
more memory can also be solved by pretending you have more memory and hoping 
no-one calls your bluff.

Jonathan


Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?

2009-05-21 Thread Daniel O'Connor
On Fri, 22 May 2009, Yuri wrote:
> Nate Eldredge wrote:
> > Suppose we run this program on a machine with just over 1 GB of
> > memory. The fork() should give the child a private "copy" of the 1
> > GB buffer, by setting it to copy-on-write.  In principle, after the
> > fork(), the child might want to rewrite the buffer, which would
> > require an additional 1GB to be available for the child's copy.  So
> > under a conservative allocation policy, the kernel would have to
> > reserve that extra 1 GB at the time of the fork(). Since it can't
> > do that on our hypothetical 1+ GB machine, the fork() must fail,
> > and the program won't work.
>
> I don't have a strong opinion for or against "memory overcommit". But I
> can imagine one could argue that fork with intent to exec is a faulty
> scenario that is a relic of the past. It can be replaced by some
> atomic method that would spawn the child without overcommitting.

If all you are going to do is call execve() then use vfork().

That explicitly does not copy the parent's address space.
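
A minimal sketch of that pattern (the child must do essentially nothing but
execve()/_exit(), per the caveats in vfork(2); "/bin/true" is just a stand-in
target here):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
  pid_t pid = vfork();
  if (pid == 0) {
    /* child borrows the parent's address space: exec immediately */
    execl("/bin/true", "true", (char *)NULL);
    _exit(127);  /* only _exit() is safe after a failed exec */
  } else if (pid > 0) {
    int status;
    waitpid(pid, &status, 0);
  } else {
    perror("vfork");
    return 1;
  }
  return 0;
}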

Also, your example is odd: if you have a program using 1 GB (RAM + swap) 
and you want to start another (in any way), then that is going to be 
impossible.

If you had a 750 MB process that forked and the child only modified 250 MB, 
you'd be all right, because the other pages would remain copy-on-write and 
never actually be duplicated.

-- 
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C




Re: Installation from USB pen

2009-05-21 Thread Daniel O'Connor
On Fri, 22 May 2009, Randy Bush wrote:
> i succeeded with putting 8-current snap on a pen and booting.  but i
> can not figure out how to tell it to use the pen drive for system
> image loads.
>
> do i have to back off to 7 and then upgrade forward after install?

I don't believe you can install from UFS unless you mount it first and 
then tell it to do an FS install.

I have a 7.x-based USB installer that is split in two - half FAT32, half 
UFS - and it works.

Having half FAT32 is handy if you need to edit/add stuff from Windows. 
It does make it a PITA to build the install key though.

-- 
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C




Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?

2009-05-21 Thread Matthew Dillon
There is no such thing as a graceful way to deal with running out of
memory.  What is a program supposed to do?  Even if it gracefully exits
it still fails to perform the function for which it was designed.  If
such a program is used in a script then the script fails as well.   Even
the best systems (e.g. space shuttle, mars rover, airplane control
systems) which try to deal with unexpected situations still have to
have the final option, that being a complete reset.  And even a complete
reset is no guarantee of recovery (as one of the original Airbus accidents
during an air show revealed, when the control systems got into a reset loop
and the pilot could not regain control of the plane).  The most robust
systems do things like multiple independent built-to-spec programs and
a voting system, which requires 10 times the manpower to code and test,
something you will likely never see in the open-source world or even
in most of the commercial application world.

In fact, it is nearly impossible to write code which gracefully fails
even if the intent is to gracefully fail (and even if one can figure
out what a workable graceful failure path would be). You would
have to build code paths to deal with the failure conditions,
significantly increasing the size of the code base, and you would have
to test every possible failure combination to exercise those code
paths to make sure they actually work as expected.  If you don't then
the code paths designed to deal with the failure will themselves
likely be full of bugs and make the problem worse.  People who try
to program this way but don't have the massive resources required
often wind up with seriously bloated and buggy code.

So if the system runs out of memory (meaning physical memory + all
available swap), having a random subset of programs of any size
start to fail will rapidly result in a completely unusable system
and only a reboot will get it back into shape.  At least until it
runs out of memory again.

--

The best one can do is make the failures more deterministic.  Killing
the largest program is one such mechanism.  Knowing how the system will
react makes it easier to restore the system without necessarily rebooting
it.  Of course there might have to be exceptions as you don't want
your X server to be the program chosen.  Generally, though, having some
sort of deterministic progression is going to be far better than having
half a dozen random programs which happen to be trying to allocate memory
suddenly get an unexpected memory allocation failure.

Also, it's really a non-problem.  Simply configure a lot of swap... like
8G or 16G if you really care.  Or 100G.  Then you *DO* get a graceful
failure which gives you time to figure out what is going on and fix it.
The graceful failure is that the system starts to page to swap more and
more heavily, getting slower and slower in the process, but doesn't
actually have to kill anything for minutes to hours depending on the
failure condition.

It's a lot easier to write code which reacts to a system which is
operating at less than ideal efficiency than it is to write code which
reacts to the failure of a core function (that of allocating memory).
One could even monitor swap use and ring the alarm bells if it goes above
a certain point.
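
As a rough illustration of that sort of monitoring (a sketch only, not
anything proposed here specifically), one could periodically sum the device
lines printed by swapinfo(8) and complain past a threshold; this assumes the
usual "Device 1K-blocks Used Avail Capacity" column layout:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
  FILE *p = popen("swapinfo -k", "r");
  if (p == NULL) {
    perror("popen");
    return 1;
  }
  char line[256];
  long total = 0, used = 0;
  fgets(line, sizeof line, p);              /* skip the header line */
  while (fgets(line, sizeof line, p) != NULL) {
    char dev[128];
    long blocks, u, avail;
    if (sscanf(line, "%127s %ld %ld %ld", dev, &blocks, &u, &avail) == 4 &&
        strcmp(dev, "Total") != 0) {        /* don't double-count the Total row */
      total += blocks;
      used += u;
    }
  }
  pclose(p);
  if (total > 0 && used * 100 / total > 50)
    fprintf(stderr, "warning: swap more than 50%% used (%ld of %ld KB)\n",
        used, total);
  return 0;
}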

Overcommit has never been the problem.  The problem is there is no way
a large system can gracefully deal with running out of memory, overcommit
or not.

-Matt



Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?

2009-05-21 Thread Thomas Hurst
* Nate Eldredge (neldre...@math.ucsd.edu) wrote:

> There may be a way to enable the conservative behavior; I know Linux
> has an option to do this, but am not sure about FreeBSD.

I seem to remember a patch to disable overcommit.  Here we go:

  http://people.freebsd.org/~kib/overcommit/

-- 
Thomas 'Freaky' Hurst
http://hur.st/


Re: Installation from USB pen

2009-05-21 Thread Randy Bush
>> i succeeded with putting 8-current snap on a pen and booting.  but i can
>> not figure out how to tell it to use the pen drive for system image
>> loads.
> What do you mean by system image loads? Does it load the kernel successfully
> but cannot find the root filesystem?

sorry.  no.  it wants the cd or ftp or ... to get the install pieces.
as it is a snapshot, there are none on net (that i can find).  but they
went onto the usb.  but i can not figure out how to tell it to get them
from there.

randy


Re: compiling system binutils as cross tools

2009-05-21 Thread Stanislav Sedov

On Thu, 21 May 2009 17:44:42 +0100
xorquew...@googlemail.com mentioned:

> On 2009-05-21 16:10:18, Stanislav Sedov wrote:
> > You can also try using devel/cross-binutils to build cross-tools for
> > x86_64-freebsd. Random people reported they're working fine.
> 
> Unfortunately, as noted in this thread:
> 
>   http://marc.info/?l=freebsd-hackers&m=124146166902690&w=2
> 
> Using that port works but creates a compiler that emits code
> that can't be assembled by the default system binutils. Not
> great for a port...
> 

Why not make this compiler use the fresh binutils from cross-binutils
instead of the system binutils? This would also allow supporting
newer processor families and architectures. Is it possible to tell
GNAT where to look for the binutils to assemble and link with?

-- 
Stanislav Sedov
ST4096-RIPE




Re: Installation from USB pen

2009-05-21 Thread Stanislav Sedov

On Thu, 21 May 2009 14:12:00 -0700
Randy Bush  mentioned:

> i succeeded with putting 8-current snap on a pen and booting.  but i can
> not figure out how to tell it to use the pen drive for system image
> loads.
> 

What do you mean by system image loads? Does it load the kernel successfully
but cannot find the root filesystem?

-- 
Stanislav Sedov
ST4096-RIPE




Re: about building the crosstools

2009-05-21 Thread M. Warner Losh
In message: <4a15d288.3060...@telenix.org>
Chuck Robey  writes:
: I got instructions from Warner about how to build my crosstools (the FreeBSD
: ones) and after a minor startup contretemps, things began to work better.  My
: problem is that on doing the linking step, I'm getting a complaint that it can't
: figure out how to build the /usr/cross/usr/lib/libc.a (/usr/cross being my
: tools destdir).  I don't know how to fix this in the build, so I'd appreciate
: any hints.  I mean, it *seems* to me that these tools are meant to run on my
: current host (i386), not the target (arm), so it really should already know about
: my /usr/lib/libc.a (or shared version), right?

You may have some contamination.  The xdev targets don't use
/usr/cross at all.  I'd blow that away entirely and try again.

Warner


about building the crosstools

2009-05-21 Thread Chuck Robey
I got instructions from Warner about how to build my crosstools (the FreeBSD
ones) and after a minor startup contretemps, things began to work better.  My
problem is that on doing the linking step, I'm getting a complaint that it can't
figure out how to build the /usr/cross/usr/lib/libc.a (/usr/cross being my
tools destdir).  I don't know how to fix this in the build, so I'd appreciate
any hints.  I mean, it *seems* to me that these tools are meant to run on my
current host (i386), not the target (arm), so it really should already know about
my /usr/lib/libc.a (or shared version), right?


Re: porting info for FreeBSD's kernel?

2009-05-21 Thread Chuck Robey
Alfred Perlstein wrote:
> * Chuck Robey  [090518 13:03] wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>>
>> I've been googling, trying to see if I can find notes regarding what needs
>> changing, in what order, to adapt the FreeBSD kernel to a new processor.  
>> Anyone
>> know where stuff like that can be found?
> 
> You need a cross compile toolchain of course, look into how FreeBSD
> is configured for the various arches.
> 
> Then I would suggest looking at the loaders, followed by 
> kern/init_main.c.  If you trace down init_main.c and some
> of the early sysinits that should give you an idea.
> 
> You might also be able to backtrack using CVS/svn to follow
> how mips or arm was done.
> 
> Note: freebsd has a decent cross-compile setup now, see
> "make universe" so things should be easier to get started.
> 

Thanks.  I will *definitely* read all the parts you've pointed me at; I won't be
deleting this mail, and I appreciate it.  I was asking on the llvm mailing list
about Cortex-A8 support.  What I got back says that it's not there yet, but it's
being worked on, as is -A9 support (there are definite differences).  So, any
crosstools needed today would have to be gcc, from a version at least as new as
the 4.3 branch (that's where they brought in the -A8 support).

The tool I got by building the FreeBSD crosstools was gcc 4.2.1, which isn't going
to do it for the Cortex-A8, and I had someone else (from a FreeBSD list) tell me
that bringing in a newer version of gcc wasn't terribly likely, and that they'd
want llvm instead.  I see 3 alternatives for a Cortex-A8 port: using a new gcc
port, waiting on the upgrade of llvm, or maybe deciding that the version of llvm
that's out now, with its v6 compatibility, would be good enough for the short
term.  Any idea which one to choose?  The only target that interests me is
the TI OMAP 3530 (Cortex-A8, among other parts).  Maybe the currently available
llvm is good enough, or maybe gcc-4.2.1 can creak along well enough for
the short term?  I need to understand this.

My own personal Pandora probably won't arrive on my doorstep for maybe as
long as 3 more months, so in the meantime, I think I will be reading all I can
get my hands on regarding llvm.  Maybe I can really learn enough to make a
difference.  In school, I concentrated very definitely on OSes (I've written 3
of them over the years, of quite varying levels of performance), so for
compilers, I'm relying on my old 1988 version of the Aho/Sethi/Ullman compilers
book.  If anyone knows a more modern book that will show me enough about
compilers to be useful, I'd really appreciate the name, maybe Amazon will let me
get a cheap used version.


Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?

2009-05-21 Thread Nate Eldredge

On Thu, 21 May 2009, Yuri wrote:


> Nate Eldredge wrote:
>> Suppose we run this program on a machine with just over 1 GB of memory. The
>> fork() should give the child a private "copy" of the 1 GB buffer, by
>> setting it to copy-on-write.  In principle, after the fork(), the child
>> might want to rewrite the buffer, which would require an additional 1 GB to
>> be available for the child's copy.  So under a conservative allocation
>> policy, the kernel would have to reserve that extra 1 GB at the time of the
>> fork(). Since it can't do that on our hypothetical 1+ GB machine, the
>> fork() must fail, and the program won't work.
>
> I don't have a strong opinion for or against "memory overcommit". But I can
> imagine one could argue that fork with intent to exec is a faulty scenario
> that is a relic of the past. It can be replaced by some atomic method that
> would spawn the child without overcommitting.


I would say rather it's a centerpiece of Unix design, with an unfortunate 
consequence.  Actually, historically this would have been much more of a 
problem than at present, since early Unix systems had much less memory, no 
copy-on-write, and no virtual memory (this came in with BSD, it appears; 
it's before my time.)


The modern "atomic" method we have these days is posix_spawn, which has a 
pretty complicated interface if you want to use pipes or anything.  It 
exists mostly for the benefit of systems whose hardware is too primitive 
to be able to fork() in a reasonable manner.  The old way to avoid the 
problem of needing this extra memory temporarily was to use vfork(), 
but this has always been a hack with a number of problems.  IMHO neither 
of these is preferable in principle to fork/exec.
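
A minimal sketch of the simple no-pipes case, assuming a system that provides
posix_spawnp(3) (and using "true" purely as a stand-in command):

#include <spawn.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>

extern char **environ;

int main(void) {
  pid_t pid;
  char *argv[] = { "true", NULL };

  /* spawn the child directly, with no fork()-style "copy" of our address space */
  int err = posix_spawnp(&pid, "true", NULL, NULL, argv, environ);
  if (err != 0) {
    fprintf(stderr, "posix_spawnp: %s\n", strerror(err));
    return 1;
  }
  int status;
  waitpid(pid, &status, 0);
  return 0;
}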


Note another good example is a large process that forks, but the child 
rather than exec'ing performs some simple task that writes to very little 
of its "copied" address space.  Apache does this, as Bernd mentioned. 
This also is greatly helped by having overcommit, but can't be 
circumvented by replacing fork() with something else.  If it really 
doesn't need to modify any of its shared address space, a thread can 
sometimes be used instead of a forked subprocess, but this has issues of 
its own.


Of course all these problems are solved, under any policy, by having more 
memory or swap.  But overcommit allows you to do more with less.


> Are there any situations other than fork (and mmap/sbrk) that would
> overcommit?


Perhaps, but I can't think of good examples offhand.

--

Nate Eldredge
neldre...@math.ucsd.edu


Re: Installation from USB pen

2009-05-21 Thread Randy Bush
i succeeded with putting 8-current snap on a pen and booting.  but i can
not figure out how to tell it to use the pen drive for system image
loads.

do i have to back off to 7 and then upgrade forward after install?

randy


Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?

2009-05-21 Thread Bernd Walter
On Thu, May 21, 2009 at 10:52:26AM -0700, Yuri wrote:
> Nate Eldredge wrote:
> >Suppose we run this program on a machine with just over 1 GB of 
> >memory. The fork() should give the child a private "copy" of the 1 GB 
> >buffer, by setting it to copy-on-write.  In principle, after the 
> >fork(), the child might want to rewrite the buffer, which would 
> >require an additional 1GB to be available for the child's copy.  So 
> >under a conservative allocation policy, the kernel would have to 
> >reserve that extra 1 GB at the time of the fork(). Since it can't do 
> >that on our hypothetical 1+ GB machine, the fork() must fail, and the 
> >program won't work.
> 
> I don't have a strong opinion for or against "memory overcommit". But I 
> can imagine one could argue that fork with intent to exec is a faulty 
> scenario that is a relic of the past. It can be replaced by some 
> atomic method that would spawn the child without overcommitting.
> 
> Are there any situations other than fork (and mmap/sbrk) that would 
> overcommit?

If your system has enough virtual memory to work without overcommitment,
it will run fine with overcommitment as well.
If you don't have enough memory, it can do much more with overcommitment.
A simple apache process needing 1 GB and serving 1000 clients would need
1 TB of swap without ever touching it.
The same goes for small embedded systems with limited swap.
So the requirement for overcommitment is not just a requirement of the old days.
Overcommitment is used more and more.
One example is snapshots, which are popular these days and can lead to
out-of-space failures if you rewrite a file with new data without growing
its length.
The old sparse-file concept is another, which can confuse unaware software.
And we have had geom_virstor for a while now.
Many modern databases do it as well.
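
As a tiny illustration of the sparse-file case mentioned above (plain POSIX
behaviour, nothing FreeBSD-specific): the file below claims 1 GB of length
while occupying almost no blocks, and filling in the hole later can fail with
ENOSPC:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
  int fd = open("sparse.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
  if (fd < 0) {
    perror("open");
    return 1;
  }
  /* seek 1 GB out and write one byte: the gap is a hole, not allocated blocks */
  if (lseek(fd, 1024L * 1024 * 1024, SEEK_SET) < 0 || write(fd, "x", 1) != 1) {
    perror("lseek/write");
    return 1;
  }
  close(fd);
  /* compare: ls -l sparse.dat (apparent size) vs. du sparse.dat (blocks used) */
  return 0;
}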

-- 
B.Walter  http://www.bwct.de
Modbus/TCP Ethernet I/O Baugruppen, ARM basierte FreeBSD Rechner uvm.


Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?

2009-05-21 Thread Yuri

Nate Eldredge wrote:
> Suppose we run this program on a machine with just over 1 GB of
> memory. The fork() should give the child a private "copy" of the 1 GB
> buffer, by setting it to copy-on-write.  In principle, after the
> fork(), the child might want to rewrite the buffer, which would
> require an additional 1 GB to be available for the child's copy.  So
> under a conservative allocation policy, the kernel would have to
> reserve that extra 1 GB at the time of the fork(). Since it can't do
> that on our hypothetical 1+ GB machine, the fork() must fail, and the
> program won't work.

I don't have a strong opinion for or against "memory overcommit". But I
can imagine one could argue that fork with intent to exec is a faulty
scenario that is a relic of the past. It can be replaced by some
atomic method that would spawn the child without overcommitting.

Are there any situations other than fork (and mmap/sbrk) that would
overcommit?


Yuri



Re: compiling system binutils as cross tools

2009-05-21 Thread Julian Elischer

Robert Watson wrote:
> On Thu, 21 May 2009, xorquew...@googlemail.com wrote:
>
>> How do I compile the system binutils (contrib/binutils) as i386 ->
>> x86_64 cross utils? That is, binutils that will run on an i386 host
>> but will produce x86_64 binaries?
>>
>> I'm trying to produce a bootstrapping compiler for a port and need to
>> get these working. I've spent a while reading Makefiles but would
>> rather get information from someone who actually knows rather than
>> waste *another* week on this stuff.
>>
>> I'd rather not compile the entire world if it can be avoided.
>
> Not really my area, but if you haven't found "make toolchain" and "make
> buildenv" then you might want to take a look.  Typically these will be
> combined with TARGET_ARCH=foo, and in your case foo is 'amd64'.  The
> former builds the toolchain required for the architecture, and the
> latter creates a shell environment with paths appropriately munged and
> environments appropriately set to cross-compile using that chain.
> Normally the toolchain step is part of our integrated
> buildworld/buildkernel/etc process, but you can also use it for other
> things with buildenv.


I munged that once to create a nested jail/chroot setup so that the
default toolchain was the cross set.  So if you did 'cc foo.c' you got a
cross binary.

If you needed a native cc you did it in the outside chroot.  Worked like
a charm.  From the outside, you just did 'chroot cross cc foo.c' to get a
cross binary.







Re: compiling system binutils as cross tools

2009-05-21 Thread xorquewasp
On 2009-05-21 16:10:18, Stanislav Sedov wrote:
> You can also try using devel/cross-binutils to build cross-tools for
> x86_64-freebsd. Random people reported they're working fine.

Unfortunately, as noted in this thread:

  http://marc.info/?l=freebsd-hackers&m=124146166902690&w=2

Using that port works but creates a compiler that emits code
that can't be assembled by the default system binutils. Not
great for a port...

xw


Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?

2009-05-21 Thread Nate Eldredge

On Thu, 21 May 2009, per...@pluto.rain.com wrote:


> Nate Eldredge wrote:
>> With overcommit, we pretend to give the child a writable private
>> copy of the buffer, in hopes that it won't actually use more of it
>> than we can fulfill with physical memory.
>
> I am about 99% sure that the issue involves virtual memory, not
> physical, at least in the fork/exec case.  The incidence of such
> events under any particular system load scenario can be reduced or
> eliminated simply by adding swap space.


True.  When I said "a system with 1GB of memory", I should have said "a 
system with 1 GB of physical memory + swap".


--

Nate Eldredge
neldre...@math.ucsd.edu


Re: compiling system binutils as cross tools

2009-05-21 Thread Stanislav Sedov

On Thu, 21 May 2009 10:53:05 +0100
xorquew...@googlemail.com mentioned:

> Hi.
> 
> How do I compile the system binutils (contrib/binutils) as i386 ->
> x86_64 cross utils? That is, binutils that will run on an i386 host but
> will produce x86_64 binaries?
> 
> I'm trying to produce a bootstrapping compiler for a port and need to
> get these working. I've spent a while reading Makefiles but would rather
> get information from someone who actually knows rather than waste
> *another* week on this stuff.
> 
> I'd rather not compile the entire world if it can be avoided.
> 

You can also try using devel/cross-binutils to build cross-tools for
x86_64-freebsd. Random people reported they're working fine.

-- 
Stanislav Sedov
ST4096-RIPE




Re: compiling system binutils as cross tools

2009-05-21 Thread xorquewasp
On 2009-05-21 11:20:21, Robert Watson wrote:
> Not really my area, but if you haven't found "make toolchain" and "make  
> buildenv" then you might want to take a look.  Typically these will be  
> combined with TARGET_ARCH=foo, and in your case foo is 'amd64'.  The 
> former builds the toolchain required for the architecture, and the latter 
> creates a shell environment with paths appropriately munged and 
> environments appropriately set to cross-compile using that chain.  
> Normally the toolchain step is part of our integrated 
> buildworld/buildkernel/etc process, but you can also use it for other 
> things with buildenv.

Thanks, 'make toolchain' looks like it'll work. 'make buildenv' gave me
the paths to the binaries I needed, so I can tell the compiler I'm working on
to use those for cross-compilation.

What tangled webs we weave...

cheers,
xw


Re: compiling system binutils as cross tools

2009-05-21 Thread Robert Watson


On Thu, 21 May 2009, xorquew...@googlemail.com wrote:

> How do I compile the system binutils (contrib/binutils) as i386 -> x86_64
> cross utils? That is, binutils that will run on an i386 host but will
> produce x86_64 binaries?
>
> I'm trying to produce a bootstrapping compiler for a port and need to get
> these working. I've spent a while reading Makefiles but would rather get
> information from someone who actually knows rather than waste *another* week
> on this stuff.
>
> I'd rather not compile the entire world if it can be avoided.


Not really my area, but if you haven't found "make toolchain" and "make 
buildenv" then you might want to take a look.  Typically these will be 
combined with TARGET_ARCH=foo, and in your case foo is 'amd64'.  The former 
builds the toolchain required for the architecture, and the latter creates a 
shell environment with paths appropriately munged and environments 
appropriately set to cross-compile using that chain.  Normally the toolchain 
step is part of our integrated buildworld/buildkernel/etc process, but you can 
also use it for other things with buildenv.


Robert N M Watson
Computer Laboratory
University of Cambridge


compiling system binutils as cross tools

2009-05-21 Thread xorquewasp
Hi.

How do I compile the system binutils (contrib/binutils) as i386 ->
x86_64 cross utils? That is, binutils that will run on an i386 host but
will produce x86_64 binaries?

I'm trying to produce a bootstrapping compiler for a port and need to
get these working. I've spent a while reading Makefiles but would rather
get information from someone who actually knows rather than waste
*another* week on this stuff.

I'd rather not compile the entire world if it can be avoided.

cheers,
xw


Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?

2009-05-21 Thread perryh
Nate Eldredge  wrote:
> For instance, consider the following program.

> this happens most of the time with fork() ...

It may be worthwhile to point out that one extremely common case is
the shell itself.  Even /bin/sh is large; csh (the default FreeBSD
shell) is quite a bit larger and bash larger yet.  The case of "big
program forks, and the child process execs a small program" arises
almost every time a shell command (other than a built-in) is executed.

> With overcommit, we pretend to give the child a writable private
> copy of the buffer, in hopes that it won't actually use more of it
> than we can fulfill with physical memory.

I am about 99% sure that the issue involves virtual memory, not
physical, at least in the fork/exec case.  The incidence of such
events under any particular system load scenario can be reduced or
eliminated simply by adding swap space.


Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?

2009-05-21 Thread Ilya Orehov
+--- Yuri, 2009-05-20 ---
| Seems like failing system calls (mmap and sbrk) that allocate memory is more
| graceful and would allow the program to at least issue a reasonable 
| error message.
| And more intelligent programs would be able to reduce used memory 
| instead of just dying.

Hi!

You can set a memory limit to achieve your goal:

tcsh% limit vmemoryuse 20M

In this case, a malloc() larger than the limit will return 0.
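
The same limit can be set from inside a program with setrlimit(2); a minimal
sketch, assuming RLIMIT_AS (the POSIX address-space limit, which is what
tcsh's vmemoryuse is meant to correspond to) is available:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/resource.h>

int main(void) {
  /* cap the address space at 20 MB, like "limit vmemoryuse 20M" */
  struct rlimit rl = { 20 * 1024 * 1024, 20 * 1024 * 1024 };
  if (setrlimit(RLIMIT_AS, &rl) != 0) {
    perror("setrlimit");
    return 1;
  }
  /* an allocation bigger than the limit now fails instead of overcommitting */
  void *p = malloc(100 * 1024 * 1024);  /* 100 MB */
  if (p == NULL)
    fprintf(stderr, "malloc: out of memory, as expected under the limit\n");
  else
    free(p);
  return 0;
}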

Ilya.

| 
| Yuri
| 
+-


Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?

2009-05-21 Thread Rayson Ho
Because the kernel is lazy!!

You can google for "lazy algorithm", or find an OS internals book and
read about the advantages of doing it this way...

Rayson



On Thu, May 21, 2009 at 1:32 AM, Yuri  wrote:
> Seems like failing system calls (mmap and sbrk) that allocate memory is more
> graceful and would allow the program to at least issue a reasonable error
> message.
> And more intelligent programs would be able to reduce used memory instead of
> just dying.
>
> Yuri
>


Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?

2009-05-21 Thread Nate Eldredge

On Wed, 20 May 2009, Yuri wrote:


> Seems like failing system calls (mmap and sbrk) that allocate memory is more
> graceful and would allow the program to at least issue a reasonable error
> message.
> And more intelligent programs would be able to reduce used memory instead of
> just dying.


It's a feature, called "memory overcommit".  It has a variety of pros and 
cons, and is somewhat controversial.  One advantage is that programs often 
allocate memory (in various ways) that they will never use, which under a 
conservative policy would result in that memory being wasted, or programs 
failing unnecessarily.  With overcommit, you sometimes allocate more 
memory than you have, on the assumption that some of it will not actually 
be needed.


Although memory allocated by mmap and sbrk usually does get used in fairly 
short order, there are other ways of allocating memory that are easy to 
overlook, and which may "allocate" memory that you don't actually intend 
to use.  Probably the best example is fork().


For instance, consider the following program.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define SIZE (1024*1024*1024) /* 1 GB */

int main(void) {
  char *buf = malloc(SIZE);   /* allocate 1 GB */
  memset(buf, 'x', SIZE);     /* touch the buffer so it is really backed */
  pid_t pid = fork();
  if (pid == 0) {
    /* child: immediately exec a tiny program, discarding the "copy" */
    execlp("true", "true", (char *)NULL);
    perror("true");
    _exit(1);
  } else if (pid > 0) {
    for (;;); /* parent: do work */
  } else {
    perror("fork");
    exit(1);
  }
  return 0;
}

Suppose we run this program on a machine with just over 1 GB of memory. 
The fork() should give the child a private "copy" of the 1 GB buffer, by 
setting it to copy-on-write.  In principle, after the fork(), the child 
might want to rewrite the buffer, which would require an additional 1GB to 
be available for the child's copy.  So under a conservative allocation 
policy, the kernel would have to reserve that extra 1 GB at the time of 
the fork(). Since it can't do that on our hypothetical 1+ GB machine, the 
fork() must fail, and the program won't work.


However, in fact that memory is not going to be used, because the child is 
going to exec() right away, which will free the child's "copy".  Indeed, 
this happens most of the time with fork() (but of course the kernel can't 
know when it will or won't.)  With overcommit, we pretend to give the 
child a writable private copy of the buffer, in hopes that it won't 
actually use more of it than we can fulfill with physical memory.  If it 
doesn't use it, all is well; if it does use it, then disaster occurs and 
we have to start killing things.


So the advantage is you can run programs like the one above on machines 
that technically don't have enough memory to do so.  The disadvantage, of 
course, is that if someone calls the bluff, then we kill random processes. 
However, this is not all that much worse than failing allocations: 
although programs can in theory handle failed allocations and respond 
accordingly, in practice they don't do so and just quit anyway.  So in 
real life, both cases result in disaster when memory "runs out"; with 
overcommit, the disaster is a little less predictable but happens much 
less often.


If you google for "memory overcommit" you will see lots of opinions and 
debate about this feature on various operating systems.


There may be a way to enable the conservative behavior; I know Linux has 
an option to do this, but am not sure about FreeBSD.  This might be useful 
if you are paranoid, or run programs that you know will gracefully handle 
running out of memory.  IMHO for general use it is better to have 
overcommit, but I know there are those who disagree.


--

Nate Eldredge
neldre...@math.ucsd.edu