> Caching is definitely worth doing but you don't always have
> the opportunity to do it. If you are copying a lot of files
> across, it would help quite a bit if you can just pipeline
> requests (or send fewer bundled requests). If you are copying
> very large files, streaming would help. When co
"Devon H. O'Dell" writes:
> determine where a node is placed is *not* cheap. In the end, an
> optimization that slows things down is not an optimization at all. You
There are many different kinds of optimization one can perform. One may
optimize compiled code for size, speed, simplicity, or rel
On Sat, 19 Feb 2011 16:15:47 EST erik quanstrom wrote:
> > > what is the goal?
> >
> > Better handling of latency at a minimum? If I were to do
> > this I would experiment with extending the channel concept.
>
> hmm. let me try again ... do you have a concrete goal?
> it's hard to know why a
On Sat Feb 19 15:10:58 EST 2011, bakul+pl...@bitblocks.com wrote:
> On Sat, 19 Feb 2011 10:09:08 EST erik quanstrom
> wrote:
> > > It is inherent to 9p (and RPC).
> >
> > please defend this. i don't see any evidence for this bald claim.
>
> We went over latency issues multiple times in the past but
On Sat, 19 Feb 2011 10:09:08 EST erik quanstrom wrote:
> > It is inherent to 9p (and RPC).
>
> please defend this. i don't see any evidence for this bald claim.
We went over latency issues multiple times in the past but
let us take your 80ms latency. You can get 12.5 rpc calls
through in 1 sec
On Saturday 19 of February 2011 11:34:19 Steve Simon wrote:
> > Benchmark utilities to measure the overhead of syscalls. It's cheating
> > to do for getpid, but for other things like gettimeofday, it's
> > *extremely* nice. Linux's gettimeofday(2) beats the socks off of the
> > rest of the time implementations.
it seems to me that trying Op (Octopus) on Plan 9 would be a logical first step.
On Fri, Feb 18, 2011 at 2:21 PM, Bakul Shah wrote:
> On Fri, 18 Feb 2011 13:06:43 PST John Floren wrote:
>> On Fri, Feb 18, 2011 at 12:15 PM, erik quanstrom wrote:
>> >> > i don't think that it makes sense to
> So why does replica use 9P? Because it's *The Plan 9 Protocol*. If
> *The Plan 9 Protocol* turns out to not serve our needs, we need to
> figure out why.
i appreciate the sentiment, but i think that's just taking it a wee bit
overboard. we don't pretend that 9p replaces http, ftp, smtp, etc.
ve
> The point I was trying to make (but clearly not clearly) was
> that simplicity and performance are often at cross purposes
> and a simple solution is not always "good enough". RPC
> (which is what 9p is) is simpler and perfectly fine when
> latencies are small but not when there is a lot of late
> Benchmark utilities to measure the overhead of syscalls. It's cheating
> to do for getpid, but for other things like gettimeofday, it's
> *extremely* nice. Linux's gettimeofday(2) beats the socks off of the
> rest of the time implementations. About the only faster thing is to
> get CPU speed and
> So why does replica use 9P? Because it's *The Plan 9 Protocol*. If
> *The Plan 9 Protocol* turns out to not serve our needs, we need to
> figure out why.
I really don't get this, what is the problem with replica's speed?
I run replica once every week or two and it typically runs for about
30 sec
afaik, templates might be inlined, static or shared... depending on
the compiler and the flags.
for gcc see:
http://gcc.gnu.org/onlinedocs/gcc/Template-Instantiation.html
On Fri, Feb 18, 2011 at 4:35 PM, David Leimbach wrote:
>
>
> Sent from my iPhone
>
> On Feb 18, 2011, at 11:15 AM, Bakul Sha
On Fri, 18 Feb 2011 13:06:43 PST John Floren wrote:
> On Fri, Feb 18, 2011 at 12:15 PM, erik quanstrom wrote:
> >> > i don't think that it makes sense to say that since replica
> >> > is slow and hg/rsync are fast, it follows that 9p is slow.
> >>
> >> It is the other way around. 9p can't handle latency so on
On Fri, Feb 18, 2011 at 12:15 PM, erik quanstrom wrote:
>> > i don't think that it makes sense to say that since replica
>> > is slow and hg/rsync are fast, it follows that 9p is slow.
>>
>> It is the other way around. 9p can't handle latency so on
>> high latency pipes programs using 9p won't be as fast as
On Fri, Feb 18, 2011 at 12:10 PM, Bakul Shah wrote:
> Templates encourage inlining. There is at least one template
> libraries where the bulk of code is implemented in separate
> .cc files (using void* tricks), used by some embedded
> products. But IIRC the original STL from sgi was all in .h
> files.
> > i don't think that it makes sense to say that since replica
> > is slow and hg/rsync are fast, it follows that 9p is slow.
>
> It is the other way around. 9p can't handle latency so on
> high latency pipes programs using 9p won't be as fast as
> programs using streaming (instead of rpc). Grant
On Fri, 18 Feb 2011 11:35:18 PST David Leimbach wrote:
> >> C++ inlines a lot because microbenchmarks improve, but inline every
> >> modest function in a big program and you make the binary much bigger
> >> and blow the i-cache.
> >
> > That's a compiler fault. Surely modern compilers need to be
2011/2/18 erik quanstrom :
>> DKIM), etc., it's just not really feasible on commodity hardware. (Of
>> course, these days, operating systems and RAID controllers with
>> battery-backed caches make it impossible to guarantee that your
>> message ever ends up in persistent storage, but that's still a
On Fri, 18 Feb 2011 14:26:32 EST erik quanstrom wrote:
> > On a slightly different tangent, 9p is simple but it doesn't
> > handle latency very well. To make efficient use of long fat
> > pipes you need more complex mechanisms -- there is no getting
> > around that fact. rsync & hg in spite of their complexity
Sent from my iPhone
On Feb 18, 2011, at 11:15 AM, Bakul Shah wrote:
> On Fri, 18 Feb 2011 10:46:51 PST Rob Pike wrote:
>> The more you optimize, the better the odds you slow your program down.
>> Optimization adds instructions and often data, in one of the
>> paradoxes of engineering. In time, then, what you gain by
> DKIM), etc., it's just not really feasible on commodity hardware. (Of
> course, these days, operating systems and RAID controllers with
> battery-backed caches make it impossible to guarantee that your
> message ever ends up in persistent storage, but that's still a small
bb cache is persistent
2011/2/18 ron minnich :
> On Fri, Feb 18, 2011 at 9:32 AM, erik quanstrom wrote:
>
>> wire speed is generally considered "good enough". ☺
Touché.
> depends on field of use. In my biz everyone hits wire speed, and the
> question from there is: how much of the CPU are you eating to get that
> wire speed.
> On a slightly different tangent, 9p is simple but it doesn't
> handle latency very well. To make efficient use of long fat
> pipes you need more complex mechanisms -- there is no getting
> around that fact. rsync & hg in spite of their complexity
> beat the pants off replica. Their cache behavior
2011/2/18 Rob Pike :
> The more you optimize, the better the odds you slow your program down.
> Optimization adds instructions and often data, in one of the
> paradoxes of engineering. In time, then, what you gain by
> "optimizing" increases cache pressure and slows the whole thing down.
>
> C++ inlines a lot because microbenchmarks improve, but inline every
On Fri, 18 Feb 2011 10:46:51 PST Rob Pike wrote:
> The more you optimize, the better the odds you slow your program down.
> Optimization adds instructions and often data, in one of the
> paradoxes of engineering. In time, then, what you gain by
> "optimizing" increases cache pressure and slows the whole thing down.
The more you optimize, the better the odds you slow your program down.
Optimization adds instructions and often data, in one of the
paradoxes of engineering. In time, then, what you gain by
"optimizing" increases cache pressure and slows the whole thing down.
C++ inlines a lot because microbenchmarks improve, but inline every
modest function in a big program and you make the binary much bigger
and blow the i-cache.
> i take a different view of performance.
>
> performance is like scotch. you always want better scotch,
> but you only upgrade if the stuff you're drinking is a problem.
>
> - erik
Awesome. That quote is going on my office door below the Tanenbaum
quote on bandwidth and station wagons!
On Fri, Feb 18, 2011 at 9:21 AM, erik quanstrom wrote:
> linux optimization is a ratrace. you are only judged on
> the immediate effect on your subsystem, not the system
> as a whole. so unless you play the game, your system will
> appear to regress over time as other optimizers take resources
>
On Fri, Feb 18, 2011 at 9:32 AM, erik quanstrom wrote:
> wire speed is generally considered "good enough". ☺
depends on field of use. In my biz everyone hits wire speed, and the
question from there is: how much of the CPU are you eating to get that
wire speed.
It's a very tangled thicket.
ron
> I'd be surprised if things were dissimilar for you at Coraid -- and I
> certainly *am not* implying that you guys have poor performance. I'm
> just saying if you went to your customers and asked, "Given the choice
> between something that is the same as what you have now, and something
> that's f
> > i wonder if that is uniformly faster. consider that
> > making reads of that page coherent enough on a
> > big multiprocessor and making sure there's not too
> > much interprocesser skew might be slower than a
> > system call.
>
> Real world tests show that it is consistently faster. It's pro
On Fri, Feb 18, 2011 at 12:07 PM, erik quanstrom wrote:
>> The high level overview is that it is stored in a shared page, mapped
>> into each new process's memory space at start-up. The kernel is never
>> entered; there are no context switches. The kernel has a timer that
>> updates this page atomically.
> The kernel has a timer that
> updates this page atomically.
which timer updates the page even when nobody is interested in knowing
what the time is, increasing the noise in the system[1]. i still keep
graphs of a full-blown plan9 cpu server with users logged in and close
to 200 running processes
2011/2/18 andrey mirtchovski :
>> I think it's time that we do some real-world style benchmarks on
>> multiple systems for Plan 9 versus other systems. I'd be interested in
>
> Ron did work measuring syscall costs and latencies in plan9.
I would love to duplicate that across multiple systems doing
2011/2/18 erik quanstrom :
>> The high level overview is that it is stored in a shared page, mapped
>> into each new process's memory space at start-up. The kernel is never
>> entered; there are no context switches. The kernel has a timer that
>> updates this page atomically.
>
> i wonder if that is uniformly faster. consider that
2011/2/18 erik quanstrom :
>> Arguing that performance is unimportant is counterintuitive. It
>> certainly is. Arguing that it is unimportant if it causes unnecessary
>> complexity has merit. Defining when things become "unnecessarily
>> complex" is important to the argument. Applications with time
> The high level overview is that it is stored in a shared page, mapped
> into each new process's memory space at start-up. The kernel is never
> entered; there are no context switches. The kernel has a timer that
> updates this page atomically.
i wonder if that is uniformly faster. consider that
> I think it's time that we do some real-world style benchmarks on
> multiple systems for Plan 9 versus other systems. I'd be interested in
Ron did work measuring syscall costs and latencies in plan9.
2011/2/18 dexen deVries :
> On Friday, February 18, 2011 04:15:10 pm you wrote:
>> Benchmark utilities to measure the overhead of syscalls. It's cheating
>> to do for getpid, but for other things like gettimeofday, it's
>> *extremely* nice. Linux's gettimeofday(2) beats the socks off of the
>> rest of the time implementations.
On Friday, February 18, 2011 04:15:10 pm you wrote:
> Benchmark utilities to measure the overhead of syscalls. It's cheating
> to do for getpid, but for other things like gettimeofday, it's
> *extremely* nice. Linux's gettimeofday(2) beats the socks off of the
> rest of the time implementations. About the only faster thing is to
> get CPU speed and
> Arguing that performance is unimportant is counterintuitive. It
> certainly is. Arguing that it is unimportant if it causes unnecessary
> complexity has merit. Defining when things become "unnecessarily
> complex" is important to the argument. Applications with timers (or
> doing lots of logging)
2011/2/18 erik quanstrom :
>> I know we're fond of bashing people who need to eke performance out of
>> systems, and a lot of time it's all in good fun. There's little
>> justification for getpid, but getpid isn't the only implementor of
>> this functionality. For other interfaces, it definitely ma
> I know we're fond of bashing people who need to eke performance out of
> systems, and a lot of time it's all in good fun. There's little
> justification for getpid, but getpid isn't the only implementor of
> this functionality. For other interfaces, it definitely makes sense to
> speed up the sys
2011/2/18 dexen deVries :
> On Friday, February 18, 2011 02:29:54 pm erik quanstrom wrote:
>> so this is a complete waste of time if forks > getpids.
>> and THREAD_GETMEM must allocate memory. so
>> the first call isn't exactly cheap. aren't they optimizing
>> for bad programming?
>>
>> not only that, ... from getpid(2)
Sent from my iPhone
On Feb 18, 2011, at 5:45 AM, dexen deVries wrote:
> On Friday, February 18, 2011 02:29:54 pm erik quanstrom wrote:
>> so this is a complete waste of time if forks > getpids.
>> and THREAD_GETMEM must allocate memory. so
>> the first call isn't exactly cheap. aren't they optimizing
On Friday, February 18, 2011 02:29:54 pm erik quanstrom wrote:
> so this is a complete waste of time if forks > getpids.
> and THREAD_GETMEM must allocate memory. so
> the first call isn't exactly cheap. aren't they optimizing
> for bad programming?
>
> not only that, ... from getpid(2)
>
> NOTES
so this is a complete waste of time if forks > getpids.
and THREAD_GETMEM must allocate memory. so
the first call isn't exactly cheap. aren't they optimizing
for bad programming?
not only that, ... from getpid(2)
NOTES
Since glibc version 2.3.4, the glibc wrapper function for getpid()
caches PIDs, so as to avoid additional system calls when a process
calls getpid() repeatedly.
Best recent c99 example:
int foo[] = {
[0] = 1,
[1] = 2,
[2] = 4,
[3] = 8,
[4] = 16,
[5] = 32
};
I shudder to think about foo[6].
Paul
On Thursday, February 17, 2011, ron minnich wrote:
> I was looking at another fine example of modern programming from glibc
> and just had to share
I was looking at another fine example of modern programming from glibc
and just had to share it.
Where does the getpid happen? It's anyone's guess. This is just so
readable too ... I'm glad they went to such effort to optimize getpid.
ron
#ifndef NOT_IN_libc
static inline __attribute__((always_inline))
> > > Or something equivalent. Example: How do you know moving an
> > > expression out of a for loop is valid? The optimizer needs to
> > > understand the control flow.
> >
> > is this still a useful thing to be doing?
>
> Yes.
what's your argument?
my argument is that the cpu is so fast relati
On Thu, 03 Feb 2011 15:33:57 EST erik quanstrom wrote:
> > I must also say llvm has a lot of functionality. But even so
> > there is a lot of bloat. Let me just say the bloat is due to
> > many factors but it has far *less* to do with graphs.
> > Download llvm and take a peek. I think the chosen language
> $ size /usr/local/bin/clang
> text data bss dec hex filename
> 22842862 1023204 69200 23935266 16d3922 /usr/local/bin/clang
"It is interesting to note the 5 minutes reduction in system time. I
assume that this is in part because of the builtin assembler."
-- http
On Thu, 3 Feb 2011 21:32:24 +, Steve Simon wrote:
> I don't know if f2c meets your needs, but it has always worked.
As compared to modern fortran compilers, it is basically a toy.
But he did say some of his source is in ratfor,
I am pretty sure f2c would be happy with ratfor's output.
years ago I supported the pafec FE package - ten
> > I don't know if f2c meets your needs, but it has always worked.
>
>
> As compared to modern fortran compilers, it is basically a toy.
>
But he did say some of his source is in ratfor,
I am pretty sure f2c would be happy with ratfor's output.
years ago I supported the pafec FE package - ten
On Thu, Feb 3, 2011 at 12:49 PM, Federico G. Benavento
wrote:
> I don't know if f2c meets your needs, but it has always worked.
As compared to modern fortran compilers, it is basically a toy.
ron
I don't know if f2c meets your needs, but it has always worked.
On Thu, Feb 3, 2011 at 9:07 AM, EBo wrote:
> On Thu, 3 Feb 2011 10:38:30 +, C H Forsyth wrote:
>>
>> it's not just the FORTRAN but supporting libraries, sometimes large ones,
>> including ones in C++, are often required as well.
> I must also say llvm has a lot of functionality. But even so
> there is a lot of bloat. Let me just say the bloat is due to
> many factors but it has far *less* to do with graphs.
> Download llvm and take a peek. I think the chosen language
> and the habits it promotes and the "impedance match"
On Thu, 03 Feb 2011 13:54:05 EST erik quanstrom wrote:
> On Thu Feb 3 13:33:52 EST 2011, bakul+pl...@bitblocks.com wrote:
> > On Thu, 03 Feb 2011 13:11:07 EST erik quanstrom wrote:
> > > > I agree with their goal but not its execution. I think a
> > > > toolkit for manipulating graph based
On Thu Feb 3 13:33:52 EST 2011, bakul+pl...@bitblocks.com wrote:
> On Thu, 03 Feb 2011 13:11:07 EST erik quanstrom
> wrote:
> > > I agree with their goal but not its execution. I think a
> > > toolkit for manipulating graph based program representations
> > > to build optimizing compilers is a
On Thu, Feb 3, 2011 at 10:21 AM, wrote:
> EBo writes:
>
>> Ah. Thanks for the info. I asked because some of the physicists and
>> atmospheric scientists I work with are likely to insist on having
>> FORTRAN. I still have not figured how I will deal with that if at
>> all.
>
> I thought those folks used languages like Matlab & Mathematica for
On Thu, 03 Feb 2011 13:11:07 EST erik quanstrom wrote:
> > I agree with their goal but not its execution. I think a
> > toolkit for manipulating graph based program representations
> > to build optimizing compilers is a great idea but did they
> > do it in C++?
>
> are you sure that the problem
Consider what `stalin' does in about 3300 lines of Scheme
> code. It translates R4RS scheme to C and takes a lot of time
> doing so but the code is generates is blazingly fast. The
> kind of globally optimized C code you or I wouldn't have the
> patience to write. Or the ability to keep all that co
EBo writes:
> Ah. Thanks for the info. I asked because some of the physicists and
> atmospheric scientists I work with are likely to insist on having
> FORTRAN. I still have not figured how I will deal with that if at
> all.
I thought those folks used languages like Matlab & Mathematica for
an
> I agree with their goal but not its execution. I think a
> toolkit for manipulating graph based program representations
> to build optimizing compilers is a great idea but did they
> do it in C++?
are you sure that the problem isn't the graph representation?
gcc also takes a graph-based approac
On Thu, 03 Feb 2011 07:08:57 PST David Leimbach wrote:
> On Wednesday, February 2, 2011, erik quanstrom wrote:
> >> It is a C/C++/Obj-C compiler & does static analysis, has
> >> backends for multiple processor types as well as C as a
> >> target, a lot of optimization tricks etc. See llvm.org. But
> >> frankly, I think they have lost the plot.
To be fair, gcc, g++ and gobjc combined are actually bigger than clang+llvm.
At least on my system. So it could have been worse.
2011/2/3 David Leimbach
> On Wednesday, February 2, 2011, erik quanstrom
> wrote:
> >> It is a C/C++/Obj-C compiler & does static analysis, has
> >> backends for multiple processor types as well as C as a
On Wednesday, February 2, 2011, erik quanstrom wrote:
>> It is a C/C++/Obj-C compiler & does static analysis, has
>> backends for multiple processor types as well as C as a
>> target, a lot of optimization tricks etc. See llvm.org. But
>> frankly, I think they have lost the plot. C is basically a
>> portable assembly programming language
On Thu, 3 Feb 2011 10:38:30 +, C H Forsyth wrote:
it's not just the FORTRAN but supporting libraries, sometimes large
ones,
including ones in C++, are often required as well. i'd concluded that
cross-compilation was currently the only effective route.
i hadn't investigated whether something
> Ah. Thanks for the info. I asked because some of the physicists and
> atmospheric scientists I work with are likely to insist on having
> FORTRAN. I still have not figured how I will deal with that if at all.
it's not just the FORTRAN but supporting libraries, sometimes large ones,
including
On Thu, Feb 03, 2011 at 03:47:17AM -0600, EBo wrote:
>
> Ah. Thanks for the info. I asked because some of the physicists and
> atmospheric scientists I work with are likely to insist on having
> FORTRAN. I still have not figured how I will deal with that if at
> all.
>
If the cost can be met, p
On Thu, 3 Feb 2011 09:46:00 +, Charles Forsyth wrote:
FORTRAN H Enhanced was an early optimising compiler.
FORTRAN H for System/360, then FORTRAN H Extended for System/370;
FORTRAN H Enhanced added further insight to get better code.
Ah. Thanks for the info. I asked because some of the physicists and
atmospheric scientists I work with are likely to insist on having
FORTRAN.
FORTRAN H Enhanced was an early optimising compiler.
FORTRAN H for System/360, then FORTRAN H Extended for System/370;
FORTRAN H Enhanced added further insight to get better code.
On Thu, 3 Feb 2011 08:35:53 +, Charles Forsyth wrote:
It is a C/C++/Obj-C compiler & does static analysis, has
backends for multiple processor types as well as C as a
target, a lot of optimization tricks etc.
... FORTRAN H Enhanced did so much with so little! ...
Is there a compiler that
>It is a C/C++/Obj-C compiler & does static analysis, has
>backends for multiple processor types as well as C as a
>target, a lot of optimization tricks etc.
22mbytes is still a lot of "etc.". i've no objection
to optimisations big and small, but that still wouldn't explain
the size (to me). FORTRAN H Enhanced did so much with so little!
> It is a C/C++/Obj-C compiler & does static analysis, has
> backends for multiple processor types as well as C as a
> target, a lot of optimization tricks etc. See llvm.org. But
> frankly, I think they have lost the plot. C is basically a
> portable assembly programming language & in my highly b
On Wed, Feb 2, 2011 at 6:16 PM, Bakul Shah
> wrote:
> On Thu, 03 Feb 2011 00:52:35 GMT Charles Forsyth
> wrote:
> > > >$ size /usr/local/bin/clang
> > > > text data bss dec hex filename
> > > > 22842862 1023204 69200 23935266 16d3922 /usr/local/bin/clang
> >
>
On Thu, 03 Feb 2011 00:52:35 GMT Charles Forsyth wrote:
> > >$ size /usr/local/bin/clang
> > > text data bss dec hex filename
> > > 22842862 1023204 69200 23935266 16d3922 /usr/local/bin/clang
>
> i suppose a more useful comment might be a question:
> how do
On Wed, Feb 2, 2011 at 4:52 PM, Charles Forsyth wrote:
> i suppose a more useful comment might be a question:
> how does a C compiler get to be that big? what is all that code doing?
iterators, string objects, and a full set of C macros that ensure
boundary conditions and improve interfaces.
ron
> >$ size /usr/local/bin/clang
> > text data bss dec hex filename
> > 22842862 1023204 69200 23935266 16d3922 /usr/local/bin/clang
i suppose a more useful comment might be a question:
how does a C compiler get to be that big? what is all that code doing?
On Wed Feb 2 19:19:13 EST 2011, fors...@terzarima.net wrote:
> >$ size /usr/local/bin/clang
> > text data bss dec hex filename
> > 22842862 1023204 69200 23935266 16d3922 /usr/local/bin/clang
>
> impressive. certainly in the sense of `makes quite a dent if dropped'
>you'll hear people call [fringe benefits] "French Benefits".
i did not expect that! i'd have guessed: `cheese'.
>$ size /usr/local/bin/clang
> text data bss dec hex filename
> 22842862 1023204 69200 23935266 16d3922 /usr/local/bin/clang
impressive. certainly in the sense of `makes quite a dent if dropped'.
On Wed, 2011-02-02 at 15:11 -0500, erik quanstrom wrote:
> > start := now();
> > while (now() < start + 2hours);
> >
> > You don't expect GC to be able to trigger, right?
>
> i sure do.
Ah. Interesting. Who's done that?
jcc
> start := now();
> while (now() < start + 2hours);
>
> You don't expect GC to be able to trigger, right?
i sure do.
- erik
On Wed, 2011-02-02 at 14:31 -0500, erik quanstrom wrote:
> > I don't follow. Garbage collection certainly can be done in a library
> > (e.g., Boehm). GC is in my experience normally triggered by
> >
> > * Allocation --- which is a function call in C
> > * Explicit call to the `garbage collect now' entry point in the
> > standard library.
On Feb 2, 2011, at 1:31 PM, erik quanstrom wrote:
> i think of it this way, the janitor doesn't insist that the factory shut
> down so he can sweep. he waits for the factory to be idle, and then
> sweeps.
Clearly I've been working on the wrong floors. That or all the janitors I know
are using
> I don't follow. Garbage collection certainly can be done in a library
> (e.g., Boehm). GC is in my experience normally triggered by
>
> * Allocation --- which is a function call in C
> * Explicit call to the `garbage collect now' entry point in the
> standard library. A fu
"BCPL makes C look like a very high-level language and provides
absolutely no type checking or run-time support."
B. Stroustrup, The Design and Evolution of C++, 1994
"C++ was designed to be used in a rather traditional compilation and
run-time environment, the C programming environment on the U
On Wed, 02 Feb 2011 09:45:56 PST David Leimbach wrote:
>
> Well if I were funded and had an infinite amount of time I'd think LLVM for
> Plan 9 would be excellent, as well as Go on LLVM :-).
llvm port would need c++.
$ size /usr/local/bin/clang
text data bss dec hex filename
On Wed, 2011-02-02 at 13:21 -0500, erik quanstrom wrote:
> > A runtime system is just a library whose entry points are language
> > keywords.[1] In go, dynamic allocation, threads, channels, etc. are
> > accessed via language features, so the libraries that implement those
> > things are considered part of the RTS. That's a terminological
On Wed, Feb 02, 2011 at 10:26:34AM -0800, David Leimbach wrote:
> On Wed, Feb 2, 2011 at 10:07 AM, wrote:
>
> > On Wed, Feb 02, 2011 at 09:47:01AM -0800, David Leimbach wrote:
> > >
> > > Wait, isn't it "the proof is in the *pudding*"? YOU MEAN WE DON'T GET
> > > FRENCH BENEFITS!?!
> >
> > Pleas
On Wednesday, February 2, 2011, erik quanstrom wrote:
>> Also, from this point of view, could pthreads be considered runtime for C?
>
> no. then every library/os function ever bolted onto
> c would be "part of the c runtime". clearly this isn't
> the case and pthreads are not specified in the c
> Also, from this point of view, could pthreads be considered runtime for C?
no. then every library/os function ever bolted onto
c would be "part of the c runtime". clearly this isn't
the case and pthreads are not specified in the c standard.
it might be part of /a/ runtime, but not the c runtime.
On Wed, Feb 2, 2011 at 10:21 AM, erik quanstrom wrote:
> > A runtime system is just a library whose entry points are language
> > keywords.[1] In go, dynamic allocation, threads, channels, etc. are
> > accessed via language features, so the libraries that implement those
> > things are considered
On Wed, Feb 2, 2011 at 10:03 AM, erik quanstrom wrote:
> > Where did your C compiler come from? Someone probably compiled it with a
> C
> > compiler. Bootstrapping is a fact of life as a new compiler can't just
> be
> > culled from /dev/random or willed into existence otherwise. It takes a
> pl
> A runtime system is just a library whose entry points are language
> keywords.[1] In go, dynamic allocation, threads, channels, etc. are
> accessed via language features, so the libraries that implement those
> things are considered part of the RTS. That's a terminological
> difference only fro
On Wed, Feb 2, 2011 at 10:07 AM, wrote:
> On Wed, Feb 02, 2011 at 09:47:01AM -0800, David Leimbach wrote:
> >
> > Wait, isn't it "the proof is in the *pudding*"? YOU MEAN WE DON'T GET
> > FRENCH BENEFITS!?!
>
> Please explain.
>
I was just pointing out something that happens a lot in our speech
On Wed, Feb 2, 2011 at 9:50 AM, erik quanstrom wrote:
> > Even C has a runtime. Perhaps you should look more into how programming
> > languages are implemented :-). C++ has one too, especially in the wake
> of
> > exceptions and such.
>
> really? what do you consider to be the c runtime?
>
i do
On Wed, 2011-02-02 at 12:50 -0500, erik quanstrom wrote:
> > Even C has a runtime. Perhaps you should look more into how programming
> > languages are implemented :-). C++ has one too, especially in the wake of
> > exceptions and such.
>
> really? what do you consider to be the c runtime?
> i d
On Wed, Feb 02, 2011 at 09:47:01AM -0800, David Leimbach wrote:
>
> Wait, isn't it "the proof is in the *pudding*"? YOU MEAN WE DON'T GET
> FRENCH BENEFITS!?!
Please explain.
--
Thierry Laronde
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250