On Friday, 16 June 2017 at 16:33:56 UTC, Russel Winder wrote:
gst-inspect-1.0 is an executable that comes with the
installation, however that is done. What are you thinking of
when saying "ported"?
gst-inspect is a good demonstration of iteration through the
available gstreamer elements and
On Friday, 16 June 2017 at 06:45:38 UTC, Russel Winder wrote:
Welcome to the group of people using GStreamer from D. I
suspect I may be the only other member of that club.
Looks like gst-inspect hasn't been ported... I'm looking at that
now.
Wow! I hadn't tried this gtkd library before. I was hunting for
the gstreamer support in particular.
The hello_world alsa-sink audio example failed on Windows. The
debugger indicates no sink, which I guess is reasonable.
With very little effort, though, I converted the hello_world
example to genera
I noticed some discussion of Cartesian indexes in Julia, where
the index is a tuple, along with some discussion of optimizing
the index created for cache efficiency. I could find foreach(ref
val, m.byElement()), but didn't find an example that returned a
tuple index. Is that supported?
htt
The tnfox cross-platform toolkit had some solution for per-thread
event loops. I believe this was the demo:
https://github.com/ned14/tnfox/blob/master/TestSuite/TestEventLoops/main.cpp
On Friday, 29 April 2016 at 10:10:26 UTC, sigod wrote:
How about `assumeSafeAppend`? Does it have any positive impact
on performance?
assumeSafeAppend made it even slower ... about 20x instead of 10x
worse than the indexed assign. Release build, win32.
I timed some code recently and found that .reserve made almost no
improvement when appending. It appears that the actual change to
the length by the append had a very high overhead of something
over 200 instructions executed, regardless of whether the .reserve was
done. This was a simple append to an
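The measurement above can be reproduced with a sketch along these lines; the element count, the StopWatch-based timing, and the comparison against indexed assignment are my own assumptions, not the original benchmark code.

```d
// Hypothetical re-creation of the append-vs-indexed-assign timing.
import std.stdio;
import std.datetime.stopwatch : StopWatch, AutoStart;

void main()
{
    enum n = 1_000_000;

    int[] a;
    a.reserve(n);                 // pre-allocates capacity...
    auto sw = StopWatch(AutoStart.yes);
    foreach (i; 0 .. n)
        a ~= i;                   // ...but each append still pays the length-update overhead
    writeln("append with reserve: ", sw.peek.total!"msecs", " msecs");

    auto b = new int[](n);
    sw.reset();
    foreach (i; 0 .. n)
        b[i] = i;                 // indexed assign: no per-element bookkeeping
    writeln("indexed assign:      ", sw.peek.total!"msecs", " msecs");

    assert(a.length == n && b[n - 1] == n - 1);
}
```

On a release build one would expect the indexed-assign loop to be markedly faster, consistent with the roughly 10x gap reported above.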
Seems like there should be an extra level to the version
statement, something like version(arch,x86).
I must be missing something about the intended use of the version
statement.
On Friday, 19 February 2016 at 14:26:25 UTC, Steven Schveighoffer
wrote:
Try ub[0].length = 3. You are trying to change the length on
one of the static arrays.
Yes, right, these compile. I was surprised it wouldn't accept the
append with just an int.
int[1][][1] ubb;
ubb[0].
On Friday, 19 February 2016 at 07:59:29 UTC, Jonathan M Davis
wrote:
.. Or you could do something really wonky like
auto arr = new int[][2][](5);
which would be a dynamic array of length 5 which holds static
arrays of length 2 which hold dynamic arrays which are null.
In my case, int [1]
Strange to me that this compiles, since I would expect there to
be some C-like limitation on the position of the unspecified
dimension. Is allowing this somehow useful?
int[1][][1] ub;
writeln("ub",ub);
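For what it's worth, the declaration is consistent once the type is read right to left; a minimal sketch (my own example, not from the thread):

```d
// int[1][][1]: a static array of length 1, holding a dynamic array,
// whose elements are static int[1] arrays.
import std.stdio;

void main()
{
    int[1][][1] ub;
    assert(ub[0].length == 0);   // the middle (dynamic) dimension starts empty

    int[1] e = [42];
    ub[0] ~= e;                  // appending grows only the dynamic dimension
    int[1] f = [7];
    ub[0] ~= f;
    writeln("ub", ub);           // prints ub[[[42], [7]]]
}
```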
I'm playing with the example below. I noticed a few things.
1. The ndslice didn't support the extra index, i, in the foreach,
so I had to add the extra i, j.
2. I couldn't figure out a way to use sliced on the original 'a'
array. Is slicing only available on 1 dim arrays?
3. Sliced parameter order is
On Monday, 11 January 2016 at 00:50:37 UTC, Ilya Yaroshenko wrote:
I will add such function. But it is not safe to do so (Slice
can have strides not equal to 1). So it is like a hack (&ret[0,
0, 0])[0 .. ret.elementsCount]).
Have you made comparison between my and yours parallel versions?
http
On Sunday, 10 January 2016 at 23:31:47 UTC, Ilya Yaroshenko wrote:
Just use normal arrays for buffer (median accepts array on
second argument for optimisation reasons).
ok, I think I see. I created a slice(numTasks, bigd) over an
allocated double[] dbuf, but slb[task] will be returning some
s
On Sunday, 10 January 2016 at 22:23:18 UTC, Ilya Yaroshenko wrote:
Could you please provide full code and error (git gists)? --
Ilya
ok, thanks.
I'm building with DMD32 D Compiler v2.069.2 on Win32. The
dub.json is included.
https://gist.github.com/jnorwood/affd05b69795c20989a3
I cut this median template from Jack Stouffer's article and was
attempting to use it in a parallel function. As shown, it
builds and executes correctly, but it failed to compile when I
attempted to use
medians[i] = median(vec,slb[task]);
in place of the
medians[i] = median(vec,dbuf[j .. k]);
On Sunday, 10 January 2016 at 03:23:14 UTC, Ilya wrote:
I will add significantly faster pairwise summation based on
SIMD instructions into the future std.las. --Ilya
Wow! A lot of overhead in the debug build. I checked the
computed values are the same. This is on my laptop corei5.
dub -b r
On Sunday, 10 January 2016 at 11:21:53 UTC, Marc Schütz wrote:
I'd say, if `shared` is required, but it compiles without, then
it's still a bug.
Yeah, probably so. Interestingly, without 'shared' and using a
simple assignment from a constant (means[i]= 1.0;), instead of
assignment from the
On Sunday, 10 January 2016 at 12:11:39 UTC, Russel Winder wrote:
foreach( dv; dvp){
if(dv != dv){ // test for NaN
return 1;
}
}
return(0);
}
I am not convinced these "tests for NaN" actually test for NaN.
I believe you have to use isNaN(dv).
I s
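As a side note, for IEEE floating point the self-comparison dv != dv is true exactly for NaN, so the quoted loop should agree with std.math.isNaN, which simply states the intent directly. A small sketch (my own example, not from the thread):

```d
// Comparing the self-comparison NaN test with std.math.isNaN.
import std.math : isNaN;

void main()
{
    double good = 1.5;
    double bad  = double.nan;

    assert(good == good);
    assert(!isNaN(good));

    assert(bad != bad);   // self-comparison is true only for NaN
    assert(isNaN(bad));   // the idiomatic, explicit spelling
}
```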
On Sunday, 10 January 2016 at 01:54:18 UTC, Jay Norwood wrote:
ok, thanks. That works. I'll go back to trying ndslice now.
The parallel time for this case is about a 2x speed-up on my
corei5 laptop, debug build in windows32, dmd.
D:\ec_mars_ddt\workspace\nd8>nd8.exe
parallel time msec:2495
On Sunday, 10 January 2016 at 01:16:43 UTC, Ilya Yaroshenko wrote:
On Saturday, 9 January 2016 at 23:20:00 UTC, Jay Norwood wrote:
I'm playing around with win32, v2.069.2 dmd and
"dip80-ndslice": "~>0.8.8". If I convert the 2D slice with
.array(), should that first dimension then be compatible
On Sunday, 10 January 2016 at 00:47:29 UTC, Ilya Yaroshenko wrote:
This is a bug in std.parallelism :-)
ok, thanks. I'm using your code and reduced it a bit. Looks
like it has some interaction with executing vec.sum. If I
substitute a simple assign of a double value, then all the values
a
On Sunday, 10 January 2016 at 00:41:35 UTC, Ilya Yaroshenko wrote:
It is a bug (Slice or Parallel ?). Please file this issue.
Slice should work with parallel, and array of slices should
work with parallel.
Ok, thanks, I'll submit it.
for example,
means[63] through means[251] are consistently all NaN when using
parallel in this test, but are all computed double values when
parallel is not used.
I'm playing around with win32, v2.069.2 dmd and "dip80-ndslice":
"~>0.8.8". If I convert the 2D slice with .array(), should that
first dimension then be compatible with parallel foreach?
I find that without using parallel, all the means get computed,
but with parallel, only about half of the
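The pattern under discussion can be sketched with plain nested arrays standing in for the ndslice .array() conversion; the sizes and the mean computation here are my own assumptions:

```d
// Parallel foreach over the first dimension; each iteration writes
// only its own means[i], so no synchronization is needed.
import std.algorithm.iteration : sum;
import std.parallelism : parallel;

void main()
{
    enum rows = 256, cols = 1000;
    auto data  = new double[][](rows, cols);
    auto means = new double[](rows);

    foreach (i, row; data)
        row[] = i;                        // fill row i with the value i

    foreach (i, row; parallel(data))
        means[i] = sum(row) / row.length;

    assert(means[0] == 0 && means[rows - 1] == rows - 1);
}
```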
On Saturday, 9 January 2016 at 16:00:51 UTC, cym13 wrote:
I may be very naive but how is the second form more complicated
than the first?
Pretending these were regular function implementations ...
1000.
1000.iota.
1000.iota.sliced(
iota(
sliced(
sliced(iota(
I wouldn't be surprised if
I'm reading Jack Stouffer's documentation:
http://jackstouffer.com/blog/nd_slice.html
considering the UFCS example below and how it would impact
auto-completion support.
auto slice = sliced(iota(1000), 5, 5, 40);
auto slice = 1000.iota.sliced(5, 5, 40);
Seems like auto-complete support for t
On Sunday, 27 December 2015 at 23:42:57 UTC, Ali Çehreli wrote:
That does not compile because i is size_t but apply_metrics()
takes an int. One solution is to call to!int:
foreach( i, ref a; parallel(samples[])){
apply_metrics(i.to!int,a);}
It builds for me still, and executes ok,
On Sunday, 27 December 2015 at 23:42:57 UTC, Ali Çehreli wrote:
On 12/27/2015 11:30 AM, Jay Norwood wrote:
> samples[].each!((int i, ref a)=>apply_metrics(i,a));
Are you using an older compiler? That tuple expansion does not
work any more at least with dmd v2.069.0 but you can use
enumer
I'm doing some re-writing and measuring. The basic task is to
take 10K samples (in struct S samples below) and calculate some
metrics (just per sample for now). It isn't evident to me how to
write the parallel foreach in the same format as each!, so I just
used the loop form that I understood
On Sunday, 27 December 2015 at 07:40:55 UTC, Ali Çehreli wrote:
It looks like you need map(), not each():
import std.algorithm;
import std.typecons;
import std.array;
void main() {
auto a = [ 1, 2 ];
auto arr = a.map!(e => tuple(2 * e, e * e)).array;
static assert(is(typeof(arr) =
This is getting kind of a long example, but I'm really only
interested in the last 4 or 5 lines. This works as desired,
creating the array of tuples, but I'm wondering if there is a way
to have the Tuple array defined as auto instead of having to
specify the types. I tried using .array() at th
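The auto-typed version asked about works by letting map build the tuples and letting .array infer the element type; a minimal sketch (my own example):

```d
import std.algorithm.iteration : map;
import std.array : array;
import std.typecons : tuple;

void main()
{
    auto a = [1, 2, 3];

    // No explicit Tuple!(int, int)[] declaration needed; the element
    // type is inferred from the lambda's return type.
    auto arr = a.map!(e => tuple(2 * e, e * e)).array;

    assert(arr[1] == tuple(4, 4));
    assert(arr[2] == tuple(6, 9));
}
```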
On Sunday, 27 December 2015 at 03:22:50 UTC, Jay Norwood wrote:
I would probably want to associate names with the tuple metric
results, and I've seen that somewhere in the docs in parameter
tuples. I suppose I'll try those in place of the current
tuple ...
This worked to associate names w
I'm playing around with something also trying to apply multiple
functions.
In my case, a sample is some related group of measurements taken
simultaneously, and I'm calculating a group of metrics from the
measured data of each sample.
This produces the correct results for the input data, and it
On Sunday, 27 December 2015 at 00:20:51 UTC, Ali Çehreli wrote:
On 12/26/2015 12:11 PM, karthikeyan wrote:
> I read http://ddili.org/ders/d.en/input.html and inserted a
space before %s
> but still no use. Am I missing something here with the latest
version?
The answer is nine chapters later. :)
On Saturday, 26 December 2015 at 20:19:08 UTC, Adam D. Ruppe
wrote:
On Saturday, 26 December 2015 at 20:11:27 UTC, karthikeyan
wrote:
I experience the same as the OP on Linux Mint 15 with dmd2.069
and 64 bit machine. I have to press enter twice to get the
output. I read http://ddili.org/ders/d
On Saturday, 26 December 2015 at 20:38:52 UTC, tcak wrote:
On Saturday, 26 December 2015 at 20:19:08 UTC, Adam D. Ruppe
wrote:
On Saturday, 26 December 2015 at 20:11:27 UTC, karthikeyan
wrote:
I experience the same as the OP on Linux Mint 15 with
dmd2.069 and 64 bit machine. I have to press en
On Saturday, 26 December 2015 at 19:52:15 UTC, Adam D. Ruppe
wrote:
On Saturday, 26 December 2015 at 19:40:59 UTC, Jay Norwood
wrote:
Simple VS console app in D.
If you are running inside visual studio, you need to be aware
that output will be block buffered, not line buffered, because
VS pi
Simple VS console app in D. Reading lines to a string variable
interactively. The objective is to have no extra blank lines in the
console output. It seems very broken for this use, requiring two
extra "enter" entries before both outputs appear. Version
DMD32 D Compiler v2.069.2
import std.stdio;
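When output ends up block-buffered (as noted in the reply about VS block buffering above), flushing explicitly after the prompt is one workaround; a sketch under that assumption, not the original repro:

```d
// Force the prompt out of a block-buffered stdout before blocking on input.
import std.stdio;

void main()
{
    write("Enter a line: ");
    stdout.flush();              // without this, a block-buffered pipe holds the prompt back
    auto line = readln();
    write("you typed: ", line);  // readln keeps the '\n', so write, not writeln
}
```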
On Tuesday, 22 December 2015 at 01:13:54 UTC, Jack Stouffer wrote:
The problem is that t3 is slicing a1 which is a dynamic array,
which is a range, while t4 is trying to slice a static array,
which is not a range.
ok, thanks. I lost track of the double meaning of static ... I
normally think
The autocompletion doesn't work here to offer epu_ctr in the
writeln statement either, so it doesn't seem to be a problem with
number of subscripts. writeln(a1[0]. does offer epu_ctr for
completion at the same place.
import std.stdio;
import std.experimental.ndslice;
import std.experimenta
I'm trying to determine if the debugger autocompletion would be
useful in combination with ndslice. I find that using visualD I
get offered no completion to select core_ctr or epu_ctr where
epu_ctr is used in the writeln below.
I take it this either means that there is some basic limitation
I'm trying to learn ndslice. It puzzles me why t3 compiles ok,
but t4 causes a compiler error in the example below. Should I be
able to slice a struct member that is an array?
import std.stdio;
import std.experimental.ndslice;
import std.experimental.ndslice.iteration: transposed;
struct samp
So, the extra confusion of the typeof(iota) Result return goes
away when slicing arrays.
auto a1 = new int[100];
auto t3 = a1.sliced(3,4,5);
pragma(msg,typeof(t3)); //This prints Slice!(3u, int*)
Slice!(3u, int*) t4 = a1.sliced(3,4,5); // and this works ok
On Monday, 21 December 2015 at 04:39:23 UTC, drug wrote:
You can use
alias Type = typeof(t0);
Type t1 = 1000.iota.sliced(3, 4, 5);
IIRC Result is the Voldemort type. You can think of it as a
detail of implementation of ndslice that isn't intended to be
used by a ndslice user directly.
ok, we
import std.stdio;
import std.experimental.ndslice;
void main() {
import std.algorithm.iteration: map;
import std.array: array;
import std.range;
import std.traits;
auto t0 = 1000.iota.sliced(3, 4, 5);
pragma(msg, typeof(t0));
Slice!(3u, Result) t1 = 1000.iota.slic
I pulled down the std.experimental.ndslice examples and am
attempting to build some of the examples and understand the types
being used.
I know I don't need all these imports, but it is hard to guess
which ones are needed, and the examples often don't provide them,
which I suspect is a common g
On Saturday, 21 November 2015 at 14:16:26 UTC, Laeeth Isharc
wrote:
Not sure it is a great idea to use a variant as the basic
option when very often you will know that every cell in a
particular column will be of the same type.
I'm reading today about an n-dim extension to pandas named xray
On Wednesday, 18 November 2015 at 22:46:01 UTC, jmh530 wrote:
My sense is that any data frame implementation should try to
build on the work that's being done with n-dimensional slices.
I've been watching that development, but I don't have a feel for
where it could be applied in this case, sin
One more discussion link on the NA subject. This one on the R
implementation of NA using a single encoding of NaN, as well as
their treatment of a selected integer value as a NA.
http://rsnippets.blogspot.com/2013/12/gnu-r-vs-julia-is-it-only-matter-of.html
On Wednesday, 18 November 2015 at 18:04:30 UTC, Jay Norwood wrote:
vector. I'll try to find the discussions and post the link.
Here are the two discussions I recall on the julia NA
implementation.
http://wizardmac.tumblr.com/post/104019606584/whats-wrong-with-statistics-in-julia-a-reply
htt
On Wednesday, 18 November 2015 at 17:15:38 UTC, Laeeth Isharc
wrote:
What do you think about the use of NaN for missing floats? In
theory I could imagine wanting to distinguish between an NaN in
the source file and a missing value, but in my world I never
felt the need for this. For integers
I looked through the dataframe code and a couple of comments...
I had thought perhaps an app could read in the header info and
type info from hdf5, and generate D struct definitions with
column headers as symbol names. That would enable faster
processing than with the associative arrays, as w
On Monday, 2 November 2015 at 15:33:34 UTC, Laeeth Isharc wrote:
Hi Jay.
That may have been me. I have implemented something very
basic, but you can read and write my proto dataframe to/from
CSV and HDF5. The code is up here:
https://github.com/Laeeth/d_dataframes
yes, thanks. I believ
I was reading about the Julia dataframe implementation yesterday,
trying to understand their decisions and how D might implement.
From my notes,
1. they are currently using a dictionary of column vectors.
2. for NA (not available) they are currently using an array of
bytes, effectively as a Boo
This is another attempt with the metric parallel processing. This
uses the results only to return an int value, which could be used
later as an error return value. The metric value locations are
now allocated as a part of the input measurement values tuple.
The Tuple vs struct definitions see
I re-submitted this as:
https://issues.dlang.org/show_bug.cgi?id=15135
So, this is a condensed version of the original problem. It looks
like the problem is that the return value for taskPool.amap can't
be a tuple of tuples or a tuple of struct. Either way, it fails
with the Wrong buffer type error message if I uncomment the
taskPool line
import std.algorithm,
On Thursday, 1 October 2015 at 18:08:31 UTC, Ali Çehreli wrote:
However, if you prove to yourself that the result tuple and
your struct have the same memory layout, you can cast the tuple
slice to struct slice after calling amap:
After re-reading your explanation, I see that the problem is onl
On Thursday, 1 October 2015 at 18:08:31 UTC, Ali Çehreli wrote:
Makes sense. Please open a bug at least for investigation why
tuples with named members don't work with amap.
ok, thanks. I opened the issue.
https://issues.dlang.org/show_bug.cgi?id=15134
On Thursday, 1 October 2015 at 07:03:40 UTC, Ali Çehreli wrote:
Looks like a bug. Workaround: Get rid of member names
Thanks. My particular use case, working with metric expressions,
is easier to understand if I use the names. I converted the use
of Tuple to struct to see if I could get an
This compiles and appears to execute correctly, but if I
uncomment the taskPool line I get a compile error message about
wrong buffer type. Am I breaking some rule for
std.parallelism.amap?
import std.algorithm, std.parallelism, std.range;
import std.stdio;
import std.datetime;
import std.typ
On Wednesday, 30 September 2015 at 22:24:25 UTC, Jay Norwood
wrote:
// various metric definitions
// the Tuples could also define names for each member and use
the names here in the metrics.
long met1( TI m){ return m[0] + m[1] + m[2]; }
long met2( TI m){ return m[1] + m[2] + m[3]; }
long met3(
This is something I'm playing with for work. We do this a lot,
capture counter events for some number of on-chip performance
counters, compute some metrics, display the outputs. This seems
ideal for the application.
import std.algorithm, std.parallelism, std.range;
import std.stdio;
import std
On Saturday, 26 September 2015 at 15:56:54 UTC, Jay Norwood wrote:
This results in a compile error:
auto sum3 = taskPool.reduce!"a + b"(iota(1UL,101UL));
I believe there was discussion of this problem recently ...
https://issues.dlang.org/show_bug.cgi?id=14832
https://issues.dlang.org/sho
This is a work-around to get a ulong result without having the
ulong as the range variable.
ulong getTerm(int i)
{
return i;
}
auto sum4 = taskPool.reduce!"a + b"(std.algorithm.map!getTerm(iota(11)));
btw, on my corei5, in debug build,
reduce (using double): 11msec
non_parallel: 37msec
parallel with atomicOp: 123msec
so, that is the reason for using parallel reduce, assuming the
ulong range thing will get fixed.
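Putting the workaround together as a runnable sketch (the range size here is my own choice):

```d
// Map each int to ulong so the range's element type carries the wider
// type; taskPool.reduce then accumulates in ulong without the seed issue.
import std.algorithm.iteration : map;
import std.parallelism : taskPool;
import std.range : iota;

ulong getTerm(int i)
{
    return i;
}

void main()
{
    auto sum4 = taskPool.reduce!"a + b"(iota(101).map!getTerm);
    assert(sum4 == 5050);   // 0 + 1 + ... + 100
}
```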
std.parallelism.reduce documentation provides an example of a
parallel sum.
This works:
auto sum3 = taskPool.reduce!"a + b"(iota(1.0,101.0));
This results in a compile error:
auto sum3 = taskPool.reduce!"a + b"(iota(1UL,101UL));
I believe there was discussion of this problem recently .
On Monday, 7 September 2015 at 15:48:56 UTC, BBasile wrote:
On Sunday, 6 September 2015 at 23:05:29 UTC, Jonathan M Davis
For example you can retrieve the flags:
archive/readonly/hidden/system/indexable(?) and even if it
looks writable or readable, the file won't be open at all
because the ACL
On Sunday, 9 August 2015 at 19:10:01 UTC, Binarydepth wrote:
On Sunday, 9 August 2015 at 16:42:16 UTC, Jay Norwood wrote:
Oooh... I like how this works
import std.stdio : writeln, readf;
void main() {
immutable a=5;
int[a] Arr;
int nim;
foreach(num, ref nem; A
On Sunday, 9 August 2015 at 15:37:23 UTC, Binarydepth wrote:
So I should use the REF like this ?
import std.stdio : writeln;
void main() {
immutable a=5;
int[a] Arr;
foreach(num; 0..a) {
Arr[num] = num;
}
foreach(num, ref ele; Arr)
On Sunday, 9 August 2015 at 10:40:06 UTC, Nordlöw wrote:
On Sunday, 9 August 2015 at 00:50:16 UTC, Ali Çehreli wrote:
Ali
Now benchmarks write and read separately:
I benchmarked my first results:
D:\visd\raw\raw\Release>raw
time write msecs:457
time read msecs:75
This is for 160MB of data
On Sunday, 9 August 2015 at 11:06:34 UTC, Nordlöw wrote:
On Sunday, 9 August 2015 at 10:40:06 UTC, Nordlöw wrote:
Couldn't the chunk logic be deduced as well?
Yes :)
See update at:
https://github.com/nordlow/justd/blob/a633b52876388921ec49c189f374746f7b4d8c93/tests/t_rawio.d
What would a su
On Sunday, 9 August 2015 at 00:50:16 UTC, Ali Çehreli wrote:
// NOTE: No need to tell rawRead the type as double
iota(10, 20_000_000 + 10, n)
.each!(a => f.rawRead(dbv));
}
Ali
Your f.rawRead(dbv) form compiles, but f.rawRead!(dbv) results in
an error msg in c
On Sunday, 9 August 2015 at 00:50:16 UTC, Ali Çehreli wrote:
{
auto f = File(fn,"wb");
iota(10.5, 20_000_010.5, 1.0)
.chunks(100)
.each!(a => f.rawWrite(a.array));
}
Ali
Thanks. There are many examples of numeric to string data output
in t
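A self-contained round trip of the rawWrite/rawRead pattern above; the file name and the tiny buffer are my own choices:

```d
// rawWrite stores the raw bytes of the buffer; rawRead infers the
// element type from the buffer it is given, so no type argument is needed.
import std.file : remove;
import std.stdio : File;

void main()
{
    immutable fn = "raw_roundtrip.bin";
    double[] outbuf = [10.5, 11.5, 12.5, 13.5];

    {
        auto f = File(fn, "wb");
        f.rawWrite(outbuf);
    }
    {
        auto f = File(fn, "rb");
        auto inbuf = new double[](outbuf.length);
        f.rawRead(inbuf);
        assert(inbuf == outbuf);
    }
    remove(fn);
}
```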
On Saturday, 8 August 2015 at 18:28:25 UTC, Binarydepth wrote:
This is the new code :
foreach(num; 0..liEle) {//Data input loop
write("Input the element : ", num+1, " ");
readf(" %d", &liaOrig[num]);
}
Even better :
foreach(num; 0..liaOrig.len
I'm playing around with the range based operations and with raw
file io. I couldn't figure out a way to get rid of the outer
foreach loops.
Nice execution time of 537 msec for this, which creates and reads
back a file of about 160MB (20_000_000 doubles).
import std.algorithm;
import std.st
Unfortunately, this is not a very good example for
std.parallelism, since the measured times are better using the
std.algorithm.map calls. I know from past experience that
std.parallelism routines can work well when the work is spread
out correctly, so this example could be improved.
This is
and, finally, this works using the taskPool.map, as in the
std.parallelism example. So, the trick appears to be that the
call to chomp is needed.
auto lineRange = File(fn).byLineCopy();
auto chomped = std.algorithm.map!"a.chomp"(lineRange);
auto nums = taskPool.map!(to!
I tried to create a working example from the std.parallelism
taskPool.map code, and it throws with empty strings with length 1
being passed to to!double. Anyone have a working example? I'm
building on Windows with 2.067.1 dmd.
import std.parallelism;
import std.algorithm;
import std.stdio;
i
On Friday, 7 August 2015 at 18:51:45 UTC, Steven Schveighoffer
wrote:
On 8/7/15 2:37 PM, Steven Schveighoffer wrote:
I'll file a bug on this.
https://issues.dlang.org/show_bug.cgi?id=14886
-Steve
Thanks. The workaround works ok.
This also works.
auto sm = File(fn).byLineCopy()
.map!"a.chomp"()
.map!(to!double)
.map!"a.log10"()
.sum();
writeln("sum=",sm);
This appears to work ... at least, no exception:
auto sm = File(fn).byLine(KeepTerminator.no)
.map!"a.chomp"()
.map!"a.idup"()
.map!(to!double)
.map!"a.log10"()
.sum();
writeln("sum=",sm);
This appears to hang up dmd compiler 2.067.1. Changing
parallel(s) to s works ok. Is this a known problem?
import std.stdio;
import std.string;
import std.format;
import std.range;
import std.parallelism;
int main(string[] argv)
{
string s[10];
foreach (i, ref si ; parallel
On Sunday, 24 May 2015 at 18:14:19 UTC, anonymous wrote:
"Static array" has a special meaning. It does not mean "static
variable with an array type". Static arrays are those of the
form Type[size]. That is, the size is known statically.
Examples:
1) static int[5] x; -- x is a static variable
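The distinction in the reply above can be checked directly; a minimal sketch (my own example):

```d
// "Static array" = Type[size], fixed length, elements stored in place;
// independent of the `static` storage class.
void main()
{
    int[5] fixed;            // static array
    int[]  dynamic;          // dynamic array

    static assert(fixed.sizeof == 5 * int.sizeof);  // elements live in the variable itself
    static assert(dynamic.sizeof == (void*).sizeof + size_t.sizeof); // ptr + length

    dynamic ~= 1;            // appending grows a dynamic array
    dynamic.length = 3;      // and its length is assignable
    // fixed ~= 1;           // error: static array length is fixed
    // fixed.length = 3;     // error: .length of a static array is read-only
    assert(dynamic.length == 3);
}
```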
I'm a bit confused by the documentation of the ctfe limitations
wrt static arrays due to these seemingly conflicting statements,
and the examples didn't seem to clear anything up. I was
wondering if anyone has examples of clever things that might be
done with static arrays and pointers using c
This library allows specifying the internal base of the
arbitrary-precision numbers (the default is decimal), as well as
the precision of floating-point values. Each floating-point
number's precision can be read with .precision().
It also supports specification of rounding modes
On Friday, 26 September 2014 at 03:32:46 UTC, Jay Norwood wrote:
On Wednesday, 24 September 2014 at 10:28:05 UTC, Suliman wrote:
string path = thisExePath()
Seems like "dirName" in std.path is a good candidate ;)
http://dlang.org/phobos/std_path.html#.dirName
You'll find many other path manip
On Wednesday, 24 September 2014 at 10:28:05 UTC, Suliman wrote:
string path = thisExePath()
Seems like "dirName" in std.path is a good candidate ;)
http://dlang.org/phobos/std_path.html#.dirName
You'll find many other path manipulation functions there.
Thanks! But if I want to strip it, how
I have a use case that requires repeating performance
measurements of blocks of code that do not coincide with function
start and stop. For example, a function will be calling several
sub-operations, and I need to measure the execution from the
call statement until the execution of the state
On Friday, 25 July 2014 at 21:10:56 UTC, monarch_dodra wrote:
Functionally nothing more than an alias? EG:
{
alias baz = foo.bar;
...
}
Yes, it is all just alias. So
with ( (d,e,a,b,c) as (ar.rm.a, ar.rm.b, ar.r.a, ar.r.b, ar.r.c)){
d = a + c;
e = (c==0)?0:(a+b)/c;
}
could
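D's built-in with statement already covers part of this: inside the block, bare names resolve against the given aggregate, though it doesn't rename members the way the hypothetical (d,e,a,b,c) form would. A sketch with a made-up struct standing in for ar.r:

```d
// with(r) makes r's members visible by their bare names; no copies,
// reads go straight to r. (Struct and values are my own invention.)
struct R { double a, b, c; }

void main()
{
    auto r = R(4, 2, 2);
    double d, e;

    with (r)
    {
        d = a + c;                        // reads r.a, r.c
        e = (c == 0) ? 0 : (a + b) / c;   // the metric from the post above
    }
    assert(d == 6 && e == 3);
}
```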
On Friday, 25 July 2014 at 01:54:53 UTC, Jay Norwood wrote:
I don't recall the exact use case for the database expressions,
but I believe they were substituting a simple symbol for the
fully qualified object.
The sql with clause is quite a bit different than I remembered.
For one thing, I ha
On Thursday, 24 July 2014 at 20:16:53 UTC, monarch_dodra wrote:
Or did I miss something?
Yes, sorry, I should have pasted a full example previously. The
code at the end is with the Raw_met members renamed (they were
originally a and b but clashed).
So, if Raw_met members were still a and b
I was playing around with use of the dual WITH statement. I
like the idea, since it makes the code within the with cleaner.
Also, I got the impression from one of the conference
presentations ... maybe the one on the ARM debug ... that there
are some additional optimizations available that
On Tuesday, 22 April 2014 at 15:25:04 UTC, monarch_dodra wrote:
Yeah, that's because join actually works on "RoR, R", rather
than "R, E". This means if you feed it a "string[], string",
then it will actually iterate over individual *characters*. Not
only that, but since you are using char[], it
Wow, joiner is much slower than join. Such a small choice can
make this big of a difference. Not at all what I expected, since
I thought the lazy calls were considered to be more efficient.
This is with ldc2 -O2.
jay@jay-ubuntu:~/ec_ddt/workspace/diamond/source$ ./main
1>/dev/null
brad: time:
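For reference, the two calls being compared: join eagerly allocates one result array, while joiner is a lazy range over the same characters (my own minimal example, not the benchmark code):

```d
import std.algorithm.iteration : joiner;
import std.array : join;
import std.conv : to;

void main()
{
    auto parts = ["ab", "cd", "ef"];

    string eager = parts.join("-");     // one allocation for the whole result
    auto lazyRng = parts.joiner("-");   // no allocation; yields characters on demand

    assert(eager == "ab-cd-ef");
    assert(lazyRng.to!string == "ab-cd-ef");
}
```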
On Monday, 21 April 2014 at 08:26:49 UTC, monarch_dodra wrote:
The two "key" points here, first, is to avoid using appender.
Second, instead of having two buffer: "" and "**\n",
and to do two "slice copies", to only have 1 buffer "
*", and to do 1 slice copy, and a single '\n' w
On Tuesday, 25 March 2014 at 08:42:30 UTC, monarch_dodra wrote:
Interesting. I'd have thought the "extra copy" would be an
overall slowdown, but I guess that's not the case.
I installed ubuntu 14.04 64 bit, and measured some of these
examples using gdc, ldc and dmd on a corei3 box. The ex