dmd as a library for scripting/JIT?

2018-09-14 Thread dennis luehring via Digitalmars-d
i've got user-defined flow charts in my C++ application that call 
C/C++ code - would it be possible to embed dmd as a library, generate D 
code out of my flow charts and execute the "compiled" code directly, 
without doing file I/O or dmd.exe runs to create DLLs that i hot-reload?
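
For contrast, a minimal D sketch of the workaround the question wants to avoid - writing the generated source to disk, shelling out to dmd and hot-loading the DLL; the file names, the exported symbol runNode, the helper itself and the dmd flags are illustrative assumptions, not an existing dmd-as-a-library API:

import std.file : write;
import std.process : execute;
import core.sys.windows.winbase : LoadLibraryA, GetProcAddress;

alias NodeFunc = extern(C) int function(int);

// hypothetical helper: compile generated D code to a DLL and load one symbol
NodeFunc buildAndLoad(string generatedDCode)
{
    write("node.d", generatedDCode);                                // file I/O ...
    auto r = execute(["dmd", "-shared", "-ofnode.dll", "node.d"]);  // ... plus a dmd.exe run
    assert(r.status == 0, r.output);
    auto lib = LoadLibraryA("node.dll");                            // hot reload step
    return cast(NodeFunc) GetProcAddress(lib, "runNode");           // "runNode" is made up
}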


Re: The nail in the coffin of C++ or why don't GO there...

2017-03-30 Thread dennis luehring via Digitalmars-d

On 30.03.2017 at 08:58, Ervin Bosenbacher wrote:

That is the same, that came as a shock to me.


most compilers (for many languages) have been able to optimize your super-trivial 
example down to nothing - for at least the last 10 years or more


so what's the point? you're talking about "performance is critical for me"
but missing even basic knowledge about today's compiler capabilities?

for a benchmark you need:
-loops running millions of times, preventing IO, and not falling into 
the completely-optimized-to-nothing trap, etc.
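
For illustration, a minimal sketch of such a benchmark in D (work() and the run count are placeholder assumptions): the loop runs millions of times, every result is folded into a dummy value that main returns, and the time is averaged over all iterations so the optimizer cannot drop the work:

import std.datetime.stopwatch : StopWatch, AutoStart;
import std.stdio : writeln;

int work(int x) { return x * 3 + 1; }        // stand-in for the code under test

int main(string[] args)
{
    enum runs = 10_000_000;
    int dummy = cast(int) args.length;       // not known at compile time
    auto sw = StopWatch(AutoStart.yes);
    foreach (i; 0 .. runs)
        dummy += work(dummy + i);            // fold every result into the dummy
    sw.stop();
    writeln("avg ns per call: ", sw.peek.total!"nsecs" / cast(double) runs);
    return dummy;                            // keeps the loop observable
}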




Re: Rant after trying Rust a bit

2015-07-24 Thread dennis luehring via Digitalmars-d

On 23.07.2015 at 22:47, Ziad Hatahet via Digitalmars-d wrote:

Having expressions be built-in extends beyond the simple if/else case


and allows const correctness without functions
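
For comparison, a minimal D sketch (the function and its parameters are invented): to initialize an immutable from multi-branch logic you still need a helper - here an immediately-called lambda - which is exactly the step a built-in if/else expression would remove:

void configure(bool verbose, bool quiet)
{
    immutable level = () {     // immediately-invoked lambda as a stand-in "if expression"
        if (verbose) return 3;
        if (quiet)   return 0;
        return 1;
    }();
    // level stays immutable from here on
}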


Re: PHP verses C#.NET verses D.

2015-06-22 Thread dennis luehring via Digitalmars-d
you should stay with PHP + C# or migrate to pure C# if you need to ask 
such a question here (without giving any info about what the co-workers 
know, what the real size of the project is, etc.)


On 16.06.2015 at 01:53, Nick B wrote:

Hi.

There is a startup in New Zealand that I have some dealings with
at present. They have built most of their original code in PHP
(as this was quick and easy), but they also use some C#.net for
interfacing to accounting apps on clients' machines. The core PHP
application runs in the cloud at present and talks to accounting
applications in the cloud. They use the PHP symfony framework.

High speed is not important, but accuracy, error handling, and
scalability are, as they are processing accounting transactions.
They have a new CEO on board, and he would like to review the
company's technical direction.

Their client base is small but growing quickly.  I know that PHP
is not a great language, and my knowledge of D is reasonable,
while I have poor knowledge of C#.net.

Looking to the future, as volumes grow, they could:
1.  Stay with PHP + C#.net, and bring on servers as volumes grow.
2.  Migrate to C#.net in time
3.  Migrate to D in time.

Any comments or suggestions on the above?





Re: module win32.winioctl :IOCTL_STORAGE_EJECT_MEDIA' Value is Error

2014-12-30 Thread dennis luehring via Digitalmars-d

On 30.12.2014 at 04:03, FrankLike wrote:

On Monday, 29 December 2014 at 12:19:34 UTC, dennis luehring
wrote:

On 29.12.2014 at 13:00, FrankLike wrote:

Now I use the win32.winioctl.d file and find that
IOCTL_STORAGE_EJECT_MEDIA's value is 0x0202; if you use it,
you will
get the error value 50 (by GetLastError()).

It should be 0x2d4808. If you use that, it works ok.

Why is there this kind of mistake?

Frank



maybe just a bug

but
https://github.com/Diggsey/druntime-win32/blob/master/winioctl.d
seems to be correctly defined

IOCTL_STORAGE_EJECT_MEDIA = CTL_CODE_T!(IOCTL_STORAGE_BASE,
0x0202, METHOD_BUFFERED, FILE_READ_ACCESS),


Sorry, I've found out what's wrong with it.
One should do it like in C++:  import win32.winioctl;
Not like in C#:  public uint IOCTL_STORAGE_EJECT_MEDIA =
0x2d4808;




so it was your fault for not using IOCTL_STORAGE_EJECT_MEDIA as defined 
in the import - where did you get the 0x0202 value from?


your questions, problems AND solutions are always very hard to understand


Re: module win32.winioctl :IOCTL_STORAGE_EJECT_MEDIA' Value is Error

2014-12-29 Thread dennis luehring via Digitalmars-d

On 29.12.2014 at 13:00, FrankLike wrote:

Now I use the win32.winioctl.d file and find that
IOCTL_STORAGE_EJECT_MEDIA's value is 0x0202; if you use it, you will
get the error value 50 (by GetLastError()).

It should be 0x2d4808. If you use that, it works ok.

Why is there this kind of mistake?

Frank



maybe just a bug

but https://github.com/Diggsey/druntime-win32/blob/master/winioctl.d 
seems to be correctly defined


IOCTL_STORAGE_EJECT_MEDIA = CTL_CODE_T!(IOCTL_STORAGE_BASE, 0x0202, 
METHOD_BUFFERED, FILE_READ_ACCESS),
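
As a worked check of why the properly defined constant expands to 0x2d4808, here is the Windows CTL_CODE formula written out as a small D sketch (the three constants carry their usual winioctl.h values):

enum uint IOCTL_STORAGE_BASE = 0x2d;   // FILE_DEVICE_MASS_STORAGE
enum uint METHOD_BUFFERED    = 0;
enum uint FILE_READ_ACCESS   = 1;

// CTL_CODE packs device type, access, function number and method into one value
uint ctlCode(uint devType, uint func, uint method, uint access)
{
    return (devType << 16) | (access << 14) | (func << 2) | method;
}

static assert(ctlCode(IOCTL_STORAGE_BASE, 0x0202,
                      METHOD_BUFFERED, FILE_READ_ACCESS) == 0x2d4808);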





Re: D2 port of Sociomantic CDGC available for early experiments

2014-10-10 Thread dennis luehring via Digitalmars-d-announce

On 11.10.2014 06:25, Andrei Alexandrescu wrote:

On 10/10/14, 7:54 PM, Walter Bright wrote:

On 10/10/2014 5:45 PM, Leandro Lucarella wrote:

I still don't understand why wouldn't we use environment variables for
what they've been created for, it's foolish :-)


Because using environment variables to tune program X will also affect
programs A-Z.


Nope. Try this at your Unix command prompt:

echo $CRAP
CRAP=hello echo $CRAP
CRAP=world echo $CRAP


in windows there are user environment variables (which walter is talking about) 
and shell environment variables (like in your example)

setting user environment variables will affect every program -
that's why java is not using them



Re: DConf 2014 Keynote: High Performance Code Using D by Walter Bright

2014-07-18 Thread dennis luehring via Digitalmars-d-announce

On 18.07.2014 07:54, Walter Bright wrote:

On 7/17/2014 9:40 PM, dennis luehring wrote:

i understand your focus on dmd - but talking about fast code and optimizing
WITHOUT even trying to compare with other compiler results is just a little bit
strange for someone who stated speed = money


The point was to get people to look at the asm output of the compiler, as
results can be surprising (as you've also discovered).


...of the compilerS - please :)

can you post your (full, closed) D array access example from the talk
so i don't need to play around with the optimizer to get your asm results



Re: DConf 2014 Keynote: High Performance Code Using D by Walter Bright

2014-07-17 Thread dennis luehring via Digitalmars-d-announce

On 18.07.2014 04:52, Walter Bright wrote:

On 7/16/2014 7:21 AM, dennis luehring wrote:

can you give a short (working) example code to show the different resulting
assembler for your for-rewrite example - and which compilers are you using for
testing - only dmd or gdc?


I used dmd.



i sometimes get the feeling that you underestimate the sheer power of 
today's clang or gcc optimizers - and thereby part of what gdc/ldc can do with your 
code


reminds me of Brian Schott's example of the sse2-optimized version of his 
lexer - the dmd-generated code was much faster than the normal version, but 
the gdc/ldc results for the unoptimized version are still 50% faster


i understand your focus on dmd - but talking about fast code and 
optimizing WITHOUT even trying to compare with other compiler results is 
just a little bit strange for someone who stated speed = money






Re: DConf 2014 Keynote: High Performance Code Using D by Walter Bright

2014-07-16 Thread dennis luehring via Digitalmars-d-announce

On 15.07.2014 18:20, Andrei Alexandrescu wrote:

http://www.reddit.com/r/programming/comments/2aruaf/dconf_2014_keynote_high_performance_code_using_d/

https://www.facebook.com/dlang.org/posts/885322668148082

https://twitter.com/D_Programming/status/489081312297635840


Andrei



@Walter

can you give a short (working) example code to show the different 
resulting assembler for your for-rewrite example - and which compilers 
are you using for testing - only dmd or gdc?


this example:

T[10] array
for(int i = 0; i < 10; ++i) foo(array[i])

i've tested some combinations on
http://gcc.godbolt.org/ with clang 3.4.1 and gcc 4.9.x

and i can't see any difference
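
A compilable version of that snippet (T picked as int, plus a foreach variant for comparison - the exact rewrite from the talk is not quoted in this thread) that can be pasted into gcc.godbolt.org or fed to dmd/gdc/ldc:

void foo(int x);   // declaration only, so the call cannot be optimized away

void useFor(ref int[10] array)
{
    for (int i = 0; i < 10; ++i)
        foo(array[i]);
}

void useForeach(ref int[10] array)
{
    foreach (e; array)
        foo(e);
}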


Re: GDC/ARM: Help needed: Porting std.math.internal.gammafunction

2014-07-03 Thread dennis luehring via Digitalmars-d

On 03.07.2014 17:33, Johannes Pfau wrote:

Hi,

std.math.internal.gammafunction is the last module with failing
unittest on ARM, simply because it assumes that reals are always in
x86 extended precision format which is obviously not true on ARM.


OT question:

can you also check big-endian behavior with your ARM system
(and maybe unaligned accesses) if possible -
i think ARM can be configured to be big endian (and 
unaligned-unaware) - or not?


Re: std.math performance (SSE vs. real)

2014-06-30 Thread dennis luehring via Digitalmars-d

On 30.06.2014 18:30, dennis luehring wrote:

On 30.06.2014 08:21, Walter Bright wrote:

The only way I know to access x87 is with inline asm.


I suggest using long double on Linux and look at the compiler output. You
don't have to believe me - use gcc or clang.


gcc.godbolt.org clang 3.4.1 -O3


this one shows it better:

int main(int argc, char** argv)
{
  return argc * 12345.6789L;
}

.LCPI0_0:
# x86_fp80 12345.67889996
.quad   -4546745350290602879

.short  16396
.zero   6
main:                               # @main
    movl    %edi, -8(%rsp)
    fldt    .LCPI0_0(%rip)
    fimull  -8(%rsp)
    fnstcw  -10(%rsp)
    movw    -10(%rsp), %ax
    movw    $3199, -10(%rsp)        # imm = 0xC7F
    fldcw   -10(%rsp)
    movw    %ax, -10(%rsp)
    fistpl  -4(%rsp)
    fldcw   -10(%rsp)
    movl    -4(%rsp), %eax
    ret



Re: std.math performance (SSE vs. real)

2014-06-30 Thread dennis luehring via Digitalmars-d

On 30.06.2014 08:21, Walter Bright wrote:

The only way I know to access x87 is with inline asm.


I suggest using long double on Linux and look at the compiler output. You
don't have to believe me - use gcc or clang.


gcc.godbolt.org clang 3.4.1 -O3

int main(int argc, char** argv)
{
  return ((long double)argc/12345.6789);
}

asm:

.LCPI0_0:
.quad   4668012723080132769 # double 12345.67890001
main:                               # @main
    movl    %edi, -8(%rsp)
    fildl   -8(%rsp)
    fdivl   .LCPI0_0(%rip)
    fnstcw  -10(%rsp)
    movw    -10(%rsp), %ax
    movw    $3199, -10(%rsp)        # imm = 0xC7F
    fldcw   -10(%rsp)
    movw    %ax, -10(%rsp)
    fistpl  -4(%rsp)
    fldcw   -10(%rsp)
    movl    -4(%rsp), %eax
    ret





Re: std.math performance (SSE vs. real)

2014-06-30 Thread dennis luehring via Digitalmars-d

On 01.07.2014 00:18, Andrei Alexandrescu wrote:

On 6/30/14, 2:20 AM, Don wrote:

For me, a stronger argument is that you can get *higher* precision using
doubles, in many cases. The reason is that FMA gives you an intermediate
value with 128 bits of precision; it's available in SIMD but not on x87.

So, if we want to use the highest precision supported by the hardware,
that does *not* mean we should always use 80 bits.

I've experienced this in CTFE, where the calculations are currently done
in 80 bits, I've seen cases where the 64-bit runtime results were more
accurate, because of those 128 bit FMA temporaries. 80 bits are not
enough!!


Interesting. Maybe we should follow a simple principle - define
overloads and intrinsic operations such that real is only used if (a)
requested explicitly (b) it brings about an actual advantage.


gcc seems to use GMP for (all) its compile-time calculations - is this 
for cross-compile unification of the calculation results or just for better 
results in general - or both?




Re: Module level variable shadowing

2014-06-29 Thread dennis luehring via Digitalmars-d

On 29.06.2014 08:06, Kapps wrote:

struct Foo {
  int a;
  this(this.a) { }
}


a parameter declaration using the name of the enclosing scope??? totally 
different from everything else???


Re: Module level variable shadowing

2014-06-28 Thread dennis luehring via Digitalmars-d

On 28.06.2014 07:11, H. S. Teoh via Digitalmars-d wrote:

On Sat, Jun 28, 2014 at 06:37:08AM +0200, dennis luehring via Digitalmars-d 
wrote:

On 27.06.2014 20:09, Kapps wrote:

[...]

struct Foo {
   int a;
   this(int a) {
   this.a = a;
   }
}


forgot that case - but i don't like how it's currently handled; maybe
there's no better way - it's just not perfect :)


Actually, this particular use case is very bad. It's just inviting
typos, for example, if you mistyped int a as int s, then you get:

struct Foo {
int a;
this(int s) {
this.a = a; // oops, now it means this.a = this.a
}
}

I used to like this shadowing trick, until one day I got bit by this
typo. From then on, I acquired a distaste for this kind of shadowing.
Not to mention, typos are only the beginning of troubles. If you copy a
few lines from the ctor into another method (e.g., to partially reset
the object state), then you end up with a similar unexpected rebinding
to this.a, etc..

Similar problems exist in nested functions:

auto myFunc(A...)(A args) {
int x;
int helperFunc(B...)(B args) {
int x = 1;
return x + args.length;
}
}

Accidentally mistype B args or int x=1, and again you get a silent
bug. This kind of shadowing is just a minefield of silent bugs waiting
to happen.

No thanks!


T



thx for the examples - never thought of these problems

i personally would just forbid any shadowing and self-assignment
and then have unique names (i use m_ for members and p_ for parameters 
etc.), or give a compile error asking for this.x or .x (maybe problematic 
with inner structs/functions)


but that could be a problem for porting C/C++ code - but is that such a 
big problem?





Re: Module level variable shadowing

2014-06-28 Thread dennis luehring via Digitalmars-d

On 28.06.2014 11:30, Jacob Carlborg wrote:

On 2014-06-28 08:19, dennis luehring wrote:


thx for the examples - never thought of these problems

i personally would just forbid any shadowing and self-assignment
and then have unique names (i use m_ for members and p_ for parameters
etc.), or give a compile error asking for this.x or .x (maybe problematic
with inner structs/functions)


I think, in general, if you need to prefix/suffix any symbol's name,
there's something wrong with the language.


i agree 100% - i just try to overcome the shadowing cleanly with this AND 
also have scope information in the name (i just want to know at every 
place in the code whether something is a parameter)


but i would always prefer a better-working method



Re: Module level variable shadowing

2014-06-28 Thread dennis luehring via Digitalmars-d

On 28.06.2014 14:20, Ary Borenszweig wrote:

On 6/28/14, 6:30 AM, Jacob Carlborg wrote:

On 2014-06-28 08:19, dennis luehring wrote:


thx for the examples - never thought of these problems

i personally would just forbid any shadowing and self-assignment
and then have unique names (i use m_ for members and p_ for parameters
etc.), or give a compile error asking for this.x or .x (maybe problematic
with inner structs/functions)


I think, in general, if you need to prefix/suffix any symbol's name,
there's something wrong with the language.


In Ruby the usage of a variable is always prefixed: `@foo` for instance
vars, `$foo` for global variable, `FOO` for constant. You can't make a
mistake. It's... perfect :-)



i like the ruby-way


Re: Module level variable shadowing

2014-06-27 Thread dennis luehring via Digitalmars-d

On 26.06.2014 02:41, Walter Bright wrote:

On 6/25/2014 4:03 PM, bearophile wrote:

The simplest way to avoid that kind of bugs is give a shadowing global x error
(similar to the shadowing errors D gives with foreach and with statements). But
this breaks most existing D code.


D has scoped lookup. Taking your proposal as principle, where do we stop at
issuing errors when there is the same identifier in multiple in-scope scopes? I
think we hit the sweet spot at restricting shadowing detection to local scopes.

I suggest that your issues with global variables can be mitigated by adopting a
distinct naming convention for your globals. Frankly, I think a global variable
named x is execrable style - such short names should be reserved for locals.



what about adding switches like -no-global-shadowing (or others) to dmd and telling 
people to use them - people will definitely change their global names then 
(like your advice of renaming or using .x etc.), and after a while it 
could become a warning, then an error - like the time between 
deprecation and removal of a feature - D needs more strategies than C++ 
to add better quality over time




Re: Module level variable shadowing

2014-06-27 Thread dennis luehring via Digitalmars-d

On 27.06.2014 10:20, dennis luehring wrote:

I think we hit the sweet spot at restricting shadowing detection to local scopes.


sweet spot does not mean - use a better name or .x to avoid problems that are 
hard to detect manually - it's like disabled shadow detection in local scopes


what i don't understand - why on earth would someone want to shadow 
a (or rather any) variable at all?


Re: std.math performance (SSE vs. real)

2014-06-27 Thread dennis luehring via Digitalmars-d

On 27.06.2014 14:20, Russel Winder via Digitalmars-d wrote:

On Fri, 2014-06-27 at 11:10 +, John Colvin via Digitalmars-d wrote:
[
]

I understand why the current situation exists. In 2000 x87 was
the standard and the 80bit precision came for free.


Real programmers have been using 128-bit floating point for decades. All
this namby-pamby 80-bit stuff is just an aberration and should never
have happened.


what consumer hardware and compiler supports 128-bit floating points?



Re: Module level variable shadowing

2014-06-27 Thread dennis luehring via Digitalmars-d

On 27.06.2014 22:38, Tofu Ninja wrote:

On Friday, 27 June 2014 at 08:24:16 UTC, dennis luehring wrote:

what i don't understand - why on earth would someone want to
shadow a (or rather any) variable at all?


It can be useful if you are using mixins where you don't know
what is going to be in the destination scope.



that it can be useful in an even harder-to-understand situation makes it no 
better


Re: Module level variable shadowing

2014-06-27 Thread dennis luehring via Digitalmars-d

On 27.06.2014 20:09, Kapps wrote:

On Friday, 27 June 2014 at 08:24:16 UTC, dennis luehring wrote:

Am 27.06.2014 10:20, schrieb dennis luehring:

I think we hit the sweet spot at restricting shadowing detection
to local scopes.


sweet spot does not mean - use a better name or .x to avoid problems
that are hard to detect manually - it's like disabled shadow detection in
local scopes

what i don't understand - why on earth would someone want to
shadow a (or rather any) variable at all?


struct Foo {
   int a;
   this(int a) {
   this.a = a;
   }
}



forgot that case - but i don't like how it's currently handled; maybe there's no 
better way - it's just not perfect :)


Re: export keyword

2014-06-24 Thread dennis luehring via Digitalmars-d

On 24.06.2014 11:34, seany wrote: Also, while we are at it,

 does d support declarations like:

 class C {

 public :

 int a;
 string b;
 double c;

 }

read the manual first http://dlang.org/class

 and could I as well write

 class C2{

 auto x

 this(T)(T y)
 {
  this.x = y;
 }

 }

that would not make sense at all

you would then also need methods that can auto-magically work with your x 
- and that is not (cleanly) possible


maybe a template+interface could help - but it seems that your 
implementation ideas are a little strange - give an example of what you 
are trying to achieve
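
A minimal sketch of the template direction hinted at above (class name and member are invented): instead of an auto field, the member type becomes a template parameter, so methods can work with x generically:

class C2(T)
{
    T x;
    this(T y) { this.x = y; }
}

void main()
{
    auto a = new C2!int(42);        // a C2 holding an int
    auto b = new C2!string("hi");   // a C2 holding a string
}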


Re: ANTLR grammar for D?

2014-06-20 Thread dennis luehring via Digitalmars-d

On 20.06.2014 08:57, Wesley Hamilton wrote:

I've started making a D grammar for ANTLR4, but I didn't want to
spend days testing and debugging it later if somebody already has
one.

The best search results turn up posts that are 10 years old. Only
one post has a link to a grammar file and the page seems to have
been removed. I also assume it would be obsolete with changes to
ANTLR and D.
http://www.digitalmars.com/d/archives/digitalmars/D/25302.html
http://www.digitalmars.com/d/archives/digitalmars/D/4953.html



the most up-to-date one seems to be https://github.com/Hackerpilot/DGrammar


Re: Perlin noise benchmark speed

2014-06-20 Thread dennis luehring via Digitalmars-d

On 20.06.2014 14:32, Nick Treleaven wrote:

Hi,
A Perlin noise benchmark was quoted in this reddit thread:

http://www.reddit.com/r/rust/comments/289enx/c0de517e_where_is_my_c_replacement/cibn6sr

It apparently shows the 3 main D compilers producing slower code than
Go, Rust, gcc, clang, Nimrod:

https://github.com/nsf/pnoise#readme

I initially wondered about std.random, but got this response:

Yeah, but std.random is not used in that benchmark, it just initializes
256 random vectors and permutates 256 sequential integers. What spins in
a loop is just plain FP math and array read/writes. I'm sure it can be
done faster, maybe D compilers are bad at automatic inlining or something. 

Obviously this is only one person's benchmark, but I wondered if people
would like to check their code and suggest reasons for the speed deficit.



write, printf etc. performance is benchmarked as well - so it's not clear
whether pnoise is super fast but write is super slow etc...


Re: Perlin noise benchmark speed

2014-06-20 Thread dennis luehring via Digitalmars-d

On 20.06.2014 15:14, dennis luehring wrote:

On 20.06.2014 14:32, Nick Treleaven wrote:

Hi,
A Perlin noise benchmark was quoted in this reddit thread:

http://www.reddit.com/r/rust/comments/289enx/c0de517e_where_is_my_c_replacement/cibn6sr

It apparently shows the 3 main D compilers producing slower code than
Go, Rust, gcc, clang, Nimrod:

https://github.com/nsf/pnoise#readme

I initially wondered about std.random, but got this response:

Yeah, but std.random is not used in that benchmark, it just initializes
256 random vectors and permutates 256 sequential integers. What spins in
a loop is just plain FP math and array read/writes. I'm sure it can be
done faster, maybe D compilers are bad at automatic inlining or something. 

Obviously this is only one person's benchmark, but I wondered if people
would like to check their code and suggest reasons for the speed deficit.



write, printf etc. performance is benchmarked as well - so it's not clear
whether pnoise is super fast but write is super slow etc...



using perf with 10 runs is maybe too small to give a good average result,
and runtime startup etc. is also measured - it's not clear what is slower

these benchmarks should be separated into 3 parts:

runtime startup
pure pnoise
result output - needed only once for verification; returning a dummy value 
would fit better for testing the pnoise speed


are array bounds checks active?


Re: Perlin noise benchmark speed

2014-06-20 Thread dennis luehring via Digitalmars-d

On 20.06.2014 17:09, bearophile wrote:

Nick Treleaven:


A Perlin noise benchmark was quoted in this reddit thread:

http://www.reddit.com/r/rust/comments/289enx/c0de517e_where_is_my_c_replacement/cibn6sr


This should be compiled with LDC2, it's more idiomatic and a
little faster than the original D version:
http://dpaste.dzfl.pl/8d2ff04b62d3

I have already seen that if I inline Noise2DContext.get in the
main manually the program gets faster (but not yet fast enough).

Bye,
bearophile



it does not make sense to optimize this example more and more - it 
should be fast with the original version (except for the missing finals on 
the virtuals)


Re: Perlin noise benchmark speed

2014-06-20 Thread dennis luehring via Digitalmars-d

On 20.06.2014 22:44, bearophile wrote:

dennis luehring:


it does not makes sense to optmized this example more and
more - it should be fast with the original version


But the original code is not fast. So someone has to find what's
broken. I have shown part of the broken parts to fix (floor on
ldc2).

Also, the original code is not written in a fully idiomatic way,
also because unfortunately today the lazy way to write D code
is not always the best/right way (example: you have to add ton of
immutable/const, and annotations, because immutability is not the
default), so a code fix is good.

Bye,
bearophile



as long as you find out it's a library thing

the C version is, without any annotations and immutable/const, the fastest 
- so what's the problem with D here? it can't (shouldn't) be that one needs 
to work on/change such simple code that much to reach C speed


Re: An LLVM bug that affect both LDC and SDC. Worth pushing for

2014-06-18 Thread dennis luehring via Digitalmars-d

On 18.06.2014 23:22, Iain Buclaw via Digitalmars-d wrote:

Likewise here.  But unless I'm missing something (I'm not sure what
magic happens with @allocate, for instance), I'm not sure how you
could expect the optimisation passes to squash closures together.

Am I correct in that it's asking for:
--
int *i = new int;
*i = 42;
return *i;


To be folded into:
--
return 42;


just to show what clang 3.5 svn and libc++ can currently optimize down

patches
clang: http://reviews.llvm.org/rL210137
libc++: http://reviews.llvm.org/rL210211

#example 1

#include <vector>
#include <numeric>
int main()
{
const std::vector<int> a{1,2};
const std::vector<int> b{4,5};
const std::vector<int> ints
{
  std::accumulate(a.begin(),a.end(),1),
  std::accumulate(b.begin(),b.end(),2),
};
return std::accumulate(ints.begin(),ints.end(),100);
}

asm result:

main:   # @main
   movl  $115, %eax
   retq

#example 2

#include <string>
int main()
{
   return std::string("hello").size();
}

asm result:

main:   # @main
   movl $5, %eax
   retq

an older clang/libc++, gcc 4.9.x, and VS2013 produce much much (much) 
more asm code in these situations


Re: An LLVM bug that affect both LDC and SDC. Worth pushing for

2014-06-18 Thread dennis luehring via Digitalmars-d

On 19.06.2014 07:16, deadalnix wrote:

If they go for clang specific solution, that aren't gonna cut it
for us :(



only as an orientation of what a weaker language + optimizer can reach :)


Re: A Perspective on D from game industry

2014-06-17 Thread dennis luehring via Digitalmars-d

On 17.06.2014 11:30, Walter Bright wrote:

And how would you syntax-highlight a string mixin that's assembled from
arbitrary string fragments?


You wouldn't need to, since the text editor sees only normal D code.



the text editor sees just D code inside strings - so no syntax highlighting 
except the one for strings
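
A tiny sketch of the situation being discussed - to a plain text editor the function below is just a string literal until it is mixed in:

// D code assembled as a string and compiled via mixin
enum code = q{ int tripled(int x) { return 3 * x; } };
mixin(code);
static assert(tripled(2) == 6);   // proves the mixed-in function exists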


Unicode 7.0.0 is out

2014-06-17 Thread dennis luehring via Digitalmars-d

http://www.unicode.org/versions/Unicode7.0.0/


Re: Unicode 7.0.0 is out

2014-06-17 Thread dennis luehring via Digitalmars-d

On 17.06.2014 16:58, Dmitry Olshansky wrote:

17-Jun-2014 17:43, dennis luehring wrote:

http://www.unicode.org/versions/Unicode7.0.0/


OMG

The good news is we haven't implemented yet the collation algorithm,
so no need to re-implement it! :)

P.S. Seriously we should be good to go, with a minor semi-automated
update to std.uni tables.



seems to be not ultra simple

http://www.unicode.org/reports/tr10/


Re: What's going on with std.experimental.lexer?

2014-06-13 Thread dennis luehring via Digitalmars-d

On 13.06.2014 16:59, Dejan Lekic wrote:

Please no.  See: javax

Spelling out 'experimental' is probably the best, for all those
reasons
already stated.


What's wrong with javax?



'experimental' is 100% clear and simple to understand as being evil

'javax' was interpreted as eXtended or eXtra or whatever, so people had
no problem using experimental stuff all the while


Re: DConf 2014 Day 1 Talk 4: Inside the Regular Expressions in D by Dmitry Olshansky

2014-06-12 Thread dennis luehring via Digitalmars-d-announce

On 12.06.2014 11:17, Dmitry Olshansky wrote:

This one thing I'm losing sleep over - what precisely is so good about
CTFE code generation in a _practical_ context (a DSL that is quite stable,
not just tiny helpers)?

By the end of day it's just about having to write a trivial line in your
favorite build system (NOT make) vs having to wait for a couple of
minutes each build hoping the compiler won't hit your system's memory
limits.

And these couple of minutes are more like 30 minutes at times. Worse
yet, unlike a proper build system it doesn't keep track of actual changes
(the same regex patterns get recompiled over and over); at this point
seamless integration into the language starts feeling like a joke.

And speaking of seamless integration: just generate a symbol name out of
pattern at CTFE to link to later, at least this much can be done
relatively fast. And voila even the clunky run-time generation is not
half-bad at integration.

Unless things improve dramatically CTFE code generation + mixin is just
our funny painful toy.


you should write a big top post about your CTFE experience/problems - it 
is important enough


Re: What's going on with std.experimental.lexer?

2014-06-09 Thread dennis luehring via Digitalmars-d

On 09.06.2014 22:21, Brian Schott wrote:

On Friday, 6 June 2014 at 23:50:40 UTC, Brian Schott wrote:

SIMD reduces execution time by 5.15% with DMD.
Compiling the non-SIMD code with GDC reduces execution time by
42.39%.

So... There's that.


Changing the code generator to output a set of if statements that
implements a binary search did more-or-less nothing with the DMD
timings, but brought GDC's lead up to 49%. (i.e. the GDC-compiled
version executes in 51% of the time that the DMD-compiled version
does)



the LLVM optimizer is also very good (sometimes far better than the gcc 
one) - what are your LDC timings?


Re: What's going on with std.experimental.lexer?

2014-06-08 Thread dennis luehring via Digitalmars-d

On 07.06.2014 01:50, Brian Schott wrote:

On Friday, 6 June 2014 at 00:33:23 UTC, Brian Schott wrote:

Implementing some SIMD code just in the lexWhitespace function
causes a drop in total lexing time of roughly 3.7%. This looks
promising so far, so I'm going to implement similar code in
lexStringLiteral, lexSlashStarComment, lexSlashSlashComment,
and lexSlashPlusComment.


Some more numbers:

SIMD reduces execution time by 5.15% with DMD.
Compiling the non-SIMD code with GDC reduces execution time by
42.39%.

So... There's that.



that's why i'm always puzzled when people start to optimize algorithms 
based on DMD results - currently one should always compare the results 
with GDC/LDC before optimizing


Re: [OT] C++ the Clear Winner In Google's Language Performance Tests

2014-06-06 Thread dennis luehring via Digitalmars-d

On 06.06.2014 16:34, Dejan Lekic wrote:

Slashdot thread:
http://developers.slashdot.org/story/11/06/15/0242237/c-the-clear-winner-in-googles-language-performance-tests

Research paper:
https://days2011.scala-lang.org/sites/days2011/files/ws3-1-Hundt.pdf

I wonder what would be situation if they included D, Rust and
even Ur in that benchmark... :)


or retest now - 3 years later :)




Re: Interview at Lang.NEXT

2014-06-05 Thread dennis luehring via Digitalmars-d-announce

On 05.06.2014 11:42, Jonathan M Davis via Digitalmars-d-announce wrote:

if(cond)
 var = "hello world";
else
 var = 42;

The fact that an if statement could change the type of a variable is just
atrocious IMHO. Maybe I've just spent too much of my time in statically typed
languages, but I just do not understand the draw that dynamically typed
languages have for some people. They seem to think that avoiding a few simple
things that you have to do in your typical statically typed language is
somehow a huge improvement when it causes them so many serious problems that
static languages just don't have.


maybe some sort of misunderstood generic style
of programming from their pre-D times :)



Re: Interview at Lang.NEXT

2014-06-04 Thread dennis luehring via Digitalmars-d-announce

On 04.06.2014 19:57, Meta wrote:

On Wednesday, 4 June 2014 at 17:55:15 UTC, bearophile wrote:

How many good usages of D Variant do you know?

Bye,
bearophile


It depends on what you mean by a good usage. I rarely ever use
Variant, but you *can* use it if you need weak and/or dynamic
typing.



but D+Variant is still far away from an untyped language - because
everything needs to be based on Variant - every signature... so
it isn't an ~correct~ solution in this context


port C++ to D - copy constness

2014-06-02 Thread dennis luehring via Digitalmars-d-learn
i want to port this C++ code to good/clean D and have no real idea how 
to start


it contains 2 templates - a slice-like one and a binary reader for a slice;
the main idea is to copy the immutability of the slice data over to the reader

http://pastebin.com/XX2yhm8D

the example compiles fine with http://gcc.godbolt.org/, clang version 
3.4.1 and compiler options: -O2 -std=c++11


the slice_T template could maybe be reduced down to a normal D slice,
but i want to control the slice's (im)mutability - so maybe there is still 
a need for the slice_T thing


i don't know if the binary reader's read_ref method should be written 
totally differently in D


any tips, ideas?





Re: port C++ to D - copy constness

2014-06-02 Thread dennis luehring via Digitalmars-d-learn

On 02.06.2014 12:09, Timon Gehr wrote:

On 06/02/2014 09:06 AM, dennis luehring wrote:

i want to port this C++ code to good/clean D and have no real idea how
to start

contains 2 templates - a slice like and a binary reader for an slice
main idea was to copy the immutablity of the slice data to the reader

http://pastebin.com/XX2yhm8D

the example compiles fine with http://gcc.godbolt.org/, clang version
3.4.1 and compiler-options: -O2 -std=c++11

the slice_T template - could be maybe reduce down to an normal D slice
but i want to control the slice (im)mutability - so maybe there is still
a need for the slice_T thing

i don't know if the binary reader read_ref method should be written
totaly different in D

any tips, ideas?



If the following is not already what you were looking for, it should get
you started. (But note that the interface provided by BinaryReader is
unsafe: It may invent pointers. You might want to add template
constraints that would at least allow the implementation to be @trusted.)

template CopyQualifiers(S,T){
  import std.traits;
  static if(is(S==const)) alias T1=const(Unqual!T);
  else alias T1=Unqual!T;
  static if(is(S==immutable)) alias T2=immutable(T1);
  else alias T2=T1;
  static if(is(S==inout)) alias T3=inout(T2);
  else alias T3=T2;
  static if(is(S==shared)) alias CopyQualifiers=shared(T3);
  else alias CopyQualifiers=T3;
}

struct BinaryReader(T){
  @disable this();
  this(T[] slice){ this.slice=slice; }
  size_t left()const{ return slice.length - offset; }
  bool enoughSpaceLeft(size_t size)const{ return size <= left(); }
  ref readRef(V)(){
  if(!enoughSpaceLeft(V.sizeof)) throw new Exception("1");
  auto off=offset;
  offset+=V.sizeof;
  return *cast(CopyQualifiers!(T,V)*)(slice.ptr+off);
  }
  auto readValue(V)(){ return readRef!V(); }
private:
  T[] slice;
  size_t offset=0;
}

auto binaryReader(T)(T[] slice){ return BinaryReader!T(slice); }

void main(){
  import std.stdio;
  try{
  auto testData = "THIS IS BINARY TEST DATA"; // no comment
  auto stream = binaryReader(testData);
  static assert(is(typeof(stream.readRef!uint())==immutable));
  (ref ref_){
  auto value = stream.readValue!uint();
  }(stream.readRef!uint());
  }catch(Exception e){
  writeln("exception error: ", e.msg);
  }catch{
  writeln("exception unknown");
  }
}




seems to be a good start - how would you implement such a slice/reader 
thing in idiomatic D style - the same way?






Re: Performance

2014-05-31 Thread dennis luehring via Digitalmars-d

On 31.05.2014 08:36, Russel Winder via Digitalmars-d wrote:

As well as the average (mean), you must provide standard deviation and
degrees of freedom so that a proper error analysis and t-tests are
feasible.


average means the average of the benchmarked times

and the dummy values are only there to keep the compiler from removing
anything it can reduce at compile time - that makes benchmarks 
comparable; these values do not change the algorithm or result 
quality in any way - it's more like an overflowing secondary output based on 
the result of the original algorithm (it should be just a simple 
addition or subtraction - ignoring overflow etc.)


that's the basis of all kinds of non-stupid benchmarking - the next/pro step 
is to look at the resulting assembler code




Re: Performance

2014-05-31 Thread dennis luehring via Digitalmars-d

On 31.05.2014 13:25, dennis luehring wrote:

On 31.05.2014 08:36, Russel Winder via Digitalmars-d wrote:

As well as the average (mean), you must provide standard deviation and
degrees of freedom so that a proper error analysis and t-tests are
feasible.


average means average of benchmarked times

and the dummy values are only for keeping the compiler from removing
anything it can reduce at compiletime - that makes benchmarks
compareable, these values does not change the algorithm or result
quality an any way - its more like an overflowing-second-output bases on
the result of the original algorithm (but should be just a simple
addition or substraction - ignoring overflow etc.)

thats the base of all types of non-stupid benchmarking - next/pro step
is to look at the resulting assemblercode



so the anti-optimizer overflowing secondary output, aka AOOSO, should be

initialized outside of the test function with a random value - i normally 
use the pointer to the main args cast to an int


the AOOSO should be incremented by the needed result of the benchmarked
algorithm - that could be an int-cast float/double value, the varying 
size of a string, or whatever is cheap and depends on the result enough to be usable


and then the AOOSO is returned as the main return value

so the original algorithm isn't changed, but the compiler has absolutely 
nothing it can use to remove the usage and the final output of this AOOSO dummy value


yes, it ignores that the code size (cache behaviour) is changed by the 
AOOSO incrementation - that's the reason for the simple casting/overflowing 
integer stuff here; but if the benchmarking goes that deep you'd 
better take a look at the assembler level
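
A minimal D sketch of the AOOSO pattern described above (benchmarkedAlgorithm is a made-up stand-in): the dummy is seeded from the args pointer, every result is folded into it, and it is returned from main so dead-code elimination cannot remove the benchmarked work:

double benchmarkedAlgorithm(int x) { return x * 1.0001; }   // stand-in

int main(string[] args)
{
    int aooso = cast(int) cast(size_t) args.ptr;   // "random" per-run seed

    foreach (i; 0 .. 1_000_000)
        aooso += cast(int) benchmarkedAlgorithm(i + aooso);  // cheap fold, overflow ignored

    return aooso;   // keeps the whole loop observable for the optimizer
}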





Re: Performance

2014-05-30 Thread dennis luehring via Digitalmars-d

faulty benchmark

-do not benchmark format

-use a dummy var - just add your plus() results to it (overflow is not a 
problem) and return it from your main - preventing dead code 
optimization in any case


-introduce some sort of random value into your plus() code, for example 
use a random generator or the int-cast pointer to the program args as 
the startup value


-do not benchmark anything without millions of loop iterations - use the average 
as the result


anything else does not make sense

On 30.05.2014 15:35, Thomas wrote:

I made the following performance test, which adds 10^9 Double’s
on Linux with the latest dmd compiler in the Eclipse IDE and with
the Gdc-Compiler also on Linux. Then the same test was done with
C++ on Linux and with Scala in the Java ecosystem on Linux. All
the testing was done on the same PC.
The results for one addition are:

D-DMD: 3.1 nanoseconds
D-GDC: 3.8 nanoseconds
C++: 1.0 nanoseconds
Scala: 1.0 nanoseconds


D-Source:

import std.stdio;
import std.datetime;
import std.string;
import core.time;


void main() {
run!(plus)( 1000*1000*1000 );
}

class C {
}

string plus( int steps  )  {
double sum = 1.346346;
immutable double p0 = 0.0045;
immutable double p1 = 1.00045452-p0;
auto b = true;
for( int i=0; i<steps; i++){
switch( b ){
case true :
  sum += p0;
  break;
default:
  sum += p1;
  break;
}
b = !b; 
}
return (format("%s  %f", "plus\nLast: ", sum) );
//  return ("plus\nLast: ", sum );
}


void run( alias func )( int steps )
if( is(typeof(func(steps)) == string)) {
auto begin = Clock.currStdTime();
string output = func( steps );
auto end =  Clock.currStdTime();
double nanotime = toNanos(end-begin)/steps;
writeln( output );
writeln( "Time per op:  ", nanotime );
writeln( );
}

double toNanos( long hns ) { return hns*100.0; }


Compiler settings for D:

dmd -c
-of.dub/build/application-release-nobounds-linux.posix-x86-dmd-DF74188E055ED2E8ADD9C152107A632F/first.o
-release -inline -noboundscheck -O -w -version=Have_first
-Isource source/perf/testperf.d

gdc ./source/perf/testperf.d -frelease -o testperf

So what is the problem ? Are the compiler switches wrong ? Or is
D on the used compilers so slow ? Can you help me.


Thomas






Re: Scott Meyers' DConf 2014 keynote The Last Thing D Needs

2014-05-28 Thread dennis luehring via Digitalmars-d-announce

it would be nice to have some sort of example-by-example comparison,
or an extension to the page http://dlang.org/cpptod.html

On 28.05.2014 07:40, Jesse Phillips wrote:

On Wednesday, 28 May 2014 at 05:30:18 UTC, Philippe Sigaud via
Digitalmars-d-announce wrote:

I did a translation of most of the code in the slides.

http://dpaste.dzfl.pl/72b5cfcb72e4

I'm planning to transform it into blog post (or series). Right
now it just
has some scratch notes. Feel free to let me know everything I
got wrong.


That's a good idea. I think most of us did that while listening
to the
talk. I kept telling myself: 'oh wait, that'd simpler in D' or
'that
does not exist in D'.

As for the class inheritance problem, I'd also be interested in
an answer.


When he explained why C++ inferred a const int type as int, he
tripped me up because D does drop const for value types. But D
does the simple to explain thing, may not be the expected thing
(seen questions about it in D.learn), but it is simple to explain.





OT: but maybe still interesting - runtime c++ compilation

2014-05-28 Thread dennis luehring via Digitalmars-d
could be a nice next step for D if a D compiler as a library becomes 
available someday


http://runtimecompiledcplusplus.blogspot.co.uk/

from the blog:

Runtime Compiled C++ is in Kythera, the AI behind Star Citizen.

Video: RCC++ at the 2012 Develop Conference
http://vimeo.com/85934969

and it seems that the Crytek and Unreal guys are also playing in this field



Re: OT: but maybe still interesting - runtime c++ compilation

2014-05-28 Thread dennis luehring via Digitalmars-d

an example from a simulation project

a runtime-configured (0-6)-axis kinematics system able to bring the axes to 
a given position or to calculate back to axis positions for simulation 
purposes


so the interface is the target/currently-reached position and the axis positions

currently done by using virtuals; it can maybe be optimized further - but 
definitely not at compile time




Re: OT: but maybe still interesting - runtime c++ compilation

2014-05-28 Thread dennis luehring via Digitalmars-d

On 28.05.2014 16:32, Byron Heads wrote:

Would love to have this for vibe.d


for what use case?


Re: DFL is really cool,Who can contact Christopher E. Miller?

2014-05-14 Thread dennis luehring via Digitalmars-d

On 15.05.2014 05:58, FrankLike wrote:

1.DFL's Memory Usage is the least than other. winsamp.exe is
2.1M,DFL's example's exe is 2.7M.
2.The size of DFL's example's exe files is the least than other,
and only a single file.
3.DFL's source code is the most easy to understand.

D need Christopher E. Miller.



and what should happen then? he seems to have lost interest a long time ago,
and there are some forks of the project on github - so why does D need 
Christopher E. Miller in person?


Re: Cost of assoc array?

2014-05-14 Thread dennis luehring via Digitalmars-d-learn

On 14.05.2014 12:33, Chris wrote:

On Wednesday, 14 May 2014 at 10:20:51 UTC, bearophile wrote:

Chris:


Is there any huge difference as regards performance and memory
footprint between the two? Or is 2. basically 1. under the
hood?


An associative array is a rather more complex data structure,
so if you don't need it, use something simpler. There is
difference in both the amount of memory used and performance
(in many cases such difference doesn't matter). In D there are
also differences in the way they are initialized from a null or
fat null.

Bye,
bearophile


Thanks. Do you mean the difference is negligible in many cases?
I'm not sure, because changing it would be a breaking change in
the old code, meaning I would have to change various methods.


a simple array would be faster because no hash lookup is generated for your 
key - just plain indexed access; just don't use assoc arrays if you don't need 
key-based access


read more manuals about hashmaps and such, and about how to do benchmarking - 
that helps a lot in the future





Re: Cost of assoc array?

2014-05-14 Thread dennis luehring via Digitalmars-d-learn

On 14.05.2014 15:20, Chris wrote:

Profiling is not really feasible, because for this to work
properly, I would have to introduce the change first to be able
to compare both. Nothing worse than carefully changing things
only to find out, it doesn't really speed up things.


why not use an alias for an easier switch between the versions?

alias string[][size_t] my_array_type;

or

alias string[][] my_array_type;

and do a search&replace of string[][size_t] with my_array_type - that's 
it - still too hard? :)








Re: Challenge: write a really really small front() for UTF8

2014-03-25 Thread dennis luehring

On 24.03.2014 17:44, Andrei Alexandrescu wrote:

On 3/24/14, 5:51 AM, w0rp wrote:

On Monday, 24 March 2014 at 09:02:19 UTC, monarch_dodra wrote:

On Sunday, 23 March 2014 at 21:23:18 UTC, Andrei Alexandrescu wrote:

Here's a baseline: http://goo.gl/91vIGc. Destroy!

Andrei


Before we roll this out, could we discuss a strategy/guideline in
regards to detecting and handling invalid UTF sequences?

Having a fast front is fine and all, but if it means your program
asserting in release (or worst, silently corrupting memory) just
because the client was trying to read a bad text file, I'm unsure this
is acceptable.


I would strongly advise to at least offer an option


Options are fine for functions etc. But front would need to find an
all-around good compromise between speed and correctness.

Andrei



b"\255".decode("utf-8", errors="strict") # UnicodeDecodeError
b"\255".decode("utf-8", errors="replace") # replacement character used
b"\255".decode("utf-8", errors="ignore") # Empty string, invalid
sequence removed.

i think there should be a base range for UTF8 iteration - with policy-based 
error handling (like in python) and some variants that derive from this 
base UTF8 range with different error behavior - and one of these becomes 
the phobos standard = the default parameter, so it's still switchable
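
A rough sketch of that policy idea (every name is invented; this is not a Phobos API): the error behavior is a template parameter of the decoding range, so the strict and the replacing variant share one implementation:

enum OnError { throwError, replaceChar }

struct Utf8Front(OnError policy = OnError.throwError)
{
    const(char)[] data;

    dchar front()
    {
        import std.utf : decodeFront, UTFException;
        auto tmp = data;
        try
            return decodeFront(tmp);
        catch (UTFException e)
        {
            static if (policy == OnError.throwError)
                throw e;
            else
                return '\uFFFD';          // substitute the replacement character
        }
    }
    // empty/popFront omitted to keep the sketch short
}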





Re: Challenge: write a really really small front() for UTF8

2014-03-25 Thread dennis luehring

On 25.03.2014 11:38, Nick Sabalausky wrote:

On 3/25/2014 4:00 AM, Iain Buclaw wrote:

On 25 March 2014 00:04, Daniel N u...@orbiting.us wrote:

On Monday, 24 March 2014 at 12:21:55 UTC, Daniel N wrote:


I'm currently too busy to submit a complete solution, but please feel free
to use my idea if you think it sounds promising.



I now managed to dig up my old C source... but I'm still blocked by dmd not
accepting the 'pext' instruction...

1) I know my solution is not directly comparable to the rest in this
thread(for many reasons).
2) It's of course trivial to add a fast path for ascii... if desired.
3) It throws safety and standards out the window.




4) It's tied to one piece of hardware.

No Thankee.

void doStuff() {
  if(supportCpuFeatureX)
  doStuff_FeatureX();
  else
  doStuff_Fallback();
}

   dmd -inline blah.d


the extra branch could kill the performance benefit if doStuff is too small


Re: Challenge: write a really really small front() for UTF8

2014-03-24 Thread dennis luehring

On 24.03.2014 10:02, monarch_dodra wrote:

Having a fast front is fine and all, but if it means your
program asserting in release (or worst, silently corrupting
memory) just because the client was trying to read a bad text
file, I'm unsure this is acceptable.


it would be great to have a basic form of this range whose error 
behavior could be extended policy-based / via templates - the phobos version 
could extend the basic range with the preferred error behavior - but the 
basic range would still be able to read without checking - for example if i 
know that my input is 100% valid and i need the speed etc.





Re: Challenge: write a really really small front() for UTF8

2014-03-24 Thread dennis luehring

On 24.03.2014 13:51, w0rp wrote:

On Monday, 24 March 2014 at 09:02:19 UTC, monarch_dodra wrote:

On Sunday, 23 March 2014 at 21:23:18 UTC, Andrei Alexandrescu
wrote:

Here's a baseline: http://goo.gl/91vIGc. Destroy!

Andrei


Before we roll this out, could we discuss a strategy/guideline
in regards to detecting and handling invalid UTF sequences?

Having a fast front is fine and all, but if it means your
program asserting in release (or worst, silently corrupting
memory) just because the client was trying to read a bad text
file, I'm unsure this is acceptable.


I would strongly advise to at least offer an option, possibly via
a template parameter, for turning error handling on or off,
similar to how Python handles decoding. Examples below in Python
3.

b"\255".decode("utf-8", errors="strict") # UnicodeDecodeError
b"\255".decode("utf-8", errors="replace") # replacement character
used
b"\255".decode("utf-8", errors="ignore") # Empty string, invalid
sequence removed.

All three strategies are useful from time to time. I mainly reach
for option three when I'm trying to get some text data out of
some old broken databases or similar.

We may consider leaving the error checking on in -release for the
'strict' decoding, but throwing an Error instead of an exception
so the function can be nothrow. This would prevent memory
corruption in release code. assert vs throw Error is up for
debate.



+1


Re: [OT] Sony is making their Playstation C# tools open source

2014-03-11 Thread dennis luehring

On 11.03.2014 10:38, Paulo Pinto wrote:

Hi,

since game development discussions tend to come up here, Sony is
making their C# tools open source, used in games by Naughty Dog,
Guerrilla Games and others.

https://github.com/SonyWWS/ATF

One good example how GC based languages do not hinder game
development and are gaining place alongside C++ as part of the
development process.


still, sony works on its own llvm/clang-based C++ compiler for the PS4 :)




Re: Interface to Microsoft Access database (Jet)

2014-03-11 Thread dennis luehring

On 11.03.2014 08:37, Orfeo wrote:

I should extract and process data from Microsoft Access database,
and
mdbtools  is not enough.
Is there a library that I can use to query Access?
Thanks



mdbtools is not enough

what is not enough?

what version of mdbtools do you use?

https://github.com/brianb/mdbtools




Re: Interface to Microsoft Access database (Jet)

2014-03-11 Thread dennis luehring

On 11.03.2014 09:06, Orfeo wrote:

Thank you for github link, I had tried only with mdbtools on
http://mdbtools.sourceforge.net/...



So, it seems that I can connect using libmdb or odbc ...


so everything is fine? or can you not?


have you any suggestions?


answer the question

what do you need?




Re: Major performance problem with std.array.front()

2014-03-10 Thread dennis luehring

On 07.03.2014 03:37, Walter Bright wrote:

In "Lots of low hanging fruit in Phobos" the issue came up about the automatic
encoding and decoding of char ranges.


after reading many of the attached posts the question is - what
could be D's future process for introducing breaking changes? it's
not a solution to say it's not possible because of too many breaking 
changes - that will become more and more a problem for D's evolution

- much like in C++



OT: clang guys postet MSVC compatibility info

2014-03-02 Thread dennis luehring

http://clang.llvm.org/docs/MSVCCompatibility.html


Re: OT: clang guys postet MSVC compatibility info

2014-03-02 Thread dennis luehring

On 02.03.2014 14:45, Asman01 wrote:

On Sunday, 2 March 2014 at 09:47:15 UTC, dennis luehring wrote:

http://clang.llvm.org/docs/MSVCCompatibility.html


It's a true gcc replacement. I've hear the guys of clang are
creating compiler for Microsoft languages too.



 I've hear the guys of clang are
 creating compiler for Microsoft languages too.

what are Microsoft languageS?
clang is a C/C++ compiler with microsoft extensions/incorrectness support

what other languages?


Re: OT: clang guys postet MSVC compatibility info

2014-03-02 Thread dennis luehring

On 02.03.2014 18:34, Remo wrote:

The only one the are in common with C
is C# but it is managed language.


C# and C have nearly nothing in common, except the C in both names


Re: DIP56 Provide pragma to control function inlining

2014-02-23 Thread dennis luehring

On 23.02.2014 13:38, Dmitry Olshansky wrote:

23-Feb-2014 16:07, Walter Bright wrote:

http://wiki.dlang.org/DIP56

Manu has needed always inlining, and I've needed never inlining. This
DIP proposes a simple solution.


Why pragma? Also how exactly it is supposed to work:

pragma(inline, true);
... //every declaration that follows is forcibly inlined?
pragma(inline, false);
... //every declaration that follows is forcibly NOT inlined?

How to return to normal state then? I think pragma is not attached to
declaration.

I'd strongly favor introducing a compiler-hint family of UDAs and
force_inline/force_notinline as first among many.


yeah, it feels strange - like naked in inline asm:
it's a scope changer - that sits inside the scope it changes???

like writing public methods by putting public inside the method - and 
public is also compiler-relevant for the generated interface


and align is also not a pragma - and it still changes code generation

it's a function (compile-time) attribute, but that does not mean it has to
be a pragma

btw: is the pragma way just easier to implement? otherwise i don't 
understand why this is handled so specially




Re: Phobos for Review: std.buffer.scopebuffer

2014-02-07 Thread dennis luehring

On 07.02.2014 10:13, Walter Bright wrote:

1. It's set up to fit into two registers on 64 bit code. This means it can be
passed/returned from functions in registers. When I used this in my own code,
high speed was the top priority, and this made a difference.


did you also test the codegen of ldc and gdc - or is this optimization
only working for dmd?



Re: Idea #1 on integrating RC with GC

2014-02-06 Thread dennis luehring

On 06.02.2014 11:21, Ola Fosheim Grøstad wrote:

Not having the source code to a library is rather risky in terms
of having to work around bugs by trail and error, without even
knowing what the bug actually is.


so you don't work in the normal software development business, where
non-source-code third-party dependencies are completely normal :)



Re: Idea #1 on integrating RC with GC

2014-02-06 Thread dennis luehring
On 06.02.2014 11:43, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:

On Thursday, 6 February 2014 at 10:29:20 UTC, dennis luehring
wrote:

so you're not work in the normal software development business
where
non-source code third party dependicies are fully normal :)


Not in terms of libraries no, what libraries would that be?


all libraries that other departments don't want you to see or that you don't
need to see - is that so unusual in your environment?




Re: New debugger for D!!!

2014-01-28 Thread dennis luehring

On 28.01.2014 17:24, dennis luehring wrote:

On 28.01.2014 17:16, Sarath Kodali wrote:

I expect this is how it will be even with dbg and IDEs. The IDE
will have a plugin that sits between the debugger and IDE. The
communication between the IDE plugin and debugger will be over a
socket and the dbg output will be in JSON format so that IDE
plugin can parse it properly. Depending on the IDE, the plugin
will be either a library (dll) or an independent executable.


it's a little bit different from pin, because pin is the host
of the given tool/communication dll - so the dll interface is the
interface, not JSON

(pin (tool dll)) <-- any protocol --> ide/whatever

in your idea

dbg <-- socket/JSON --> ide/whatever

the question is - debuggers need to throw masses
of information around - why put a slow JSON parser in between? every single
step command gets JSONed, parsed, single-stepped, the result gets JSONed etc...
millions of times - why?



i would suggest a real tool api for loaded protocol drivers - like pin
does - and implement a control_dbg_with_tcp_and_json.dll as one driver;
this way it's still possible to build a super fast tracing server on top
of dbg - otherwise JSON will become a problem, without any need, because
the same goal is reachable with a driver dll (plugin)
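
To make that concrete, a rough D sketch of what such a driver API could look like (every name and field here is invented): the debugger resolves one exported factory per driver dll and then calls plain functions instead of serializing each step over JSON:

struct DebugEvent
{
    ulong address;   // where the debuggee stopped
    int   kind;      // step, breakpoint, exception, ...
}

alias AttachFn = extern(C) void function(const(char)* target);
alias EventFn  = extern(C) void function(ref const DebugEvent ev);
alias DetachFn = extern(C) void function();

struct DriverVTable
{
    AttachFn onAttach;
    EventFn  onEvent;    // called for every single step / breakpoint
    DetachFn onDetach;
}

// each driver dll (e.g. control_dbg_with_tcp_and_json.dll) exports this factory
extern(C) DriverVTable* dbg_get_driver();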






Re: New debugger for D!!!

2014-01-28 Thread dennis luehring

On 28.01.2014 17:16, Sarath Kodali wrote:

I expect this is how it will be even with dbg and IDEs. The IDE
will have a plugin that sits between the debugger and IDE. The
communication between the IDE plugin and debugger will be over a
socket and the dbg output will be in JSON format so that IDE
plugin can parse it properly. Depending on the IDE, the plugin
will be either a library (dll) or an independent executable.


it's a little bit different from pin, because pin is the host
of the given tool/communication dll - so the dll interface is the 
interface, not JSON


(pin (tool dll)) <-- any protocol --> ide/whatever

in your idea

dbg <-- socket/JSON --> ide/whatever

the question is - debuggers need to throw masses
of information around - why put a slow JSON parser in between? every single 
step command gets JSONed, parsed, single-stepped, the result gets JSONed etc... 
millions of times - why?




Re: New debugger for D!!!

2014-01-28 Thread dennis luehring

On 28.01.2014 18:23, Sarath Kodali wrote:

the question is - debuggers needs to throw masses
of information around - why put a slow JSON parser between,
every single step command gets JSONed, parsed, singlestep,
result gets JSOned etc... millions of times - why?

I'm not fixated on JSON:)  I thought that is more popular
now-a-days:). Today dbg outputs in human readable format. After
the alpha release, I will add the machine readable format - what
everyone prefers.


understood - i would just use a plugin system for adding JSON or whatever
communication (like pin does), so it's still api based - not (text/binary)
protocol based from the very beginning


Re: New debugger for D!!!

2014-01-27 Thread dennis luehring

On 28.01.2014 04:00, Sarath Kodali wrote:

I'm also
planning to add a JSON or CSV output format so that it will be
easy to parse the output when integrating with IDEs. So I would
recommend that you wait till I release 1.0 version - sometime
before Dconf 2014 - hopefully!


why not ease the IDE integration even more - for example,
the pin tool from intel (ptools.org) is a normal executable (the server),
but you can give pin a tool/commander dll on the command line which is then
responsible for controlling the debugger - this way it's very easy to
integrate the debugger into any environment, fast and with good performance


examples

pin.exe -t idadbg.dll - starts pin with an IDA-tool-dll to be able to
control pin with the ida debugger

pin.exe -t vsdbg.dll - starts pin with a vs-studio debug helper;
this way you can use pin as a debugger for VStudio

etc.

csv and json are nice - but there are much nicer ways of doing ipc






Re: TLF = thread local functions

2014-01-23 Thread dennis luehring

On 23.01.2014 15:44, Frustrated wrote:

So, TLS solves the data issue with threading. I just thought,
without much thinking, what about having thread local functions?
Doesn't make sense? Let me explain.

Functions generally are not thread safe because of reentry,
right? The same data is used by the function for each thread
calling it and since threads could effectively call a function
while the same function is being executed by another thread, the
data is correct.


no - the parameters and local vars of a function live on the stack of
the calling thread - so there is no problem with them; only shared variables
can ever need synchronization


so your idea tries to solve a non-existing problem?
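
(a small illustration, not from the thread, of why plain parameters and locals
need no synchronization - every thread runs the function on its own stack)

import core.thread : Thread;

int square(int x)
{
    int local = x * x;   // x and local live on the calling thread's stack
    return local;
}

void main()
{
    auto t1 = new Thread({ assert(square(3) == 9); });
    auto t2 = new Thread({ assert(square(4) == 16); });
    t1.start(); t2.start();
    t1.join(); t2.join();
}

only data reachable from several threads at once (globals, shared or heap
objects passed around) ever needs locking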



Re: dmd 2.065 beta 1

2014-01-19 Thread dennis luehring

On 18.01.2014 15:13, Andrew Edwards wrote:

On 1/18/14, 8:42 AM, Daniel Murphy wrote:

Andrew Edwards  wrote in message news:lbdumk$2oki$1...@digitalmars.com...

[1] ftp://ftp.digitalmars.com/dmd.2.065.beta.1.zip


Windows bin folder is empty.  I'd post on the list but I'm not sure it's
working at the moment.



Thanks. New file uploaded.


still no fully automated build all the way down to the zip file :)



Re: D benchmark code review

2013-12-13 Thread dennis luehring

On 13.12.2013 14:56, logicchains wrote:

I didn't realise D could do that; I've updated the code to use
that style of variable declaration. It's interesting to be able
to declare two arrays at once like so:


please revert - it looks terribly newbie-like; no one except Rikki
writes it like this


it makes absolutely no sense to pseudo-scope these enums and variable
declarations; they do not share anything except the type, AND that is not
enough to justify a pseudo-scope




Re: Inherent code performance advantages of D over C?

2013-12-12 Thread dennis luehring

On 12.12.2013 21:16, Max Samukha wrote:

On Thursday, 12 December 2013 at 20:06:37 UTC, Walter Bright
wrote:

On 12/12/2013 11:57 AM, Max Samukha wrote:

On Thursday, 12 December 2013 at 17:56:12 UTC, Walter Bright
wrote:


11. inline assembler being a part of the language rather than
an extension
that is in a markedly different format for every compiler


Ahem. If we admit that x86 is not the only ISA in existence,
then what is
(under)specified here http://dlang.org/iasm.html is a
platform-specific extension.


I know of at least 3 different C x86 inline assembler syntaxes.
This is not convenient, to say the least.


I know that too. I appreciate that you attempted to standardize
the asm for x86. But the question is what to do about other
targets? What about ARM, MIL, LLVM IR or whatever low-level
target a D compiler may compile too? Will those be standardized
as part of the language?


like freepascal, which has had support for x86 and ARM inline asm (and other
targets) for years?
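
(for reference, a minimal sketch of what DMD's built-in x86 inline assembler
looks like today - x86/x86_64 only, which is exactly the open question for
other targets like ARM)

int addOne(int x)
{
    version (D_InlineAsm_X86_64)
    {
        int result;
        asm
        {
            mov EAX, x;      // load the parameter
            add EAX, 1;
            mov result, EAX; // store back into a D variable
        }
        return result;
    }
    else
    {
        return x + 1;        // fallback for targets without this iasm dialect
    }
}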




shift right with >>>?

2013-11-30 Thread dennis luehring

http://rosettacode.org/wiki/Variable-length_quantity#D

ulong value = 0x100;
ubyte[] v = [0x7f & value];
for (ulong k = value >>> 7; k > 0; k >>>= 7)
{
  v ~= (k & 0x7f) | 0x80;
  if (v.length > 9)
  {
    throw new Exception("Too large value.");
  }
}
v.reverse();

while porting this to D i stumbled across the >>> - why is that
allowed in D?
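
(for anyone wondering what >>> buys you over >>, a tiny example of my own:
>> is an arithmetic shift that keeps the sign bit on signed types, >>> always
shifts in zero bits)

void main()
{
    int x = -8;                       // bit pattern 0xFFFF_FFF8
    assert((x >> 1)  == -4);          // arithmetic shift, sign preserved
    assert((x >>> 1) == 0x7FFF_FFFC); // logical shift on the raw bits
}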


Re: Dynamic Sized Stack Frames

2013-11-18 Thread dennis luehring

On 18.11.2013 08:31, Mike Parker wrote:

On 11/18/2013 3:03 PM, Jonathan Marler wrote:

On Monday, 18 November 2013 at 05:58:25 UTC, Chris Cain wrote:



core.stdc.stdlib has alloca ... it allocates a runtime configurable
amount of memory on the stack. There's a few threads on doing some
tricks with it. For instance, if you use it as a default parameter,
then it'll allocate on the caller's stack so you can actually return a
ref to it from the function :)


Are you serious?  It seems the more I learn about D the more impressed I
become.  Thanks for the quick response :)


While D is awesome, alloca is actually a standard C function, which is
why it's in the core.stdc.stdlib package. You have full access to the C
standard library from D.



alloca is not in the C standard - but most compilers implement it - and
it's not a regular function, because it reserves space on the caller's stack frame
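
(a minimal usage sketch - core.stdc.stdlib exposes it in dmd/ldc/gdc - the key
point being that the memory belongs to the current stack frame and must never
escape it)

import core.stdc.stdlib : alloca;

void process(size_t n)
{
    // reserve n bytes in *this* function's stack frame
    auto buf = (cast(ubyte*) alloca(n))[0 .. n];
    buf[] = 0;
    // use buf only inside this function - returning or storing the slice
    // would leave it dangling once the frame is gone
}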


Re: DIP 50 - AST macros

2013-11-14 Thread dennis luehring

On 14.11.2013 09:40, Jacob Carlborg wrote:

On 2013-11-14 08:54, dennis luehring wrote:


perfect for the DIP example section - more of these please :)


Done: http://wiki.dlang.org/DIP50#C.2B.2B_Namespaces_.28issue_7961.29



it would also be nice to always have (if possible) a string-mixin
and/or expression-template variant of each example around, to show the
dirtiness, or to show very quickly that it can't be done without macros




Re: DIP 50 - AST macros

2013-11-14 Thread dennis luehring

On 14.11.2013 09:53, Walter Bright wrote:

On 11/13/2013 11:37 PM, Jacob Carlborg wrote:

On 2013-11-14 01:16, Walter Bright wrote:


Yes. But that's a good thing. I'd be pretty skeptical of the value of an
AST macro that took 3+4 and changed it so it did something other than
compute 7.


You can still do stupid things like that with operator overloading. Not on
built-in types, but on user defined types. Every language allows you to do
stupid things, one way or another.


Sure, but the issue is that expression templates are not for int+int, but for
T+int. My question is what value would there be in a rewrite of int+int to
mean something different.


int + int could be part of a PC/GPU combination that generates
CUDA and D interchange code at compile time





Re: DIP 50 - AST macros

2013-11-14 Thread dennis luehring

On 14.11.2013 10:06, dennis luehring wrote:

On 14.11.2013 09:53, Walter Bright wrote:

On 11/13/2013 11:37 PM, Jacob Carlborg wrote:

On 2013-11-14 01:16, Walter Bright wrote:


Yes. But that's a good thing. I'd be pretty skeptical of the value of an
AST macro that took 3+4 and changed it so it did something other than
compute 7.


You can still do stupid things like that with operator overloading. Not on
built-in types, but on user defined types. Every language allows you to do
stupid things, one way or another.


Sure, but the issue is that expression templates are not for int+int, but for
T+int. My question is what value there would be in a rewrite of int+int to
mean something different.


int + int could be part of a PC/GPU combination that generates
CUDA and D interchange code at compile time


or like Don's Blade engine with string mixins,

which could produce high-performance x87 asm code for normal D expressions
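
(for comparison, the expression-template flavour of that is already expressible
today for wrapper types - the raw int + int case is exactly what templates
cannot touch, which is the gap the macro discussion is about; a sketch with
invented names)

struct Expr(string op, L, R)   // one node of the captured expression tree
{
    L lhs;
    R rhs;
}

struct Val
{
    int value;
    auto opBinary(string op, R)(R rhs) const
    {
        return Expr!(op, Val, R)(this, rhs);   // build a node, don't evaluate
    }
}

void main()
{
    auto tree = Val(3) + Val(4);   // not 7 - a node describing the "+"
    static assert(is(typeof(tree) == Expr!("+", Val, Val)));
}

a code generator (CUDA, x87, whatever) could then walk such a tree at compile
time - an AST macro would simply extend the same idea to built-in types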



Re: DIP 50 - AST macros

2013-11-14 Thread dennis luehring

On 14.11.2013 10:12, Walter Bright wrote:

I've also seen the sheer awfulness of what happens with these systems over the
long term. The fascinating thing about this awfulness is people are well aware
of it in everyone's macro libraries but their own.


the same was said about templates AND string mixins before - but
currently it is only the wise guys who build the good/great templates/mixins
that everyone uses - no terror from the deep



I don't think the time has come for macros in D.


the DIP needs to grow with many more examples and scenarios - some sort
of long-term discussion DIP




Re: DIP 50 - AST macros

2013-11-14 Thread dennis luehring

On 14.11.2013 21:54, Walter Bright wrote:

(A lot of the C++ macro abuse looks like normal syntax to the user, but it
behaves in a way that the casual user would find completely unexpected.)


the same can be said about Boost Spirit templates



Re: Disassembly Tool

2013-11-14 Thread dennis luehring

On 14.11.2013 10:48, Namespace wrote:

Since the disassembly on Dpaste doesn't work for me anymore, I'm
looking for an alternative. Is there one? And I don't want
obj2asm, I'm not willing to pay 15$.



maybe:

distorm:
http://www.ragestorm.net/distorm/

ida freeware:
https://www.hex-rays.com/products/ida/support/download_freeware.shtml
(32bit only)

agner fog's objconv:
http://www.agner.org/optimize/#objconv


Re: DIP 50 - AST macros

2013-11-13 Thread dennis luehring

On 13.11.2013 09:34, luka8088 wrote:

What about something like this?

class Person {

   macro where (Context context, Statement statement) {
 // ...
   }


it is not generic - and being generic is the biggest goal to reach



Re: DIP 50 - AST macros

2013-11-13 Thread dennis luehring

On 13.11.2013 10:58, Regan Heath wrote:

Linq is often confused with LinqToSQL, the above was a description of what
happens in the latter.  If Person was an object representing a table from
a SQL database, then calling 'where' on it would translate into an
IQueryablePerson object which when iterated upon using foreach would
execute the SQL statement shown above and return the resulting rows as
someperson objects one by one to the foreach body.

Pure Linq is a DSL, an alternate syntax, which looks a lot like SQL, which
can translate to SQL but is not limited to SQL.  It could translate to any
alternate database syntax, or simply access a container supporting the
required operations.


linq allows construction AND querying of non-table-like hierarchical
data, so it's more of a (basetypes|object) hierarchy store/retrieval system
which can also work with the simpler, just-tables world of sql results


Re: DIP 50 - AST macros

2013-11-13 Thread dennis luehring

On 13.11.2013 11:41, dennis luehring wrote:

On 13.11.2013 10:58, Regan Heath wrote:

Linq is often confused with LinqToSQL, the above was a description of what
happens in the latter.  If Person was an object representing a table from
a SQL database, then calling 'where' on it would translate into an
IQueryablePerson object which when iterated upon using foreach would
execute the SQL statement shown above and return the resulting rows as
someperson objects one by one to the foreach body.

Pure Linq is a DSL, an alternate syntax, which looks a lot like SQL, which
can translate to SQL but is not limited to SQL.  It could translate to any
alternate database syntax, or simply access a container supporting the
required operations.


linq allows construction AND querying of non-table-like hierarchical
data, so it's more of a (basetypes|object) hierarchy store/retrieval system
which can also work with the simpler, just-tables world of sql results



but linq isn't good at graph operations like neo4j's cypher
http://www.neo4j.org/learn/cypher

which would also be very nice to have as a macro feature for traversing
references in D


Re: DIP 50 - AST macros

2013-11-13 Thread dennis luehring

On 14.11.2013 08:50, Jacob Carlborg wrote:

On 2013-11-14 02:56, deadalnix wrote:


I think the whole point of macro is to NOT add too much feature to the
language.

See for instance the example I gave before to create generator. This can
be added with extra language support (C# async/await is an example). But
with macro, the feature can be implemented as a library.

It is NOT achievable with expression templates.

I can say with certainty that the async/await mechanism is heavily used
in some codebases and with great benefit. It is being added to all
languages one after another.

The idea behind macros is that instead of adding new feature to the
language, the language should propose a toolkit to build these features
as library.


Agree, good points.


I don't think the time has come for macros in D. As discussed before, it
seems like filling the gap in existing feature is more important than
adding new ones. This should however be considered very seriously when
the gaps has been filled.


Unfortunately I don't think we will have a feature freeze from now until
those gaps has been filled. Most likely there will be added a couple of
small features that could have been solved by AST macros. The problem
with that is that each of these features, in themselves, are way to
small to add AST macros for. So instead we get new features and even
less chance of adding AST macros.

Example. There's a bug report about adding support for C++ namespaces.
That should be solvable with library code. It's just mangling and we
already have pragma(mangle).

pragma(mangle, namespace("foo::bar")) extern(C++) void x ();

The problem here is that x needs to be mangled as well.

pragma(mangle, namespace("foo::bar", "x")) extern(C++) void x ();

The above doesn't work due to forward reference errors.

pragma(mangle, namespace("foo::bar", "void function ()", "x")) extern(C++)
void x ();

The problem with the above is that we now need to repeat the signature
and the name of the function.

With AST macros:

macro namespace (Context context, Ast!(string) name, Declaration
declaration) { ... }

@namespace("foo::bar") extern (C++) void x ();

This works because the macro receives the whole declaration that is x
and x is replaced with whatever the macro returns:

pragma(mangle, "mangled_name_of_x") extern (C++) void x ();

Since it uses the same syntax as UDAs, it can look like real
namespaces in C++:

@namespace("foo::bar") extern (C++)
{
  void x ();
  void y ();
}



perfect for the DIP example section - more of these please :)


Re: DIP 50 - AST macros

2013-11-12 Thread dennis luehring

On 12.11.2013 17:39, Andrei Alexandrescu wrote:

Maybe the problem needs to be reformulated for D. I think an SQL mixin
that either stays unchanged (for DB engines) or translates to a D
expression (for native D data types) would be doable, nontrivial,
interesting, and instantly usable for people who already know SQL
without any extra learning. In other words... actually better than Linq.


"actually better than Linq" - a bold statement without knowing Linq deeply

linq allows construction and querying of non-table-like hierarchical
data, so it's more of an object-hierarchy store/retrieval system which can
also work with the simpler, just-tables world of sql results




Re: DIP 50 - AST macros

2013-11-11 Thread dennis luehring

One of our targets for AST macros should be the ability to
replicate roughly linq from c# / .net.

An example syntax for use with AST could be:

auto data = [5, 7, 9];
int[] data2;
query {
   from value in data
   where value >= 6
   add to data2
}

Could be unwrapped to:

auto data = [5, 7, 9];
int[] data2;
foreach(value; data) {
   if (value >= 6) data2 ~= value;
}


could you add the example to the DIP wiki page



Re: DIP 50 - AST macros

2013-11-11 Thread dennis luehring

On 11.11.2013 09:28, simendsjo wrote:

On Sunday, 10 November 2013 at 22:33:34 UTC, bearophile wrote:

Jacob Carlborg:


http://wiki.dlang.org/DIP50


I suggest to add some more use cases (possibly with their
implementation).

Bye,
bearophile


I agree examples would help a lot. Trying to define what
information actually exists within these types would also help a
lot.

In the first example, would Ast!(bool) be something like this?
opBinary!"=="
  left = opBinary!"+"
    left = Literal
      type = int
      value = 1
    right = Literal
      type = int
      value = 2
  right = Literal
    type = int
    value = 4

Would there be helpers for matching part of the structure?
The same applies to the other types used - what information
should they have?

As for examples, here's a couple of suggestions:
* Expression to prefix notation
* Expression to SQL
* AutoImplement properties (like C#)
* Number intervals, like int i = int[10..20]; where only 10 to
20 are legal values
* Discriminated union
* Pattern matching



can you add the example to the DIP?
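
(some of these can already be approximated without macros, which might be worth
showing in the DIP as the baseline - e.g. the number-interval idea as a wrapper
struct; a rough sketch with invented names, checked at runtime rather than in
the type system)

struct Bounded(int lo, int hi)
{
    private int _value;

    this(int v)
    {
        assert(v >= lo && v <= hi, "value out of range");
        _value = v;
    }

    alias _value this;   // behaves like an int in expressions
}

unittest
{
    auto i = Bounded!(10, 20)(15);   // ok
    assert(i + 1 == 16);
    // Bounded!(10, 20)(25);         // would trip the assert at runtime
}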


Re: DIP 50 - AST macros

2013-11-11 Thread dennis luehring

On 11.11.2013 13:36, Rikki Cattermole wrote:

On Monday, 11 November 2013 at 12:30:07 UTC, Jacob Carlborg wrote:

Yes, I still don't understand why you would want it as a
pragma. Be usable outside of macros?


Yes outside of macros would be useful. For example code like this
would become redundant:
pragma(msg, "Support for x is not implemented on platform y");
static assert(0);

Becoming:
pragma(error, "Support for x is not implemented on platform y");

Also pragma's core responsibility is to cause the compiler to do
something. In this case to say we hit an error during compilation
please tell the user/dev and die.

It is a hook to the compilers workings.

Currently working on getting this implemented. Nearly done with
it. Just got some extra spaces that shouldn't be in output.


but in macros the context information is much more interesting than
anything else - so pragma(error, ...) won't fully fit the need to
return errors with context - it could be much more than just a message


example: to help the compiler form better error messages - or maybe to
recover from deep macro expansions etc.


what you want is just pragma(error, ...) - break the compilation now;
Jacob is talking about the feedback-to-the-compiler thing...
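
(side note on the quoted snippet: static assert already takes a message, so the
pragma(msg) + static assert(0) pair can be collapsed today - what it cannot give
you is the richer context feedback discussed above)

version (Windows) {}
else version (linux) {}
else
    static assert(0, "Support for x is not implemented on this platform");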




Re: Start of dmd 2.064 beta program

2013-10-31 Thread dennis luehring

Must always use script_no1 or script_no1.d?


And maybe one day I have a lot of .py files that I intend to
replace with D scripts TRANSPARENTLY for their user.

Will D bow at me why I use the .py extension?

Is D trying to shoot his own foot? It really seems to succeed
quite well.

My boss is right: is just a toy pretending to be serious.


sorry, but this is a very stupid comment:

1. never ever has a language been successful (or not) because
of its file-extension behavior - maybe in your world

2. i hope there is no other tool around that tries to find/analyse/whatever
real Python programs by their extension - otherwise you need to change
that anyway


3. "My boss is right: is just a toy pretending to be serious" - maybe,
maybe not - but not because of your stupid file extension comments


thx




Re: Start of dmd 2.064 beta program

2013-10-31 Thread dennis luehring

On 31.10.2013 15:29, eles wrote:

On Thursday, 31 October 2013 at 14:28:05 UTC, dennis luehring
wrote:

3. My boss is right: is just a toy pretending to be serious -
maybe, maybe not - but not because of your stupid file
extension comments


It adds. Tell to my boss about that extensions and he will be
grateful for you providing him ONE MORE REASON to laugh. At me.


question: why are you using D at all if

1. Python works for you
2. Python doesn't suffer from the BIG-BIG file-extension problem
3. your laughing Boss tells you D is a toy

i don't get it

better try to find a more experienced, serious Boss



Re: Start of dmd 2.064 beta program

2013-10-31 Thread dennis luehring

On 31.10.2013 15:45, eles wrote:

On Thursday, 31 October 2013 at 14:39:34 UTC, dennis luehring
wrote:

On 31.10.2013 15:29, eles wrote:

On Thursday, 31 October 2013 at 14:28:05 UTC, dennis luehring
wrote:

3. My boss is right: is just a toy pretending to be serious


better try to find a more experienced, serious Boss


Do you offer yourself for his job?

Maybe because I don't want to have a code base written in several
languages?

And seriously, about your former argument about the importance of
the extension in being serious or not: accepting arbitrary
extension was the reason for C++ doom?


just 0.001% of it - but a clear definition is always better than a
vague "you should use .d as the extension"




Re: Start of dmd 2.064 beta program

2013-10-31 Thread dennis luehring

On 31.10.2013 15:45, eles wrote:

On Thursday, 31 October 2013 at 14:39:34 UTC, dennis luehring
wrote:

On 31.10.2013 15:29, eles wrote:

On Thursday, 31 October 2013 at 14:28:05 UTC, dennis luehring
wrote:

3. My boss is right: is just a toy pretending to be serious


better try to find a more experienced, serious Boss


Do you offer yourself for his job?


why should i?




Re: Start of dmd 2.064 beta program

2013-10-31 Thread dennis luehring

On 31.10.2013 16:01, eles wrote:

On Thursday, 31 October 2013 at 14:57:15 UTC, dennis luehring
wrote:

On 31.10.2013 15:45, eles wrote:

On Thursday, 31 October 2013 at 14:39:34 UTC, dennis luehring
wrote:

On 31.10.2013 15:29, eles wrote:

On Thursday, 31 October 2013 at 14:28:05 UTC, dennis luehring
wrote:

3. My boss is right: is just a toy pretending to be
serious


better try to find a more experienced, serious Boss


Do you offer yourself for his job?


why should i?


Then don't tell me what I should feel to do about my job.

'Cause you don't deserve other answer than why should I?



i don't see any chance/strategy to get D into your current development -
so if you don't want to code Python (I WANT pointers) anymore - try to
find a job where you can write C/C++ or D - or else your need for D (and
your real interest in getting your Boss on board) seems not to be big
enough - i would quit my job very fast if someone forced me to write
Python code most of the time - that's all




Re: Start of dmd 2.064 beta program

2013-10-31 Thread dennis luehring

On 31.10.2013 16:22, eles wrote:

On Thursday, 31 October 2013 at 15:13:20 UTC, dennis luehring
wrote:

On 31.10.2013 16:01, eles wrote:

On Thursday, 31 October 2013 at 14:57:15 UTC, dennis luehring
wrote:

On 31.10.2013 15:45, eles wrote:

On Thursday, 31 October 2013 at 14:39:34 UTC, dennis luehring
wrote:

On 31.10.2013 15:29, eles wrote:

On Thursday, 31 October 2013 at 14:28:05 UTC, dennis
luehring

i don't see any chance/strategy to get D into your current
development - so if you don't want to code Python (I WANT
pointers) anymore - try to
find a job where you can write C/C++ or D - or else your need
for D (and your real interest in getting your Boss on board)
seems not to be big enough - i would quit my job very fast if
someone forced me to write Python code most of the time - that's
all


Frankly, just stop advising me to take a new job. It is the kind
of advice that I really find intrusive and unbearable.


no problem :)

so tell the story: what would happen if D scripts could be without .d?
would your Boss then be more interested, or could you then introduce D scripts
silently - what would happen?




Re: Start of dmd 2.064 beta program

2013-10-31 Thread dennis luehring

On 31.10.2013 17:44, Leandro Lucarella wrote:

dennis luehring, on 31 October at 15:28 you wrote to me:

Must always use script_no1 or script_no1.d?

And maybe one day I have a lot of .py files that I intend to
replace with D scripts TRANSPARENTLY for their user.

Will D bow at me why I use the .py extension?

Is D trying to shoot his own foot? It really seems to succeed
quite well.

My boss is right: is just a toy pretending to be serious.

sorry, but this is a very stupid comment:

1. never ever has a language been successful (or not) because
of its file-extension behavior - maybe in your world

2. i hope there is no other tool around that tries to find/analyse/whatever
real Python programs by their extension - otherwise you need to
change that anyway

3. "My boss is right: is just a toy pretending to be serious" -
maybe, maybe not - but not because of your stupid file extension
comments


I think even when the wording isn't the best, what he says is true.
Sometimes it is hard to sell the language when things that are so trivial
and fundamental (as letting file names have arbitrary names, at least
for scripts) not only are broken, but are justified by the community.

That's definitely not serious and discouraging.



sorry for my wording - but a pressure sentence like "My boss is right: is
just a toy pretending to be serious" isn't fair either


Re: More on C++ stack arrays

2013-10-23 Thread dennis luehring

On 23.10.2013 15:59, Namespace wrote:

On Sunday, 20 October 2013 at 16:33:35 UTC, Walter Bright wrote:

On 10/20/2013 7:25 AM, bearophile wrote:

More discussions about variable-sized stack-allocated arrays
in C++, it seems
there is not yet a consensus:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf

I'd like variable-sized stack-allocated arrays in D.


They're far more trouble than they're worth.

Just use:

auto a = new T[n];

Stack allocated arrays are far more trouble than they're worth.
But what about efficiency? Here's what I often do, something
along the lines of:

T[10] tmp;
T[] a;
if (n <= 10)
a = tmp[0..n];
else
a = new T[n];
scope (exit) if (a != tmp) delete a;

The size of the static array is selected so the dynamic
allocation is almost never necessary.


Another idea would be to use something like this:
http://dpaste.dzfl.pl/8613c9be
It has a syntax similar to T[n] and is likely more efficient
because the memory is freed when it is no longer needed. :)



but it would still be nice to make the 4096 size a template parameter,
maybe defaulted to 4096 :)
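
roughly like this (a sketch with invented names - not the actual dpaste code,
and it ignores alignment/destructor details):

// the caller provides the fixed-size buffer, so the stack memory lives in its frame
T[] tempArray(T, size_t stackSize = 4096)(size_t n, ref T[stackSize] tmp)
{
    return n <= stackSize ? tmp[0 .. n] : new T[n];
}

void main()
{
    int[64] tmp;
    auto a = tempArray!(int, 64)(10, tmp);    // slices the stack buffer
    auto b = tempArray!(int, 64)(1000, tmp);  // falls back to the GC heap
    assert(a.length == 10 && b.length == 1000);
}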

