Re: Draft for DIP concerning destroy is up.

2018-05-27 Thread Go-Write-A-DIP via Digitalmars-d

On Saturday, 26 May 2018 at 02:25:30 UTC, 12345swordy wrote:


k

Though I posted a thread regarding my DIP, and no one seems 
to care.



Alex



When someone has an idea that you don't care about, you tell 
them to go write a DIP.


Apparently, that is the more acceptable 'etiquette', rather than 
telling them to FO.




Building a standalone druntime for LDC

2018-05-13 Thread A. Nicholi via Digitalmars-d

Hello,

I am trying to build LDC’s druntime by itself (without phobos), 
but it seems that druntime expects to be built alongside DMD, 
which is strange considering I’m building LDC’s version of the 
druntime. Make says: “No rule to make target 
'../dmd/src/osmodel.mak'”


Is it possible to build druntime by itself for LDC? Must I build 
LDC alongside it for every host platform, or may I just specify 
‘-defaultlib=’ with a distro-provided LDC? How should I go about 
doing this?


Re: Using D without libphobos

2018-04-28 Thread A. Nicholi via Digitalmars-d

On Friday, 27 April 2018 at 00:21:51 UTC, sarn wrote:
I think you've got the technical answer already (just don't 
link in phobos2) but I'll add my 2c that Phobosless programming 
isn't just possible but realistically doable.  It's a shame to 
go without Phobos because it's got so much higher-level 
functionality, but if you're okay with writing things from the 
ground up in C-like code, it works.  Most third-party D 
libraries depend on Phobos, though, so it's mostly DIY and 
using bindings to other languages.


Right, that’s the plan for the program itself. Even without D we 
would be talking to services as directly as possible, and third 
party libraries would come in as C code and be tweaked to work 
in-house. Performance is paramount with this, which is why I was 
so pleased to find out about the ‘-betterC’ switch in 
development, even though we may have gotten by with a minimal D 
runtime. The ecosystem at work here is really something else :)


On Friday, 27 April 2018 at 00:21:51 UTC, sarn wrote:

Have a look at the standard library here:
https://dlang.org/phobos/index.html

Basically, all the std.* is Phobos, and most of the core.* 
stuff needs linking with the D runtime library. However, 
there's a bunch of bindings to standard C stuff in core.stdc.* 
that only need the C runtime and standard library, and these 
come in really handy when you're not using Phobos.  (There are 
more platform-specific C bindings in core.sys.* that aren't 
listed on that page.)  Also, there are some other things in 
core.bitop and core.atomic (probably more) that don't need 
linking to druntime.


Right. So there isn't anything in core.* that belongs to 
libphobos, only to the D runtime? And std.* depends on both, 
correct? I just want to be sure there aren't edge cases or 
exceptions; it would be a handy rule of thumb.
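A minimal sketch of that rule of thumb (illustrative, not from the original posts): a program that imports only core.stdc.* bindings needs the C runtime and standard library, but nothing from Phobos. The `extern (C) main` here is an assumption to sidestep druntime's normal startup.

```d
// Uses only core.stdc.* bindings: C runtime yes, Phobos (std.*) no.
import core.stdc.stdio : printf;
import core.stdc.string : strlen;

extern (C) int main()
{
    // D string literals are zero-terminated, so .ptr is a valid C string
    auto msg = "no phobos here";
    printf("%s (%d code units)\n", msg.ptr, cast(int) strlen(msg.ptr));
    return 0;
}
```

Compiling with something like `ldc2 -betterC` (or linking without libphobos2) should be enough for a program of this shape.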


Re: Using D without libphobos

2018-04-26 Thread A. Nicholi via Digitalmars-d

On Thursday, 26 April 2018 at 06:32:14 UTC, Radu wrote:


LDC allows specifying default libs, something like 
`-defaultlib=phobos2-ldc,druntime-ldc`; you can remove phobos 
from the list.


Alright, that sounds simple enough. Thanks!

On Thursday, 26 April 2018 at 06:32:14 UTC, Radu wrote:
I think you can use any symbol from phobos that is 
compile-time only (like std.traits and std.meta), and you can 
take stuff from phobos and adapt it to your own needs. I use 
the compiler explorer (https://d.godbolt.org/) to check the 
generated code for symbols added.


Didn’t know Godbolt supported D. Thanks for sharing :)

On Thursday, 26 April 2018 at 06:32:14 UTC, Radu wrote:
Something to remember though, if you intend to commit to 
serious commercial use, you should be prepared to dig in and 
fix stuff that doesn't work, or add new functionality, or 
sponsor the foundation for support. D and LDC are opensource 
projects, and they have mostly good quality implementation, but 
they are not commercial grade top of the line products.


Right. We’re prepared to work around that, especially since 
we’ll be targeting platforms and architectures for which LDC 
boasts only fleeting support.


Re: Using D without libphobos

2018-04-26 Thread A. Nicholi via Digitalmars-d

On Thursday, 26 April 2018 at 03:53:54 UTC, Mike Franklin wrote:


I suggest reading the following 2 items before digging deeper:
https://dlang.org/blog/2017/08/23/d-as-a-better-c/
https://dlang.org/changelog/2.079.0.html#minimal_runtime


I didn’t know D had begun offering serious decoupling like that. 
With something like this, we may very well be able to avoid 
writing C entirely, at least in-house! Thank you for bringing 
that up.


On Thursday, 26 April 2018 at 03:53:54 UTC, Mike Franklin wrote:
The compiler uses the C compiler (unfortunately again) to do 
its linking; and it becomes evident that D is in some ways a 
layer on top of C.


You can compile some D programs without linking libphobos2, but 
it will require separate compilation and linking, because the 
compiler itself hard-codes the call to the linker (actually the 
C compiler, as demonstrated above).  Example 3 at 
https://dlang.org/changelog/2.079.0.html#minimal_runtime 
demonstrates this.


If you use that method, you won't be able to use certain 
features of D that have runtime implementations.  The obvious 
ones are classes, dynamic arrays, and exceptions.


I could go on, but I'd have to make some assumptions about what 
you're really after.  Feel free to ask more specific questions 
and I'll be happy to share what I know (or at least what I 
think I know; sometimes I'm wrong).


Mike


So in a way, the D runtime is similar to libstdc++, providing 
implementations of runtime language features. But it is also like 
C++ in that those language features can be avoided, correct? At 
least with the use of minimal D, I mean. This means that, as a 
language, there is enough granularity to theoretically provide as 
few or as many features as one desires for a given use case, 
making the penning of new C and C++ code redundant? Do I have 
that right?
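The granularity discussed above can be sketched (assuming LDC's `-betterC` mode, e.g. `ldc2 -betterC app.d`): features backed by druntime are rejected at compile time, while plain structs, static arrays, and core.stdc remain available.

```d
import core.stdc.stdio : printf;

struct Point { int x, y; }       // plain struct: no runtime support needed

extern (C) int main()
{
    auto p = Point(3, 4);
    int[4] buf = [1, 2, 3, 4];   // static array: fine
    int[] view = buf[];          // slicing existing memory: fine
    printf("%d %d %d\n", p.x, p.y, view[2]);

    // These would fail to compile under -betterC because they need
    // druntime (GC, TypeInfo, exception handling):
    //   auto a = new int[4];        // GC-allocated dynamic array
    //   throw new Exception("x");   // exceptions
    return 0;
}
```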


Using D without libphobos

2018-04-25 Thread A. Nicholi via Digitalmars-d

Hello,

I am working on a large cross-platform project, which will be 
written primarily in D, interfacing to C as necessary. To get 
finer control over memory safety, binary size, and use of the GC, 
we would like to exclude libphobos as a dependency in favor of 
our own code. The project is compiled using LDC.


I am not sure if this is possible, though, as it seems there are 
certain things in libphobos that are tightly coupled to the D 
runtime. There are several things in the core namespace that 
would be helpful for us (SIMD, C bindings, etc.), but I am not 
sure whether those are also part of libphobos along with the 
other namespaces.


How do I remove libphobos as a runtime dependency with ld and 
MSVC’s link.exe? Is it possible to decouple core from other parts 
of the runtime, and if so, how?


Regards,
A. Nicholi


Re: Error: @nogc function 'test.func2' cannot call non-@nogc delegate 'msg'

2017-12-10 Thread A Guy With a Question via Digitalmars-d
On Sunday, 10 December 2017 at 11:44:20 UTC, Jonathan M Davis 
wrote:
On Sunday, December 10, 2017 12:54:00 Shachar Shemesh via 
Digitalmars-d wrote:

void func1(scope lazy string msg) @nogc {
}

void func2(scope lazy string msg) @nogc {
    func1(msg);
}

What? Why is msg GC allocating, especially since I scoped the 
lazy? Why is msg even evaluated?


Something seems off here.


Well, I'm not exactly sure how lazy is implemented underneath 
the hood (other than the fact that it generates a delegate and 
potentially a closure), but based on the error message, it 
sounds like the delegate that's being generated isn't @nogc, so 
it can't be called within the function, which would be a 
completely different issue from allocating a closure.


Curiously, pure doesn't seem to have the same problem, and I 
would have guessed that it would given that @nogc does.


But if it matters that the delegate be pure or @nogc because 
the function is marked that way, then that's a bit of a 
problem. Presumably, there can only be one delegate (since the 
function isn't templated), and the same function could be 
called both with an expression that allocates and with one that 
doesn't. So, if the delegate is restricted like that, then that 
restricts the caller, which doesn't follow how non-lazy 
arguments work. An argument could be made that whether calling 
the delegate allocates memory or is pure doesn't matter even if 
the function being called is @nogc or pure, on the basis that 
it's conceptually just a delayed evaluation of the argument in 
the caller's scope. But for that to work, the generated 
delegate can't be checked for @nogc or pure or nothrow or any 
of that inside the function with the lazy parameter. Given that 
pure works but @nogc doesn't, it seems the compiler was made 
smart enough to ignore purity for lazy parameters but not smart 
enough to ignore the lack of @nogc.


As for scope, I don't know if it really applies to lazy 
parameters or not at this point. Without -dip1000, all it 
applies to is delegates, but lazy parameters are turned into 
delegates, so you would _think_ that scope would apply, but I 
don't know. scope has always been underimplemented, though 
Walter's work on -dip1000 is fixing that. Regardless, the error 
message makes it sound like scope and closures have nothing to 
do with the problem (though it could potentially be a problem 
once the @nogc problem with the delegate is fixed).


In any case, I think that it's pretty clear that this merits 
being reported in bugzilla. The fact that pure works while 
@nogc doesn't strongly indicates that something is off with 
@nogc - especially if scope is involved. Worst case, a fix will 
have to be lumped in with -dip1000, depending on what scope is 
supposed to be doing now and exactly how lazy is working 
underneath the hood, but I definitely think that your code 
should work.


- Jonathan M Davis


Why does D throw so many errors on code people don't write? I 
realize that "lowering" has some uses and I'm not advocating 
*not* doing it, but why not do it after a semantic pass? Why is 
it so consistently throwing semantic errors on code its users 
did not write...


It would cut down on the cryptic errors...
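One workaround for the @nogc-vs-lazy clash discussed above can be sketched like this (the names are illustrative, not from the original post): replacing the lazy parameter with an alias template parameter means the generated delegate's attributes are inferred per instantiation, so a non-allocating argument keeps the whole call chain @nogc.

```d
// Lazy-style evaluation via an alias parameter; attributes are inferred
// per instantiation instead of being fixed for one shared delegate.
string func1(alias msg)() @nogc
{
    // `msg` is only evaluated here, preserving the lazy behavior
    return msg();
}

string func2(alias msg)() @nogc
{
    return func1!msg();
}

void main() @nogc
{
    auto s = func2!(() => "a literal, no allocation")();
    assert(s.length > 0);
}
```

The trade-off is that every distinct argument expression creates a new template instantiation, which the single-delegate lazy design avoids.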


Re: Embedded Containers

2017-12-05 Thread A Guy With a Question via Digitalmars-d
On Tuesday, 5 December 2017 at 22:21:51 UTC, Jonathan M Davis 
wrote:
On Tuesday, December 05, 2017 22:09:12 A Guy With a Question 
via Digitalmars-d wrote:

Is there actually a difference between the c style cast and
cast(type)? Other than verbosity...


They're not the same. D's cast is not split up like C++'s casts 
are, but it's not exactly the same as C's cast either - e.g. 
like C++'s dynamic_cast, if a class to class conversion fails, 
you get null, which C's cast doesn't do. Also, I think that D's 
cast is pickier about what it will let you do, whereas C's cast 
is more likely to want to smash something into something else 
if you ask it even if it doesn't make sense. And of course, D's 
cast understands D stuff that doesn't even exist in C (like 
delegates). I don't know exactly what all of the differences 
are though.


Regardless, the reason for the verbosity is so that you can 
easily grep for casts in your code.


- Jonathan M Davis


That's the best reason for verbosity I've heard.
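A small illustration of the class-cast behavior mentioned above: D's cast(T) on a class reference behaves like C++'s dynamic_cast, yielding null when the downcast fails, which a C-style cast never does. The class names here are made up for the example.

```d
class Animal {}
class Dog : Animal {}
class Cat : Animal {}

void main()
{
    Animal a = new Dog;
    assert(cast(Dog) a !is null);  // downcast succeeds
    assert(cast(Cat) a is null);   // wrong class: null, not garbage
}
```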


Re: Embedded Containers

2017-12-05 Thread A Guy With a Question via Digitalmars-d

On Tuesday, 5 December 2017 at 20:38:01 UTC, Timon Gehr wrote:

On 05.12.2017 20:11, H. S. Teoh wrote:
On Tue, Dec 05, 2017 at 07:09:50PM +, Adam D. Ruppe via 
Digitalmars-d wrote:
On Tuesday, 5 December 2017 at 19:01:48 UTC, A Guy With a 
Question wrote:

alias Items(T) = Array!Item(T);


try:

Array!(Item!(T))

I'm not quite sure I understand how to create a generic 
container

interface or class in D.


Just using the parenthesis should help. The thing with A!B!C 
is that the reader can easily be confused: did you mean A!(B!C) 
or (A!B)!C or what? Now the compiler is a bit stupider about 
this than it should be IMO - it should be able to figure it out 
and the meaning be fairly clear with association - but it 
isn't, so you can and must write parens to clarify most of the 
time.


Here's an idea for a DIP: make '!' right-to-left associative 
(i.e., similar to the ^^ exponentiation operator), so that 
A!B!C is understood as A!(B!C).

Rationale: the most common use cases are of the A!(B!C) 
variety; it's pretty rare IME to need the (A!B)!C form, since 
usually a template expands to a type, which can then be passed 
to another template, i.e., A!(B!C).  The (A!B)!C form is when 
the template instantiated with B produces another template that 
takes another type argument.  There aren't many use cases for 
this that I can think of.
...


Curried templates are actually common enough in Phobos. (map, 
filter, etc.)


Though the point is probably moot, because in the current 
grammar you'd need parentheses as soon as the template argument 
is anything more than a single token, i.e., you can write A!int 
but you have to write A!(const(int)).  And in the cases where 
you actually want something of the form A!(B!C), usually the 
arguments are themselves pretty complicated, so they wouldn't 
benefit from top-level associativity anyway.



T



IMHO the inconsistency with function call syntax ought to kill 
this proposal. Also, it is a bit more confusing than it is 
useful.


Related: It's quite annoying that (A!B)!C does not work because 
the parser thinks it's a C-style cast and errors out even 
though C-style casts are not even valid D syntax.


Is there actually a difference between the c style cast and 
cast(type)? Other than verbosity...
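The parenthesization rules debated above, in code (the `Wrap` template is a made-up stand-in): `A!B!C` is rejected with "multiple ! arguments are not allowed", while explicit parens make the nesting unambiguous.

```d
import std.container.array : Array;

struct Wrap(T) { T value; }

void main()
{
    // Array!Wrap!int        // error: multiple ! arguments are not allowed
    Array!(Wrap!int) xs;     // fine: parens make the nesting explicit
    xs.insertBack(Wrap!int(7));
    assert(xs[0].value == 7);
}
```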


Re: Embedded Containers

2017-12-05 Thread A Guy With a Question via Digitalmars-d

On Tuesday, 5 December 2017 at 19:19:50 UTC, colin wrote:
On Tuesday, 5 December 2017 at 19:13:10 UTC, A Guy With a 
Question wrote:
On Tuesday, 5 December 2017 at 19:09:50 UTC, Adam D. Ruppe 
wrote:

[...]


Ok, so that worked. I still have the problem with importing 
though:


mypackage: Item

seems to generate the error:

"Error: undefined identifier 'Item'"

Which is weird, because I'm able to bring in Array through 
std.container.array: Array;


Is Item public in your package?


Yes. I fixed it by not referencing it by the package but by the 
file specific module I created. That worked. All errors are 
resolved now. Thanks!


I think maybe the import issue was because there was a circular 
import happening.


So I have a few sub modules:

   module
      item.d
      other.d
      package.d

where other.d uses Item from item.d. But I was pulling Item from 
package.d; when I pulled it directly from item.d, it compiled 
fine. So maybe it can't handle the circular referencing there.
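A sketch of the layout described above, spread across three hypothetical files (the `mypackage` name is assumed). For `import mypackage : Item;` to resolve, package.d has to publicly re-export the submodules; other.d importing item.d directly avoids going back through package.d.

```d
// mypackage/item.d
module mypackage.item;
interface Item(T) { @property T member(); }

// mypackage/other.d
module mypackage.other;
import mypackage.item;          // import the module directly, no cycle

// mypackage/package.d
module mypackage;
public import mypackage.item;   // re-export: `import mypackage : Item;` works
public import mypackage.other;
```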


Re: Embedded Containers

2017-12-05 Thread A Guy With a Question via Digitalmars-d

On Tuesday, 5 December 2017 at 19:09:50 UTC, Adam D. Ruppe wrote:
On Tuesday, 5 December 2017 at 19:01:48 UTC, A Guy With a 
Question wrote:

alias Items(T) = Array!Item(T);


try:

Array!(Item!(T))

I'm not quite sure I understand how to create a generic 
container interface or class in D.


Just using the parenthesis should help. The thing with A!B!C is 
that the reader can easily be confused: did you mean A!(B!C) or 
(A!B)!C or what? Now the compiler is a bit stupider about this 
than it should be IMO - it should be able to figure it out and 
the meaning be fairly clear with association - but it isn't, so 
you can and must write parens to clarify most of the time.


Ok, so that worked. I still have the problem with importing 
though:


mypackage: Item

seems to generate the error:

"Error: undefined identifier 'Item'"

Which is weird, because I'm able to bring in Array through 
std.container.array: Array;


Re: Embedded Containers

2017-12-05 Thread A Guy With a Question via Digitalmars-d


Ah crud, I posted this to the wrong forum. Sorry.


Embedded Containers

2017-12-05 Thread A Guy With a Question via Digitalmars-d

The following doesn't appear to be valid syntax. Array!Item!T

I get the following error:

"multiple ! arguments are not allowed"

Which is ok...I get THAT error, however, this does not work 
either:


alias Items(T) = Array!Item(T);

This gives me the error:

Error: function declaration without return type. (Note that 
constructors are always named `this`)


Item is defined as follows:

interface Item(T)
{
    @property T member();
}

That part compiles fine. However, even when I remove the 
aliasing, I can't import this interface. I get "Error: undefined 
identifier 'Item'"


I'm not quite sure I understand how to create a generic container 
interface or class in D. Half of how I expect it to work works, 
but the other half doesn't. The docs aren't too helpful. I'm not 
sure if it's a case where there's just too much to go through or 
if what I'm trying to do isn't really covered. Essentially I'm 
trying to create an array of this type 'Item' that has some 
generic members. I think people who come from C# will kind of get 
what I'm trying to do here, because I'm trying to port C# code.
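For what it's worth, a sketch of the fix for the alias error above: template instantiation always uses `!`, including on the right-hand side of an alias, so `Array!(Item!T)` is needed rather than `Array!Item(T)` (which parses as a function declaration).

```d
import std.container.array : Array;

interface Item(T)
{
    @property T member();
}

alias Items(T) = Array!(Item!T);   // instantiation, not a call

void main()
{
    Items!int xs;                  // an Array of Item!int references
    assert(xs.length == 0);
}
```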


Re: First Impressions!

2017-12-01 Thread A Guy With a Question via Digitalmars-d
On Friday, 1 December 2017 at 18:31:46 UTC, Jonathan M Davis 
wrote:
On Friday, December 01, 2017 09:49:08 Steven Schveighoffer via 
Digitalmars-d wrote:

On 12/1/17 7:26 AM, Patrick Schluter wrote:
> On Friday, 1 December 2017 at 06:07:07 UTC, Patrick Schluter 
> wrote:

>>  isolated codepoints.
>
> I meant isolated code-units, of course.

Hehe, it's impossible for me to talk about code points and 
code units without having to pause and consider which one I 
mean :)


What, you mean that Unicode can be confusing? No way! ;)

LOL. I have to be careful with that too. What bugs me even more 
though is that the Unicode spec talks about code points being 
characters, and then talks about combining characters for 
grapheme clusters - and this in spite of the fact that what 
most people would consider a character is a grapheme cluster 
and _not_ a code point. But they presumably had to come up with 
new terms for a lot of this nonsense, and that's not always 
easy.


Regardless, what they came up with is complicated enough that 
it's arguably a miracle whenever a program actually handles 
Unicode text 100% correctly. :|


- Jonathan M Davis


And dealing with that complexity can often introduce bugs of 
its own, because it's hard to get right. That's why sometimes 
it's easier just to simplify things and exclude certain ways of 
looking at the string.


Re: First Impressions!

2017-12-01 Thread A Guy With a Question via Digitalmars-d
On Friday, 1 December 2017 at 06:07:07 UTC, Patrick Schluter 
wrote:
On Thursday, 30 November 2017 at 19:37:47 UTC, Steven 
Schveighoffer wrote:

On 11/30/17 1:20 PM, Patrick Schluter wrote:
On Thursday, 30 November 2017 at 17:40:08 UTC, Jonathan M 
Davis wrote:
English and thus don't as easily hit the cases where their 
code is wrong. For better or worse, UTF-16 hides it better 
than UTF-8, but the problem exists in both.




To give just an example of what can go wrong with UTF-16: 
reading a file in UTF-16 and converting it to something else 
like UTF-8 or UTF-32. Reading block by block and hitting 
exactly an SMP codepoint at the buffer limit, high surrogate 
at the end of the first buffer, low surrogate at the start of 
the next. If you don't think about it => 2 invalid characters 
instead of your nice poop 💩 emoji character (emojis are in 
the SMP and they are more and more frequent).


iopipe handles this: 
http://schveiguy.github.io/iopipe/iopipe/textpipe/ensureDecodeable.html




It was only to give an example. With UTF-8, people who 
implement the low-level code generally think about multiple 
code units at the buffer boundary. With UTF-16 it's often 
forgotten. In UTF-16 there are also two other common pitfalls 
that exist in UTF-8 as well but are less consciously 
acknowledged: overlong encoding and isolated codepoints. So 
UTF-16 has the same issues as UTF-8, plus some more: endianness 
and size.


Most problems with UTF-16 are applicable to UTF-8. The only 
issue that isn't: if you are just dealing with ASCII, UTF-16 is 
a bit of a waste of space.
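The buffer-boundary pitfall described above can be sketched in a few lines: an SMP code point such as the 💩 emoji occupies two UTF-16 code units, so a fixed-size block read can land between them, and a block reader has to carry the trailing high surrogate over to the next block before decoding.

```d
import std.utf : decode;

void main()
{
    wstring text = "ok\U0001F4A9"w;   // 'o', 'k', then the 💩 emoji (SMP)
    assert(text.length == 4);         // the emoji takes two UTF-16 code units

    // A block boundary after 3 code units would split the pair:
    wchar w = text[2];
    assert(w >= 0xD800 && w <= 0xDBFF);   // lone high surrogate, undecodable

    // Decoding from index 2 consumes both units and yields the code point:
    size_t i = 2;
    dchar d = decode(text, i);
    assert(d == 0x1F4A9 && i == 4);
}
```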


Re: First Impressions!

2017-11-30 Thread A Guy With a Question via Digitalmars-d
On Thursday, 30 November 2017 at 17:56:58 UTC, Jonathan M Davis 
wrote:
On Thursday, November 30, 2017 03:37:37 Walter Bright via 
Digitalmars-d wrote:
Language-wise, I think that most of the UTF-16 is driven by the 
fact that Java went with UCS-2 / UTF-16, and C# followed them 
(both because they were copying Java and because the Win32 API 
had gone with UCS-2 / UTF-16). So, that's had a lot of 
influence on folks, though most others have gone with UTF-8 for 
backwards compatibility and because it typically takes up less 
space for non-Asian text. But the use of UTF-16 in Windows, 
Java, and C# does seem to have resulted in some folks thinking 
that wide characters means Unicode, and narrow characters 
meaning ASCII.



- Jonathan M Davis


I think it also simplifies the logic. You are not always 
looking to represent the codepoints symbolically; you are just 
trying to see what information is in them. So if you can 
practically treat a codepoint as the unit of data behind the 
scenes, the logic stays simple.


Re: First Impressions!

2017-11-30 Thread A Guy With a Question via Digitalmars-d
On Thursday, 30 November 2017 at 17:56:58 UTC, Jonathan M Davis 
wrote:
On Thursday, November 30, 2017 03:37:37 Walter Bright via 
Digitalmars-d wrote:

On 11/30/2017 2:39 AM, Joakim wrote:
> Java, .NET, Qt, Javascript, and a handful of others use 
> UTF-16 too, some starting off with the earlier UCS-2:

>
> https://en.m.wikipedia.org/wiki/UTF-16#Usage
>
> Not saying either is better, each has their flaws, just 
> pointing out it's more than just Windows.


I stand corrected.


I get the impression that the stuff that uses UTF-16 is mostly 
stuff that picked an encoding early on in the Unicode game and 
thought that they picked one that guaranteed that a code unit 
would be an entire character.


I don't think that's true, though. Haven't you always been able 
to combine two codepoints into one visual representation (Ä, 
for example)? To me it's still two characters to look for when 
going through the string, but the UI or text interpreter might 
choose to combine them. So in certain domains, such as trying 
to visually represent the character, a codepoint is not a 
character, if by "character" you mean the visual 
representation. But what we refer to as a character can morph 
depending on context. When you are running through the data in 
the algorithm behind the scenes, you care about the 
*information*, therefore the codepoint. And we are really just 
having a semantics battle if someone calls that a character.


Many of them picked UCS-2 and then switched later to UTF-16, 
but once they picked a 16-bit encoding, they were kind of stuck.


Others - most notably C/C++ and the *nix world - picked UTF-8 
for backwards compatibility, and once it became clear that 
UCS-2 / UTF-16 wasn't going to cut it for a code unit 
representing a character, most stuff that went Unicode went 
UTF-8.


That's only because C used ASCII and char was thus a byte. 
UTF-8 is in line with this, so literally nothing needs to 
change to get pretty much the same behavior. It makes sense. 
With this in mind, it actually might make sense for D to use it.







Re: First Impressions!

2017-11-30 Thread A Guy With a Question via Digitalmars-d
On Thursday, 30 November 2017 at 11:41:09 UTC, Walter Bright 
wrote:

On 11/30/2017 2:47 AM, Nicholas Wilson wrote:
As far as I can tell, pretty much the only users of UTF16 are 
Windows programs. Everyone else uses UTF8 or UCS32.
I assume you meant UTF32 not UCS32, given UCS2 is Microsoft's 
half-assed UTF16.


I meant UCS-4, which is identical to UTF-32. It's hard keeping 
all that stuff straight. Sigh.


https://en.wikipedia.org/wiki/UTF-32


It's also worth mentioning that the more I think about it, the 
UTF8 vs. UTF16 thing was probably not worth mentioning with the 
rest of the things I listed out. It's pretty minor and more of a 
preference.


Re: First Impressions!

2017-11-30 Thread A Guy With a Question via Digitalmars-d
On Thursday, 30 November 2017 at 10:19:18 UTC, Walter Bright 
wrote:

On 11/27/2017 7:01 PM, A Guy With an Opinion wrote:
+- Unicode support is good. Although I think D's string type 
should have probably been utf16 by default. Especially 
considering the utf module states:


"UTF character support is restricted to '\u' <= character 
<= '\U0010'."


Seems like the natural fit for me. Plus for the vast majority 
of use cases I am pretty guaranteed a char = codepoint. Not 
the biggest issue in the world and maybe I'm just being overly 
critical here.


Sooner or later your code will exhibit bugs if it assumes that 
char==codepoint with UTF16, because of surrogate pairs.


https://stackoverflow.com/questions/5903008/what-is-a-surrogate-pair-in-java

As far as I can tell, pretty much the only users of UTF16 are 
Windows programs. Everyone else uses UTF8 or UCS32.


I recommend using UTF8.


As long as you understand its limitations, I think most bugs 
can be avoided. Where UTF-16 breaks down is pretty well 
defined, and also super rare. I think UTF-32 would be great 
too, but it seems like just a waste of space 99% of the time. 
UTF-8 isn't horrible; I am not going to never use D because it 
uses UTF-8 (that would be silly), especially when wstring also 
seems baked into the language. However, it can complicate code, 
because you pretty much always have to assume character != 
codepoint outside of ASCII. I can see a reasonable person 
arguing that forcing you to assume character != code point is 
actually a good thing. And that is a valid opinion.
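The "character != code unit outside of ASCII" point above, in code: iterating a D string by char walks UTF-8 code units, while iterating by dchar decodes whole code points, and the two counts diverge as soon as a non-ASCII character appears.

```d
void main()
{
    string s = "é!";              // 'é' is 2 UTF-8 code units
    assert(s.length == 3);        // length counts code units

    size_t points = 0;
    foreach (dchar c; s)          // foreach over dchar auto-decodes
        ++points;
    assert(points == 2);          // two code points
}
```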


Re: Thoughts about D

2017-11-29 Thread A Guy With a Question via Digitalmars-d

On Thursday, 30 November 2017 at 00:23:10 UTC, codephantom wrote:
On Thursday, 30 November 2017 at 00:05:10 UTC, Michael V. 
Franklin wrote:
On Wednesday, 29 November 2017 at 16:57:36 UTC, H. S. Teoh 
wrote:


Doesn't this mean that we should rather focus our efforts on 
improving druntime instead of throwing out the baby with the 
bathwater with BetterC?


Exactly!  We should be making a better D, not a better C.

Mike


There is no better C, than C, full stop.

-betterC  should become ..   -slimD

But we really do need a focus on both -slimD and -bloatyD

For D to be successful, it needs to be a flexible language that 
enables programmer choice. We don't all have the same problems 
to solve.


C is not successful because of how much it constrains you.


I'm personally a big believer that the thing that will replace 
C is going to be something that is flexible, but more than 
anything prevents the security bugs that plague the web right 
now. Things like Heartbleed are preventable with safety 
guarantees that don't preclude fast code. Rust has some good 
ideas, but so does D.


Re: First Impressions!

2017-11-29 Thread A Guy With a Question via Digitalmars-d

On Tuesday, 28 November 2017 at 22:08:48 UTC, Mike Parker wrote:
On Tuesday, 28 November 2017 at 19:39:19 UTC, Michael V. 
Franklin wrote:


This DIP is related 
(https://github.com/dlang/DIPs/blob/master/DIPs/DIP1012.md) 
but I don't know what's happening with it.




It's awaiting formal review. I'll move it forward when the 
formal review queue clears out a bit.


How well does Phobos play with it? I'm finding, for instance, 
that it's not playing too well with nothrow. Things throw, and 
I don't understand why.


Re: First Impressions!

2017-11-28 Thread A Guy With an Opinion via Digitalmars-d

On Tuesday, 28 November 2017 at 16:24:56 UTC, Adam D. Ruppe wrote:


That doesn't quite work since it doesn't descend into 
aggregates. And you can't turn most of them off.


I take it adding those inverse attributes is no trivial thing?


Re: First Impressions!

2017-11-28 Thread A Guy With an Opinion via Digitalmars-d
On Tuesday, 28 November 2017 at 13:17:16 UTC, Steven 
Schveighoffer wrote:

https://github.com/schveiguy/dcollections


On Tuesday, 28 November 2017 at 03:37:26 UTC, rikki cattermole 
wrote:

https://github.com/economicmodeling/containers


Thanks. I'll check both out. It's not that I don't want to write 
them, it's just I don't want to stop what I'm doing when I need 
them and write them. It takes me out of my thought process.


Re: First Impressions!

2017-11-28 Thread A Guy With an Opinion via Digitalmars-d
On Tuesday, 28 November 2017 at 13:17:16 UTC, Steven 
Schveighoffer wrote:
This is likely because of Adam's suggestion -- you were 
incorrectly declaring a function that returned an immutable 
like this:



immutable T foo();

-Steve


That's exactly what it was, I think. As I stated before, I 
tried to do immutable(T), but I was drowning in so many errors 
at that point that I just took a step back. I'll try to 
refactor it back to using immutable. I honestly just didn't 
quite know what I was doing, obviously.
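The declaration issue referenced above, sketched with a made-up struct: `immutable T foo();` applies immutable as a storage class to the function (its `this`), not to the return type; parentheses scope it to the type.

```d
struct S
{
    int n = 3;

    immutable(int) errorCount() { return n; }  // returns immutable(int)
    // immutable int errorCount() { ... }      // would instead mark the
    //                                         // method itself immutable
}

void main()
{
    S s;
    assert(s.errorCount() == 3);
}
```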




Re: First Impressions!

2017-11-27 Thread A Guy With an Opinion via Digitalmars-d
On Tuesday, 28 November 2017 at 05:16:54 UTC, Michael V. Franklin 
wrote:
On Tuesday, 28 November 2017 at 04:48:57 UTC, A Guy With an 
Opinion wrote:


I'd be happy to submit an issue, but I'm not quite sure I'd be 
the best to determine an error message (at least not this 
early), mainly because I have no clue what it was yelling at 
me about. I only knew to add static because I told people my 
intentions and they suggested it. I guess having a 
non-statically-marked class is a valid feature imported from 
the Java world.


If this was on the forum, please point me to it.  I'll see if I 
can understand what's going on and do something about it.


Thanks,
Mike


https://forum.dlang.org/thread/vcvlffjxowgdvpvjs...@forum.dlang.org


Re: First Impressions!

2017-11-27 Thread A Guy With an Opinion via Digitalmars-d
On Tuesday, 28 November 2017 at 04:37:04 UTC, Michael V. Franklin 
wrote:
Please submit things like this to the issue tracker.  They are 
very easy to fix, and if I'm aware of them, I'll probably do 
the work.  But, please provide a code example and
offer a suggestion of what you would prefer it to say; it just 
makes things easier.


I'd be happy to submit an issue, but I'm not quite sure I'd be 
the best to determine an error message (at least not this 
early), mainly because I have no clue what it was yelling at me 
about. I only knew to add static because I told people my 
intentions and they suggested it. I guess having a 
non-statically-marked class is a valid feature imported from 
the Java world. I'm just not as familiar with that specific 
feature of Java, therefore I have no idea what the text really 
had to do with anything. Maybe appending "if you meant to make 
a static class" would have been helpful. I fiddled with Rust a 
little too, and it's what they tend to do very well: make 
verbose error messages.



We're not alone:  https://youtu.be/6_xdfSVRrKo?t=353


And he was so much better at articulating it than I was. Another 
C# guy though. :)




Re: First Impressions!

2017-11-27 Thread A Guy With an Opinion via Digitalmars-d

On Tuesday, 28 November 2017 at 04:24:46 UTC, Adam D. Ruppe wrote:

immutable(int) errorCount() { return ...; }


I actually did try something like that, because I remembered 
seeing the parens around the string definition. I think at that 
point I was just so riddled with errors I just took a step back 
and went back to something I know. Just to make sure I wasn't 
going insane.




Re: First Impressions!

2017-11-27 Thread A Guy With an Opinion via Digitalmars-d
On Tuesday, 28 November 2017 at 04:17:18 UTC, A Guy With an 
Opinion wrote:

On Tuesday, 28 November 2017 at 04:12:14 UTC, ketmar wrote:

A Guy With an Opinion wrote:

That is true, but I'm still unconvinced that making the 
person's program likely to error is better than initializing 
a number to 0. Zero is such a fundamental default for so many 
things. And it would be consistent with the other number 
types.
basically, default initializers aren't meant to give a "usable 
value", they meant to give a *defined* value, so we don't have 
UB. that is, just initialize your variables explicitly, don't 
rely on defaults. writing:


int a;
a += 42;

is still bad code, even if you know that `a` is guaranteed 
to be zero.


int a = 0;
a += 42;

is the "right" way to write it.

if you'll look at default values from this PoV, you'll see 
that NaN has more sense that zero. if there was a NaN for 
ints, ints would be inited with it too. ;-)


Eh...I still don't agree. I think C and C++ just gave that 
style of coding a bad rap due to the undefined behavior. But 
the issue is it was undefined behavior. A lot of language 
features aim to make things well defined and have less verbose 
representations. Once a language matures that's what a big 
portion of their newer features become. Less verbose shortcuts 
of commonly done things. I agree it's important that it's well 
defined, I'm just thinking it should be a value that someone 
actually wants some notable fraction of the time. Not something 
no one wants ever.


I could be persuaded, but so far I'm not drinking the koolaid 
on that. It's not the end of the world, I was just confused 
when my float was NaN.


Also, C and C++ didn't just have undefined behavior, sometimes it 
has inconsistent behavior. Sometimes int a; is actually set to 0.
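
For reference, D's defined defaults can be checked directly 
against each type's .init; this much is documented language 
behavior, not an assumption:

```d
import std.stdio;
import std.math : isNaN;

void main()
{
    int i;    // defined: always 0, unlike a C/C++ local
    float f;  // defined: always float.nan
    char c;   // defined: always 0xFF (an invalid UTF-8 byte)

    assert(i == 0 && int.init == 0);
    assert(f.isNaN && float.init.isNaN);
    assert(c == 0xFF);
    writeln("all defaults as defined by the language");
}
```

The pattern is the same throughout: each default is the "most 
obviously uninitialized" value available for the type, which is 
why float gets NaN while int, lacking such a value, gets 0.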


Re: First Impressions!

2017-11-27 Thread A Guy With an Opinion via Digitalmars-d

On Tuesday, 28 November 2017 at 04:12:14 UTC, ketmar wrote:

A Guy With an Opinion wrote:

That is true, but I'm still unconvinced that making the 
person's program likely to error is better than initializing a 
number to 0. Zero is such a fundamental default for so many 
things. And it would be consistent with the other number types.
basically, default initializers aren't meant to give a "usable 
value", they meant to give a *defined* value, so we don't have 
UB. that is, just initialize your variables explicitly, don't 
rely on defaults. writing:


int a;
a += 42;

is still bad code, even if you know that `a` is guaranteed 
to be zero.


int a = 0;
a += 42;

is the "right" way to write it.

if you'll look at default values from this PoV, you'll see that 
NaN has more sense that zero. if there was a NaN for ints, ints 
would be inited with it too. ;-)


Eh...I still don't agree. I think C and C++ just gave that style 
of coding a bad rap due to the undefined behavior. But the issue 
is it was undefined behavior. A lot of language features aim to 
make things well defined and have less verbose representations. 
Once a language matures that's what a big portion of their newer 
features become. Less verbose shortcuts of commonly done things. 
I agree it's important that it's well defined, I'm just thinking 
it should be a value that someone actually wants some notable 
fraction of the time. Not something no one wants ever.


I could be persuaded, but so far I'm not drinking the koolaid on 
that. It's not the end of the world, I was just confused when my 
float was NaN.


Re: First Impressions!

2017-11-27 Thread A Guy With an Opinion via Digitalmars-d
On Tuesday, 28 November 2017 at 03:37:26 UTC, rikki cattermole 
wrote:


It's on our TODO list.

Allocators need to come out of experimental and some form of RC 
before we tackle it again.


In the mean time https://github.com/economicmodeling/containers 
is pretty good.


That's good to hear.

I keep saying it, if you don't have unit tests built in, you 
don't care about code quality!




I just like not having to create a throwaway project to test my 
code. It's nice to just use unit tests for what I used to create 
console apps for, and then it forever ensures my code works the 
same!


You don't need to bother with them for most code :)


That seems to be what people here are saying, but that seems so 
sad...




Doesn't mean the other languages are right either.



That is true, but I'm still unconvinced that making the person's 
program likely to error is better than initializing a number to 
0. Zero is such a fundamental default for so many things. And it 
would be consistent with the other number types.




If you need a wstring, use a wstring!

Be aware Microsoft is alone in thinking that UTF-16 was 
awesome. Everybody else standardized on UTF-8 for Unicode.




I do come from that world, so there is a chance I'm just 
comfortable with it.






First Impressions!

2017-11-27 Thread A Guy With an Opinion via Digitalmars-d

Hi,

I've been using D for a personal project for about two weeks now 
and just thought I'd share my initial impression just in case 
it's useful! I like feedback on things I do, so I just assume 
others do too. Plus my opinion is the best on the internet! You 
will see (hopefully the sarcasm is obvious otherwise I'll just 
appear pompous). It would probably be better if I did a 
retrospective after my project is completed, but with life who 
knows if that will happen. I could lose interest or something and 
not finish it. And then you guys wouldn't know my opinion. I 
can't allow that.


I'll start off by saying I like the overall experience. I come 
from a C# and C++ background with a little bit of C mixed in. For 
the most part though, I work with C#, SQL and web technologies on 
a day to day basis. I did do a three year stint working with 
C/C++ (mostly C++), but I never really enjoyed it much. C++ is 
overly verbose, overly complicated, overly littered with poor 
legacy decisions, and too error prone. C# on the other hand has 
for the most part been a delight. The only problem is I don't 
find it to be the best when it comes to generative programming. 
C# can do some generative programming with its generics, but for 
the most part it's always struck me as more specialized for 
container types, and to do anything remotely outside of its 
purpose takes a fair bit of cleverness. I'm sick of being clever 
in that aspect.


So here are some impressions good and bad:

+ Porting straight C# seems pretty straightforward. Even some of 
the .NET framework, like files and unicode, have fairly direct 
counterparts in D.


+ D code so far is pushing me towards more "flat" code (for a 
lack of a better way to phrase it) and so far that has helped 
tremendously when it comes to readability. C# is kind of the 
opposite: with its namespace -> class -> method structure coupled 
with lock, using, etc., you tend to do a lot of nesting.
generally 3 '{' in before any true logic even begins. Then couple 
that with try/catch, IDisposable/using, locking, and then 
if/else, it can get quite chaotic very easily. So right away, I 
saw my C# code actually appear more readable when I translated it 
and I think it has to do with the flatness. I'm not sure if that 
opinion will hold when I delve into 'static if' a little more, 
but so far my uses of it haven't really dampened that opinion.


+ Visual D. It might be that I had poor expectations of it, 
because I read D's tooling was poor on the internet (and nothing 
is ever wrong on the internet), however, the combination of 
Visual D and DMD actually exceeded my expectations. I've been 
quite happy with it. It was relatively easy to set up and worked 
as I would expect it to work. It lets me debug, add breakpoints, 
and does the basic syntax highlighting I would expect. It could 
have a few other features, but for a project that is not 
corporate backed, it was really above what I could have asked for.


+ So far, compiling is fast. And from what I hear it will stay 
fast. A big motivator. The one commercial C++ project I worked on 
was a beast and could take an hour+ to compile if you needed to 
compile something fundamental. C# is fairly fast, so I've grown 
accustomed to not having to go to the bathroom, get a drink, 
etc...before returning to find out I'm on the linking step. I'm 
used to if it doesn't take less than ten seconds (probably less) 
then I prep myself for an error to deal with. I want this to 
remain.


- Some of the errors from DMD are a little strange. I don't want 
to crap on this too much, because for the most part it's fine. 
However occasionally it throws errors I still can't really work 
out why THAT is the error it gave me. Some of you may have saw my 
question in the "Learn" forum about not knowing to use static in 
an embedded class, but the error was the following:


Error: 'this' is only defined in non-static member functions

I'd say the errors so far are above some of the cryptic stuff C++ 
can throw at you (however, I haven't delved that deeply into D 
templates yet, so don't hold me to this yet), but in terms of 
quality I'd put it somewhere between C# and C++ in quality. With 
C# being the ideal.
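
For anyone who hits the same message, a minimal reproduction of 
the nested-class case (class names are mine; the exact wording 
varies by compiler version):

```d
class Outer
{
    int x;

    // A non-static nested class carries a hidden context pointer to
    // an Outer instance. Referencing it where no 'this' exists is
    // what produces messages like:
    //   "'this' is only defined in non-static member functions"
    static class Inner  // static: no enclosing instance required
    {
        int y;
    }
}

void main()
{
    auto inner = new Outer.Inner;  // fine once Inner is static
    assert(inner.y == 0);
}
```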


+ The standard library so far is really good. Nullable worked as 
I thought it should. I just guessed a few of the methods based on 
what I had seen at that point and got it right. So it appears 
consistent and intuitive. I also like the fact I can peek at the 
code and understand it by just reading it. Unlike with C++ where 
I still don't know how some of the stuff is *really* implemented. 
The STL almost seems like it's written in a completely different 
language than the stuff it enables. For instance, I figured out 
how to do packages by seeing it
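
For readers who haven't met it, the Nullable praised above lives 
in std.typecons; a small usage sketch (the function and its 
behavior are my own illustration):

```d
import std.typecons : Nullable;

// Hypothetical lookup: returns a value only for a known name.
Nullable!int findUser(string name)
{
    Nullable!int result;   // starts out "null" (isNull == true)
    if (name == "admin")
        result = 42;       // assigning makes it non-null
    return result;
}

void main()
{
    auto hit = findUser("admin");
    assert(!hit.isNull && hit.get == 42);
    assert(findUser("nobody").isNull);
}
```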

Re: Fun: Shooting yourself in the foot in D

2016-10-27 Thread A D dev via Digitalmars-d

On Thursday, 27 October 2016 at 19:49:16 UTC, Ali Çehreli wrote:


  
http://www.toodarkpark.org/computers/humor/shoot-self-in-foot.html


Some entries for reference:

C
- You shoot yourself in the foot.
- You shoot yourself in the foot and then nobody else can 
figure out what you did.


C++
- You accidentally create a dozen instances of yourself and 
shoot them all in the foot. Providing emergency medical 
assistance is impossible since you can't tell which are bitwise 
copies and which are just pointing at others and saying, 
"That's me, over there."


Python
- You shoot yourself in the foot and then brag for hours about 
how much more elegantly you did it than if you had been using C 
or (God forbid) Perl.


What would the entry for D be? :)

Ali


1) You pre-shoot the bullet while the gun is being made - for 
efficiency (CTFE). As a result, it misses the foot, which has not 
grown yet.


2) From that link:

http://www.toodarkpark.org/computers/humor/shoot-self-in-foot.html

this one is good:

Concurrent Euclid:

You shoot yourself in somebody else's foot.




What influenced D? - goals, ideas, concepts, language features?

2016-10-11 Thread A D dev via Digitalmars-d

Hi list,

I'm liking D as I keep using it (still new to it), and interested 
in how it evolved, hence this question.


I have seen the Wikipedia article about D:

https://en.wikipedia.org/wiki/D_(programming_language)

which mentions language influences (right sidebar). I've also 
read some other articles on dlang sites, but thought of posting 
this question, to hear interesting stuff from people who know 
more about this.


Thanks to all who reply.




Re: Recommended procedure to upgrade DMD installation

2016-08-06 Thread A D dev via Digitalmars-d

On Saturday, 6 August 2016 at 01:22:51 UTC, Mike Parker wrote:


DMD ships with the OPTLINK linker and uses it by default.



You generally don't need to worry about calling them directly


Got it, thanks.



Re: Recommended procedure to upgrade DMD installation

2016-08-05 Thread A D dev via Digitalmars-d

On Thursday, 4 August 2016 at 22:20:12 UTC, Seb wrote:

OT: 2.071.0 has been released in April - you may check more 
often ;-)




That - 2.071.0 - was actually a typo - I meant to type 2.071.1 - 
which I read (probably on a recent edition of This Week in D) has 
been released recently.


I never use(d) Windows, but I can tell you that there are no 
persistent settings. A DMD installation is completely independent 
in its folder. Hence you can safely install the new version in 
the same directory, but I would make 
at least a copy of the old version first - just in case any 
regression popped in.




The only configuration file is `dmd.conf` (for common flags), a 
user usually doesn't need to modify it, so I would be surprised 
if you did.


Thanks for the answers.




Re: Recommended procedure to upgrade DMD installation

2016-08-05 Thread A D dev via Digitalmars-d

On Friday, 5 August 2016 at 00:07:36 UTC, Mike Parker wrote:
If you're running the installer, it will uninstall the previous 
version for you. If you are doing it manually, I recommend you 
delete the existing folder rather than overwriting it.


Thanks for the info.

If you are compiling 64-bit programs, the installer will find 
the Microsoft tools for you. If you are installing manually, 
you need to edit sc.ini manually with the correct paths, so 
you'll want to back up the existing sc.ini first in that case 
(or if you've made any other edits to it).


What Microsoft tools are you referring to? Do you mean C 
toolchain build tools, like the linker, librarian, (N)MAKE, etc.? 
Does D depend on those on Windows? (I do remember reading a D 
doc, maybe the overview, that said something like that, or at 
least that it plays well with existing common toolchains.)


I don't remember all of the names of the Windows C build tools 
now (plus, they could have changed over time), have to do a bit 
of reading to brush up / update myself on that. I've done C dev 
on Windows (and DOS before it), including command line use of the 
MS compiler and linker (CL.exe, LINK.exe, IIRC), but all that was 
some years ago.


Thanks.



Recommended procedure to upgrade DMD installation

2016-08-04 Thread A D dev via Digitalmars-d

Hi group,

Somewhat new to D.

What is the recommended procedure, if any, to upgrade my DMD 
installation (on Windows, if that makes a difference)?


I.e. If I have 2.70.0 and I saw that 2.17.1 is out now, I can 
look at the list of bug fixes and any new features in the new 
version and decide whether I want to upgrade, or wait for another 
version with more changes that matter to me. That is not what my 
question is about. My question is about:


- should I just install the new version in the same directory, 
over the old version, or do I need to uninstall the old version 
first.


- if I have to first uninstall the old version, are there any 
persistent setting for the compiler that I need to save before 
the uninstall and then restore after the new install?


- any other things I need to take care of?

Thanks.



Re: Pitching an investment bank on using D for their bond analytics

2015-04-14 Thread D Denizen since a year via Digitalmars-d

On Tuesday, 14 April 2015 at 14:21:15 UTC, Benjamin Thaut wrote:


- what are potential factors that might make D a bad choice in 
this
scenario?  I would like to use D certainly - but it is of 
course much
more important that the client gets the best result, however 
it is done.




You would have to asess if the analysis you should create is 
time critical. If it is, a possible pitfall is the GC, and you 
should try to avoid it from the start. Replacing the GC with 
custom memory management is a lot of work if you do it 
afterwards. If the analysis is not time critical, I don't see 
any problems with D.


Kind Regards
Benjamin Thaut


My guess is it's not a major factor, but one can maybe design in 
the possibility of using custom allocators from the beginning. 
I.e. the part where latency matters does not deal with monster 
data sizes, and 
if there is a part where data sizes are large then probably 
latency will be less critical.  (Outside of HFT, big for this 
area is much smaller than what other people mean by big).


Re: Pitching an investment bank on using D for their bond analytics

2015-04-14 Thread D Denizen since a year via Digitalmars-d

On Tuesday, 14 April 2015 at 14:21:26 UTC, Rikki Cattermole wrote:

That too, but I was thinking more along the lines of writing 
some business logic. It is very possible outside of a cli app 
or even web that they would want to use c++ for the user 
interfacing.
The very worst-case scenario here is that they get to learn a 
new language + possibly write the business logic faster than 
normal.


After all, business logic probably isn't already designed as 
pseudo code.


Yes - the business logic is where I come in, so it makes sense 
from that perspective, too.


Re: Pitching an investment bank on using D for their bond analytics

2015-04-14 Thread D Denizen since a year via Digitalmars-d

John Colvin:


A couple of big pluses:

1) Ease of changing code. D codebases tend to feel more 
flexible than C++


Thank you - and yes, I agree.


2) Easy to transparently make use of highly optimised low-level 
code in high level constructs, whether that means carefully 
written D, inline asm or calling out to established C(++) 
libraries.


My guess is these guys aren't going to be writing much assembler, 
but everyone cares about performance at some point.




Possible pitfalls:

1) What systems are being targeted? D on obscure big-iron is a 
very different prospect to D on a bunch of x86 linux servers.


I will find out more soon, but I doubt it's old IBM mainframes 
(would guess linux, especially since it's largely a new project).




2) Added maintenance due to language/library changes, however 
minor. Not a particularly big deal IMO, particularly when you're 
writing a piece of software for in-house usage.


3) Limited support options. There aren't swathes of consultants 
available at any time to fix your urgent problems.


Yes - I think the support/hiring question will be something of a 
factor.  Let's see.


Re: Pitching an investment bank on using D for their bond analytics

2015-04-14 Thread D Denizen since a year via Digitalmars-d



The best case scenario has happened at least once before in a
similarly-based company.


Hi Iain.

What's best way to reach you by email for quick follow-up on this 
(if you would not mind - no hurry)?


Re: Pitching an investment bank on using D for their bond analytics

2015-04-14 Thread D Denizen since a year via Digitalmars-d

On Tuesday, 14 April 2015 at 13:17:09 UTC, Daniel Murphy wrote:
"D Denizen since a year"  wrote in message 
news:qpharcskwrbfgkuub...@forum.dlang.org...


- am I right in thinking C++ integration more or less works, 
except instantiating C++ templates from D?  what are the 
gotchas?


C++ integration can be made to work, as in data can be passed 
back and forth and most function calls will work.  The gotchas 
are everywhere, and getting non-trivial C++ interop working is 
a fairly advanced task that will require changes on both the 
C++ and D side.  Heavily templated code is especially difficult.


Is there anything anyone has on the gotchas for non-templated 
stuff?  I am sure this would be of more general interest, since 
everybody likely has some legacy code they want to connect with.


I get the impression the subset of C++ people often restrict 
themselves to tends not to favour heavy metaprogramming, but I 
don't know how much template stuff they actually tend to do at 
this place.  (Will find out).


Re: Pitching an investment bank on using D for their bond analytics

2015-04-14 Thread D Denizen since a year via Digitalmars-d

Rikki:

Just a thought, try getting them to use D for prototyping. 
Worst case, they will play around with D before settling on e.g. 
c++. Best case scenario they'll move it into production.


Good point.  Pick a small part of the project (especially 
something I 'prepared earlier') and get them to feel comfortable 
first.


Pitching an investment bank on using D for their bond analytics

2015-04-14 Thread D Denizen since a year via Digitalmars-d

Hi.

I have been here a year or so, and trust you will forgive my 
posting pseudonymously on this occasion.  If you guess who it is, 
please be kind enough not to say for now.


A friend has been invited to be a consultant for an investment 
bank that would like to build a set of analytics for fixed income 
products.  The team is currently quite small - about 5 C++ 
developers - and the idea is to start with a proof of concept and 
then build on it as there is further buy-in from the business.


Having been using D for a year or so, I am pretty comfortable 
that it can do the job, and likely much better than the C++ route 
for all the normal reasons.  I haven't experience of using D in a 
proper enterprise environment, but I think this group might be 
open to trying D and that I might be at least part-time involved.


I also have little experience in getting across the merits of 
this technology to people very used to C++, and so I have not yet 
built up a standard set of answers to the normal objections to 
'buying' that will crop up in any situation of this sort.


So I am interested in:

- what are the things to emphasize in building the case for 
trying D?  the most effective factors that persuade people are 
not identical with the technically strongest reasons, because 
often one needs to see it before one gets it.


- what are the likely pitfalls in the early days?

- what are potential factors that might make D a bad choice in 
this scenario?  I would like to use D certainly - but it is of 
course much more important that the client gets the best result, 
however it is done.


- am I right in thinking C++ integration more or less works, 
except instantiating C++ templates from D?  what are the gotchas?


(I appreciate there is not so much to go on, and much depends on 
specific factors).  But any quick thoughts and experiences would 
be very welcome.


Re: tail const ?

2014-10-30 Thread Simon A via Digitalmars-d
I don't know about syntax, but D sure needs first-class support 
of tail immutability.


Take this code for example:

---

immutable class C
{
void foo()
{
//
}
}

void main()
{
auto c = new C();
c.foo();
}

---

Compile it and get

Error: immutable method C.foo is not callable using a mutable 
object


Why?  Because foo() is immutable and demands an immutable this 
reference.  The C instance c is only tail immutable, which 
doesn't really count for anything.  (So, "new C()" instead of 
"new immutable(C)()" is legal but pretty much unusable, it seems.)


But why should *any* function require an immutable reference, as 
opposed to a tail immutable reference?  (Similarly for shared vs 
tail shared.)


Re: GDC & Pandaboard/QEMU & Framebuffer

2014-10-23 Thread John A via Digitalmars-d

Next up, running the ./testgl3 on QEMU causes a seg fault.

Ok, it's working:

import derelict.opengl3.gl3;
extern (C) void printf(const char*, ...);
void main()
  {
  printf("Start opengl: \n");
  DerelictGL3.load();
  printf("Start opengl: version %d\n", DerelictGL3.loadedVersion);
  }

prints this in QEMU

  Start opengl:
  Start opengl: version 11

Next, create a triangle.


Re: GDC & Pandaboard/QEMU & Framebuffer

2014-10-22 Thread John A via Digitalmars-d

On Thursday, 23 October 2014 at 04:12:45 UTC, John A wrote:

What am I missing?


To answer my own question, I put the source file in the wrong 
spot:


BAD:
$(ARM-GDC) $(INCDIR) -L$(LIBDIR) -lopengl3 -lutil -ldl testgl3.d 
-o testgl3


Good:
$(ARM-GDC) $(INCDIR) testgl3.d -L$(LIBDIR) -lopengl3 -lutil -ldl 
-o testgl3


Note: I had to add the -ldl because of unresolved dlopen, 
dlclose, etc. symbols.


Next up, running the ./testgl3 on QEMU causes a seg fault.


Re: GDC & Pandaboard/QEMU & Framebuffer

2014-10-22 Thread John A via Digitalmars-d
On Wednesday, 22 October 2014 at 07:49:24 UTC, Johannes Pfau 
wrote:
You also have to link against DerelictGL3 and maybe 
DerelictUtil:

-lDerelictGL3 -lDerelictUtil


Sorry for the confusion.
When I compile DerelictGL3 and DerelictUtil, I create archives 
libopengl3.a and libutil.a in a local 'lib' directory.


And when I add to the compilation line:
-L./lib -lopengl3 -lutil

I get the same link errors:

$ make testgl3
/home/arrizza/x-tools/arm-cortexa9_neon-linux-gnueabihf/bin/arm-cortexa9_neon-linux-gnueabihf-gdc 
-I./DerelictUtil/source -I./DerelictGL3/source -Llib -lopengl3 
-lutil testgl3.d -o opengl3

/tmp/ccLpFyC8.o: In function `_Dmain':
testgl3.d:(.text+0x40): undefined reference to 
`_D8derelict7opengl33gl311DerelictGL3C8derelict7opengl33gl317DerelictGL3Loader'
testgl3.d:(.text+0x44): undefined reference to 
`_D8derelict7opengl33gl311DerelictGL3C8derelict7opengl33gl317DerelictGL3Loader'
testgl3.d:(.text+0x58): undefined reference to 
`_D8derelict7opengl33gl311DerelictGL3C8derelict7opengl33gl317DerelictGL3Loader'
testgl3.d:(.text+0x5c): undefined reference to 
`_D8derelict7opengl33gl311DerelictGL3C8derelict7opengl33gl317DerelictGL3Loader'
/tmp/ccLpFyC8.o:(.data+0xc): undefined reference to 
`_D8derelict7opengl33gl312__ModuleInfoZ'

collect2: error: ld returned 1 exit status
make: *** [testgl3] Error 1

Now here's the strange part. If I dump the symbols in the 
libopengl3.a's, the name above exists:
$ nm libopengl3.a | grep 
_D8derelict7opengl33gl311DerelictGL3C8derelict7opengl33gl317DerelictGL3Loader
 B 
_D8derelict7opengl33gl311DerelictGL3C8derelict7opengl33gl317DerelictGL3Loader


What am I missing?



Re: GDC & Pandaboard/QEMU & Framebuffer

2014-10-21 Thread John A via Digitalmars-d

Next step for me, is to create a small opengl test app:
- does it cross-compile (on Ubuntu)?
- does it run in QEMU?
- does it run on a Pandaboard?


Here's the result:
DerelictGL3 and DerelictUtil cross-compiles ok with no gdc-arm 
failures.


On the other hand, a very simple app has linker errors:

import derelict.opengl3.gl3;
void main()
  {
  // DerelictGL3.load();
  // DerelictGL3.reload();
  printf("Start opengl: version %s\n", DerelictGL3.loadedVersion);
  }

I'm using -nophoboslib so there are linker errors related to 
phobos. If I don't use -nophoboslib I still get errors:


make testgl3
/home/arrizza/x-tools/arm-cortexa9_neon-linux-gnueabihf/bin/arm-cortexa9_neon-linux-gnueabihf-gdc 
-I./DerelictUtil/source -I./DerelictGL3/source -Llib  testgl3.d 
-o opengl3

/tmp/ccSSbl9j.o: In function `_Dmain':
testgl3.d:(.text+0x40): undefined reference to 
`_D8derelict7opengl33gl311DerelictGL3C8derelict7opengl33gl317DerelictGL3Loader'
testgl3.d:(.text+0x44): undefined reference to 
`_D8derelict7opengl33gl311DerelictGL3C8derelict7opengl33gl317DerelictGL3Loader'
testgl3.d:(.text+0x58): undefined reference to 
`_D8derelict7opengl33gl311DerelictGL3C8derelict7opengl33gl317DerelictGL3Loader'
testgl3.d:(.text+0x5c): undefined reference to 
`_D8derelict7opengl33gl311DerelictGL3C8derelict7opengl33gl317DerelictGL3Loader'
/tmp/ccSSbl9j.o:(.data+0xc): undefined reference to 
`_D8derelict7opengl33gl312__ModuleInfoZ'

collect2: error: ld returned 1 exit status
make: *** [testgl3] Error 1


I was stuck, so I started on a different front. I created a 
variant of the framebuffer test (in C for now) that sets every 
pixel in the framebuffer while I kept track of the time. The 
results are interesting. In QEMU:

 ./fbrate
 time 114 11ms 90 fps

on the Pandaboard:
 ./fbrate
 time 4 4ms 250 fps

(that's with -03 btw).

The idea is that I can now convert this to D and see what the 
frame rate is then.
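
A rough D translation of that C fill loop might look like the 
following; the device path, resolution, and 32-bpp layout are 
assumptions, and real code should query FBIOGET_VSCREENINFO 
instead of hard-coding them:

```d
import core.sys.posix.fcntl : open, O_RDWR;
import core.sys.posix.sys.mman : mmap, munmap, MAP_FAILED, MAP_SHARED,
                                 PROT_READ, PROT_WRITE;
import core.sys.posix.unistd : close;
import std.datetime.stopwatch : StopWatch;
import std.stdio : writefln;

void main()
{
    enum width = 1024, height = 768;          // assumed video mode
    enum size_t fbSize = width * height * 4;  // assumed 32 bpp

    int fd = open("/dev/fb0", O_RDWR);        // assumed device node
    assert(fd >= 0, "cannot open framebuffer");
    scope(exit) close(fd);

    auto mem = mmap(null, fbSize, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
    assert(mem !is MAP_FAILED, "mmap failed");
    scope(exit) munmap(mem, fbSize);
    auto fb = cast(uint*) mem;

    StopWatch sw;
    sw.start();
    foreach (i; 0 .. width * height)  // set every pixel, as in fbrate
        fb[i] = 0x00FF0000;           // XRGB red, layout assumed
    sw.stop();
    writefln("fill took %s ms", sw.peek.total!"msecs");
}
```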


But before I did, I tried to see if mouse events could be 
captured in a D app. I found out the QEMU distro has mev in it 
and so that was an even simpler first step. If that works, I 
would then find the (open) source code for mev and convert that 
to D. Unfortunately, it didn't work. The mouse capture in my 
version of QEMU is not working like I expect and so I don't see 
any mouse events coming out of QEMU. Haven't tried mev on the 
Pandaboard yet.


Any advice?




Re: GDC & Pandaboard/QEMU & Framebuffer

2014-10-12 Thread John A via Digitalmars-d

Fixed in master. Tag 1.0.10 for those using DUB.


Just did a git pull and recompile. It built clean. Mike thanks 
for the very quick response!


Next step for me, is to create a small opengl test app:
- does it cross-compile (on Ubuntu)?
- does it run in QEMU?
- does it run on a Pandaboard?

After that, I'll try the derelict gflw and see how that goes...


John


Re: GDC & Pandaboard/QEMU & Framebuffer

2014-10-12 Thread John A via Digitalmars-d
The last time I tested, all unittests and the testsuite were 
passing on ARM.


Very nice. I'd like to replicate what you did. I haven't looked 
at Phobos at all yet, I'm working on getting opengl up and going 
first.


Any chance you can post instructions on exactly how you did that? 
Or point me to a url with those instructions? That would be 
fantastic!


Example of some things I think are relevant:
- I'm assuming gdc or did you use ldc instead? Which version of 
gdc+gcc?
- Was Phobos cross-compiled or did you compile on the devkit 
itself?
- If cross-compiled, for which ARM architecture (cortex a8, a9, 
M3, other)?
- What was the host environment and its configuration (Linux vs 
Win)? If Linux, which version and did you have to install 
additional packages?

- Which repository and revision of Phobos did you use?
- Did you make any changes to the Phobos source to get it to 
compile?
- What were the exact command lines you used to compile Phobos? 
Can you supply a Makefile?
- What environment and configuration did you use to run the tests 
(Bare-metal, linux, qemu, etc.)? If not qemu, which devkit/board 
did you use (beagleboard, pandaboard, Raspberry PI, etc.)?

- What was the OS and configuration of the devkit?
- Did you have to update or install any additional libraries (.a 
or .so) onto the board?


Thanks in advance!
John


Re: GDC & Pandaboard/QEMU & Framebuffer

2014-10-12 Thread John A via Digitalmars-d
I would recommend using the packages from DerelictOrg [1] 
rather than Derelict 3, as the latter is no longer maintained.


I did get Derelict3 to compile. I had to use a Makefile instead 
of dub but all the files compiled cleanly with no warnings. I did 
not add any unit tests as of yet, so I don't know if anything 
runs.


I took your recommendation and cloned from DerelictOrg. It failed 
to compile in a couple of files: arb.d and ext.d. All the 
failures were due to the same error, e.g.:


source/derelict/opengl3/arb.d:933: error: user defined attributes 
cannot appear as postfixes


The original line in Derelict3 is:
  private __gshared bool _ARB_depth_buffer_float;
  bool ARB_depth_buffer_float() @property { return 
_ARB_depth_buffer_float; }


The line in DerelictOrg is:
  private __gshared bool _ARB_depth_buffer_float;
  bool ARB_depth_buffer_float() @nogc nothrow @property { return 
_ARB_depth_buffer_float; }


Re: GDC & Pandaboard/QEMU & Framebuffer

2014-10-10 Thread John A via Digitalmars-d


Nice, I have ldc working at pretty much the same level on 
linux/arm, tested on a Cubieboard2.  Have you tried running any 
of the druntime/phobos unit tests?

No, it's on my todo list.

One thing I'll do first is see if I can modify Derelict3 to be 
Phobos free. If so the opengl3 library will probably be more 
useful to me. There might even be a mid-point where only a 
portion of Phobos needs porting.


All but two modules of druntime, as of the dmd 2.066 release, 
pass for me, while only three modules pass from phobos.  I'll 
put up instructions for cross-compiling with ldc once I have 
more of it working, as cross-compiling with ldc is easier.


Cool. All useful work in the end...




Re: struct and default constructor

2014-10-10 Thread Simon A via Digitalmars-d

On Friday, 10 October 2014 at 09:58:54 UTC, Walter Bright wrote:

On 11/27/2011 11:53 AM, deadalnix wrote:
I wonder why struct can't have a default constructor. TDPL 
state that it is

required to allow every types to have a constant .init .


Having a .init instead of a default constructor has all kinds 
of useful properties:


1. the default object is trivially constructable and cannot fail

2. an easy way for other constructors to not have overlooked 
field initializers, so they get garbage initialized like in C++


3. generic code can rely on the existence of trivial 
construction that cannot fail


4. dummy objects can be created, useful for "does this compile" 
semantics


5. an object can be "destroyed" by overwriting it with .init 
(overwriting with 0 is not the same thing)


6. when you see:
T t;
in code, you know it is trivially constructed and will succeed

7. objects can be created using TypeInfo


Default constructors are baked into C++. I can't escape the 
impression that the desire for D default constructors comes 
from more or less trying to write C++ style code in D.


I feel that non-trivial default construction is a bad idea, as 
are the various methods people try to get around the 
restriction. For non-trivial construction, one can easily just 
make a constructor with an argument, or call a factory method 
that returns a constructed object.


I've often thought that most of the hidden pain of arbitrary
constructors in C++ could be avoided if C++ could take advantage
of functional purity.

D has native functional purity.  Couldn't you get the same
benefits that you listed by allowing default constructors but
requiring them to be pure?  Of course, you can get the same
outcome by initialising members using static pure functions or
various other helpers, so you could say that pure default
constructors are just syntactic sugar.

Pure default constructors do have some advantages for more
complex construction, though.  For the sake of example, say I
have a struct that uses a table of complex numbers, and for
unrelated reasons the real and imaginary parts are stored in
separate arrays.  I.e., I have to initialise multiple members
using one calculation.  Adding pure default constructors to D
would allow this to be implemented more cleanly and more
intuitively.
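As a concrete illustration of the .init-plus-factory pattern Walter describes (the type and names here are invented for the example, not taken from any library):

    import std.stdio;

    struct Temperature
    {
        double celsius = 0.0; // baked into Temperature.init at compile time

        // non-trivial construction goes through a named factory
        // instead of a default constructor
        static Temperature fromFahrenheit(double f)
        {
            return Temperature((f - 32) * 5.0 / 9.0);
        }
    }

    void main()
    {
        Temperature t; // trivially constructed, cannot fail: t == Temperature.init
        auto boiling = Temperature.fromFahrenheit(212);
        writeln(boiling.celsius); // prints 100
    }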


GDC & Pandaboard/QEMU & Framebuffer

2014-10-05 Thread John A via Digitalmars-d

Not sure where to post this; it's not a question, just some info.

I have created a set of pages here:

  http://arrizza.org/wiki/index.php/Dlang

with instructions which include:
  - how to create a GDC cross-compiler using crosstool-ng
  - how to create some sample applications to test out GDC
  - how to create a sample app that writes to the framebuffer via 
GDC

  - how to set up and run these apps on QEMU
  - how to run the same apps on a Pandaboard

There is nothing new here. Others have written it already, some 
of which worked as advertised for me, some didn't. I've just 
gathered it up, tried it out and wrote down very specific 
instructions about what I needed to do to get it to work. 
Hopefully it works well for you.


Note I use Ubuntu 12.04 for everything, so these pages won't help 
Windows folks.


John


Re: std.logger

2013-09-17 Thread Jose A Garcia Sancio

On Friday, 23 August 2013 at 20:11:38 UTC, H. S. Teoh wrote:
On Fri, Aug 23, 2013 at 12:41:45PM -0700, Andrei Alexandrescu 
wrote:

[...]

(Just hanging this to a random comment in this thread.) I think
there's some pretty good work on logging by myself and another
poster (Jose?) that has since gone abandoned. It included some 
nice

approach to conditional logging and had both compile-time and
run-time configurability.

[...]

Where's the code?


T


The code is here: 
https://github.com/jsancio/log.d/blob/master/src/log.d


For what it is worth, there was a lot of thought and effort that 
was put into the design, implementation and testing of it. If you 
are interested in the design goals of the std.log library then my 
recommendation is to learn about glog 
(http://google-glog.googlecode.com/svn/trunk/doc/glog.html). If 
the community decides that they like those design principles then 
I highly recommend you continue where std.log left off.


Thanks,
-Jose


Re: D game development: a call to action

2013-08-05 Thread Jonathan A Dunlap

Take a look at https://github.com/Jebbs/DSFML
The author, aubade, me and a few other guys are trying to maintain 
the project. I wrote a small tutorial on the wiki. And we have 
plans on documenting it, adding unittests and much more.


Great stuff, Borislav. The basic tutorial 
https://github.com/Jebbs/DSFML/wiki/Short-example helps a lot to 
grok how to get your feet wet. I'm looking forward to seeing more 
tutorials like this.


Re: D game development: a call to action

2013-08-05 Thread Jonathan A Dunlap
C) Derelict3 is basically just bindings over the C 
functions--you should
use any and all documentation, references, and tutorials 
against the C

libraries.


Thanks Justin for the response. Derelict3 does seem to be the 
library with the most activity by far. However, it's simply not 
good enough to just say that since these are just bindings that 
the documentation/examples in the host language is adequate. This 
is true under the assumption that: the developer already is 
comfortable in the D language to know the usage difference, are 
familiar with the lib API to know when these differences apply, 
or even know to translate dependent idioms outside the actual 
library usage (like loading an image from disk, to apply to a 
texture, may be a separate but related learned task to the 
library). If we are looking for a broader adoption of D, we 
should be in the mindset of someone who may not even have a 
background in computer science here.


D game development: a call to action

2013-08-05 Thread Jonathan A Dunlap
I am one of the few who have taken a keen interest in D for game 
development. The concise language and modern conveniences may be 
able to reduce many hours worth of development time off a game 
project, while making the code more maintainable. Aside from the 
core language, Dlang graphics bindings have a way to go before 
even earning acceptance in the indie gaming scene, but it's 
making great strides to get there.


The main challenge I've hit is the lack of any sort of path for 
adopting a media engine. Bindings (like SFML 
https://github.com/krzat/SFML-D) all suffer from:


A) No information about its current status: this is scary if its 
repo hasn't been updated in months.
B) Usually are complex to get working in a new project (manually 
building or searching for undocumented dependency DLLs)

C) Lack practical references and tutorials for "real world usage",
e.g. "how to create an OpenGL window" or "how to render a 
triangle"
versus something like "how to load from disk an image texture 
onto a quad and move it around using keyboard events"


SFML bindings are also in https://github.com/aldacron/Derelict3 
but I couldn't find a scrap of information on how to use it, how 
to compile correctly, or example usage. It's unclear if the 
library is even usable in its current state.


Please don't take this as blind criticism, but rather a plea to 
action for the community to provide better library documentation 
support for: current lib status, getting started adding it, and a 
general use tutorial/example. If we start doing this, it'll make 
a big impact for other game developers who are new to Dlang to 
adopt the language. Thanks for listening!


Re: Low-overhead components

2013-07-29 Thread Jonathan A Dunlap
Fixed, thanks. I wasn't sure if the post was ready for Reddit, 
as it's mainly some thoughts written down for advanced D users.


Thanks for writing this. Aside from the primary purpose of the 
article, I was able to learn along the way more about D's mixins 
because of it.


As a game architect, I love discovering ways to further abstract, 
reuse, improve speed, and aid readability... all stuff that 
mixins can help with. ;)


Re: Flame bait: D vs. Rust vs. Go Benchmarking

2013-07-29 Thread Jonathan A Dunlap
Blaming person X for that, famous or not, would be a ridiculous 
shifting

of responsibilities

==
That said, it is also clear that in any organization, 
attitudes, tone and style flow from the top down. (It's amazing 
how pervasive this is.)


Totally agree, I didn't mention blaming. Of course, everyone is 
free to express themselves and how others replicate their actions 
cannot be controlled. However fame, just like any power, should 
be responsibly used, but it's a personal choice as long as it 
doesn't cross unacceptable boundaries (e.g. personal attacks). 
I'm with Walter that ideally rules shouldn't be established as 
the natural maturing of a community lends itself to becoming 
stronger because of it.


Basically:
a) Don't promote or feed the fire of bad behavior, best to show 
by example
b) Remind/educate others of their influence when their actions 
are negatively affecting others... however don't command as it's 
their prerogative


Re: Flame bait: D vs. Rust vs. Go Benchmarking

2013-07-29 Thread Jonathan A Dunlap
That could give the impression that Linus frequently /uses 
obscenity/

as a /method/, which would be very, very misleading.


Zed Shaw also falls into this category. He is usually polite and 
civil during debates. However like Linus, he does sometimes throw 
around obscenity to express a particular point... a trait that 
other people try to emulate, thinking it's cool to use as a 
default method of asserting a point. Simply put, I believe the people 
who have the most respect or fame in the industry need to be the 
most careful about their expression. Just like any parent to 
child relationship, be aware that others may emulate your 
behavior and cross boundaries where you had carefully walked the 
line.


Are we getting better at designing programming languages?

2013-07-23 Thread Jonathan A Dunlap

Check it out: D is listed rather favorably on the chart..
http://redmonk.com/dberkholz/2013/07/23/are-we-getting-better-at-designing-programming-languages/


Re: Proof that D sucks!

2013-07-23 Thread Jonathan A Dunlap

O rly?? OMG D sux @ GC!


Are you mocking me? I've since totally made my peace with D's GC 
(especially now that I know how to explicitly deallocate refs). 
hehe




Re: [OT] Why mobile web apps are slow

2013-07-16 Thread Jonathan A Dunlap

You can also make use of library types for reference counting,

http://dlang.org/phobos/std_typecons.html#.RefCounted

And if you really, really need, also manual memory management 
by calling the C functions and letting the GC know not to track 
that memory.


http://dlang.org/phobos/core_memory.html#.GC.BlkAttr.NO_SCAN


Fascinating! I assume the Phobos RefCounted then avoids using the 
GC by utilizing GC.malloc (NO_SCAN) behind the scenes? Has anyone 
benchmarked an application using D's GC versus using RefCounted 
for critical paths?




Re: [OT] Why mobile web apps are slow

2013-07-10 Thread Jonathan A Dunlap
My 2cents: for D to be successful for the game development 
community, it has to be possible to mostly sidestep the GC or opt 
into a minimal one like ARC. Granted, this is a bit premature 
considering that OpenGL library support is still in alpha quality.


Re: Rust moving away from GC into reference counting

2013-07-10 Thread Jonathan A Dunlap
Interesting read on the subject of ARC and GC: 
http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/


It does seem that ARC would be a preferred strategy for D to 
pursue (even if it's a first pass before resorting to GC sweeps).


Re: Rust moving away from GC into reference counting

2013-07-10 Thread Jonathan A Dunlap

Opps, just saw that this link was already posted here:
http://forum.dlang.org/thread/krhjq8$2r8l$1...@digitalmars.com


Re: this is almost a workaround for the lack of named parameters

2013-03-24 Thread a a
> I'm using Thunderbird and just using "Reply to the message".


using gmail but doesn't work... (testing again via this message...) pls
ignore if this succeeds!


Re: Multiple return values...

2012-03-09 Thread a
I'm finding HEAPS of SIMD functions want to return pairs 
(unpacks in

particular): int4 (low, high) = unpack(someShort8);
Currently I have to duplicate everything: int4 low =
unpackLow(someShort8); int4 high = unpackHigh(someShort8);
I'm getting really sick of that, it feels so... last millennium.


It can also be really inefficient. For example ARM NEON has a vzip 
instruction that is used like this:


vzip.32 q0, q1

This will interleave elements of vectors in q0 and q1 in one 
instruction.


Re: std.collection lets rename it into std,ridiculous.

2012-02-20 Thread a
But if you only access it via push() and pop(), then there are 
no other references to the stack, so why should the GC 
reallocate it on append?


Because the GC doesn't know there aren't any other references to 
it unless you tell it with assumeSafeAppend.




Re: std.collection lets rename it into std,ridiculous.

2012-02-20 Thread a

auto a = [1,2,3,4,5];
auto b = a[0..3];
assumeSafeAppend(b);
b ~= 0;
writeln(a);

prints [1, 2, 3, 0, 5], so b is not reallocated in this case.


Just to clarify: this is not guaranteed to print [1, 2, 3, 0, 5], 
it only does
if there is still enough memory in a block allocated to b and it 
doesn't have

to be reallocated.


Re: std.collection lets rename it into std,ridiculous.

2012-02-20 Thread a
Nope, it doesn't. That's the power of D slices. When you create 
a
dynamic array, the GC allocates a certain amount of memory, 
let's call
it a "page", and sets the array to point to the beginning of 
this
memory. When you append to this array, it simply uses up more 
of this
memory. No reallocation actually occurs until the page is used 
up, at
which point the GC allocates a larger page to store the array 
in.


Yes, it does. When you take a slice of an array and then append
an element to it, the slice has to be reallocated or the element
in the original array after the end of the slice would be
modified. For example:

auto a = [1,2,3,4,5];
auto b = a[0..3];
b ~= 0;
writeln(a);

If b was not reallocated, the code above would print [1, 2, 3, 0, 
5],
which is probably not what one would expect. This would cause 
problems
if a function took an array as a parameter and then appended to it. 
If

you passed an array slice to such function you would have a
non-deterministic bug - when the appending would cause 
reallocation,

everything would be okay, but when it wouldn't, the function would
modify the array outside of the slice that was passed to it. 
Arrays

in D used to work like that IIRC. Someone in this thread mentioned
that you should use assumeSafeAppend for stack implementation (I 
did

not know about assumeSafeAppend before that). The following code:

auto a = [1,2,3,4,5];
auto b = a[0..3];
assumeSafeAppend(b);
b ~= 0;
writeln(a);

prints [1, 2, 3, 0, 5], so b is not reallocated in this case.
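Putting the pieces of this thread together, a stack that calls assumeSafeAppend so that a push following a pop can reuse the array's block instead of reallocating might look like this (a minimal sketch, not taken from any of the posts above):

    struct Stack(T)
    {
        private T[] data;

        void push(T elem)
        {
            // The popped tail is no longer referenced, so appending
            // over it is safe; without this call the runtime would
            // reallocate on every push that follows a pop.
            data.assumeSafeAppend();
            data ~= elem;
        }

        T pop()
        {
            T elem = data[$ - 1];
            data = data[0 .. $ - 1];
            return elem;
        }

        bool empty() const { return data.length == 0; }
    }

    unittest
    {
        Stack!int s;
        s.push(1); s.push(2);
        assert(s.pop() == 2);
        s.push(3); // reuses the block vacated by the pop
        assert(s.pop() == 3 && s.pop() == 1 && s.empty);
    }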


Re: std.collection lets rename it into std,ridiculous.

2012-02-20 Thread a

On Monday, 20 February 2012 at 10:44:03 UTC, Derek wrote:
On Mon, 20 Feb 2012 14:05:07 +1100, Andrei Alexandrescu 
 wrote:



We need to do the right thing.


Do we? Really?

That takes sooo long it encourages many 
home-baked-reinvented-wheels that might very well end up taking 
more time and effort from the community than incrementally 
creating a standard set of containers.  Does it really matter 
if our initial efforts for some of the expected data structures 
are less than 'The Right Thing'? If they meet 80%+ of our 
requirements and continue to evolve towards perfection, I'm 
pretty sure the community could cope with a NQR product for now.


You could use dcollections ( 
http://www.dsource.org/projects/dcollections ) if you need 
something now.


Re: std.collection lets rename it into std,ridiculous.

2012-02-20 Thread a

here's a
stack:

T[] stack;
void push(T elem) {
    stack ~= elem;
}
T pop() {
    T elem = stack[$-1];
    stack = stack[0..$-1];
    return elem;
}


Won't this reallocate every time you pop an element and then push 
a new one?





Re: [your code here]

2012-02-17 Thread a



Assuming that by "any" you mean "any particular", you would 
have to read all the lines first. Otherwise, if the code 
selects the first line with probability 1/K, then I can just 
input some other number of lines.


I'm not sure if I understood your post correctly, but it seems to 
me that you didn't read the code carefully. The code selects the 
last read line with probability 1/n, where n is the number of 
strings already read, not the number of all strings. Say you 
have already read n strings and they are all selected with 
probability 1/n. Now you read a new string and select it with 
probability  1 / (n+1). The probability that you didn't select 
the new string is then n / (n + 1) and the first n strings are 
selected with probability 1 / n * n / (n + 1) = 1 / (n + 1), so 
you now have n+1 lines all selected with probability 1 / (n + 1). 
So if the property that input strings are all selected with equal 
probability holds for n, it must also hold for n+1. Since it 
obviously holds for n = 1 (we just select the only string with 
probability 1 in that case), it must hold for all n.
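The inductive argument above is the classic reservoir-sampling invariant; a hypothetical reworking of the idea as code (not the snippet originally posted in the thread) could look like this:

    import std.random : uniform;

    // Reservoir sampling with k = 1: each of the n items seen so far
    // is the selected one with probability 1/n, using O(1) memory.
    T pickRandom(T, R)(R items)
    {
        T chosen;
        size_t n = 0;
        foreach (item; items)
        {
            ++n;
            if (uniform(0, n) == 0) // keep the new item with probability 1/n
                chosen = item;
        }
        return chosen;
    }

    unittest
    {
        auto line = pickRandom!string(["a", "b", "c"]);
        assert(line == "a" || line == "b" || line == "c");
    }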




Re: D Compiler port Android

2012-02-16 Thread a
On Thursday, 16 February 2012 at 19:30:43 UTC, Luís Carlos 
Moreira da Costa wrote:


Dear friends

I wonder if someone is carrying the D for Android?

For I am developing a database language embedded in D, and 
would like to place on Android OS...


Cheers,

Luís Carlos Moreira da Costa
Eclipse RAP, RCP, eRCP, GEF, GMF, OSGi, Virgo, Gemini, Spring, 
Scala, Fantom, Haskell, Qt and D (Digital Mars)

Opal Committer
Eclipse Regional Communities/Brazil
http://wiki.eclipse.org/Regional_Communities/Brazil
http://eclipsebrazil.wordpress.com
http://www.osgi.org/About/Members


There is a thread in D.gnu with instructions on how to build GDC 
for android:

http://forum.dlang.org/thread/20120204203109.26c9a80b@jpf-laptop
The compiler works, but the druntime currently doesn't. You can 
get programs to work as long as they don't use anything that 
needs druntime support, such as the GC or Phobos.


Re: std.simd module

2012-02-06 Thread a

used like this (a* are inputs and r* are outputs):

transpose(aX, aY, aZ, aW, rX, rY, rZ, rW);



... the problem is, without multiple return values (come on, D 
should have

multiple return values!), how do you return the result? :)


Maybe those functions could be used to implement the functions 
that take

and return structs.



Yes... I've been pondering how to do this properly for ages 
actually.
That's the main reason I haven't fleshed out any matrix 
functions yet; I'm

still not at all sold on how to represent the matrices.
Ideally, there should not be any memory access. But even if 
they pass by
ref/pointer, as soon as the function is inlined, the memory 
access will

disappear, and it'll effectively generate the same code...


I meant having functions that would return through reference 
parameters. The transpose function above would have signature 
transpose(float4, float4, float4, float4, ref float4, ref float4, 
ref float4, ref float4).


Sure. I wasn't sure how useful they were in practise... I 
didn't want to
load it with countless silly permutation routines so I figured 
I'll add
them by request, or as they are proven useful in real world 
apps.
What would you typically do with the interleave functions at a 
high level?
Sure you don't just use it as a component behind a few actually 
useful

functions which should be exposed instead?


I think they would be useful when you work with arrays of structs 
with two elements such as complex numbers. For example to 
calculate a square of a complex array you could do:


for(size_t i = 0; i < a.length; i += 2)
{
   float4 first  = a[i];
   float4 second = a[i + 1];
   float4 re = deinterleaveLow(first, second);
   float4 im = deinterleaveHigh(first, second);
   float4 re2 = re * re - im * im;
   float4 im2 = re * im;
   im2 += im2;                    // 2 * re * im
   a[i] = interleaveLow(re2, im2);
   a[i + 1] = interleaveHigh(re2, im2);
}

Interleave and deinterleave can also be useful when you want to 
shuffle data in some custom way. You can't cover all possible 
permutations of elements over multiple vectors in a library 
(unless you do something like A* search at compile time and 
generate code based on that - but that would probably be way too 
slow), but you can expose at least the capabilities that are 
common to most platforms, such as interleave and deinterleave.





Re: std.simd module

2012-02-06 Thread a
There's a thread in d.gnu with Linux and MinGW cross compiler 
binaries.


I didn't know that, thanks.



Re: std.simd module

2012-02-06 Thread a
True, I have only been working in x86 GDC so far, but I just 
wanted to get

feedback about my approach and API design at this point.
It seems there are no serious objections, I'll continue as is.


I have one proposal about API design of matrix operations. Maybe 
there could be functions that would take row vectors as 
parameters in addition to those that take matrix structs. That 
way one could call matrix functions on data that isn't stored as 
matrix structures without copying. So for example for the 
transpose function there would also be a function that would be 
used like this (a* are inputs and r* are outputs):


transpose(aX, aY, aZ, aW, rX, rY, rZ, rW);

Maybe those functions could be used to implement the functions 
that take and return structs.


I also think that interleave and deinterleave operations would be 
useful. For four element float vectors those can be implemented 
with only one instruction at least for SSE (using unpcklps, 
unpckhps and shufps) and  NEON (using vuzp and vzip).



I have an
ARM compiler too now, so I'll be implementing/testing against 
that as

reference also.


Could you please tell me how did you get the ARM compiler to work?


Re: std.simd module

2012-02-06 Thread a

On Saturday, 4 February 2012 at 23:15:17 UTC, Manu wrote:

First criticism I expect is for many to insist on a class-style 
vector
library, which I personally think has no place as a low level, 
portable API.
Everyone has a different idea of what the perfect vector lib 
should look
like, and it tends to change significantly with respect to its 
application.


I feel this flat API is easier to implement, maintain, and 
understand, and
I expect the most common use of this lib will be in the back 
end of peoples

own vector/matrix/linear algebra libs that suit their apps.

My key concern is with my function names... should I be worried 
about name
collisions in such a low level lib? I already shadow a lot of 
standard

float functions...
I prefer them abbreviated in this (fairly standard) way, keeps 
lines of
code short and compact. It should be particularly familiar to 
anyone who

has written shaders and such.


I prefer the flat API and short names too.


Opinions? Shall I continue as planned?


Looks nice. Please do continue :)

You have only run this on a 32 bit machine, right? Cause I tried 
to compile this simple example and got some errors about 
converting ulong to int:


auto testfun(float4 a, float4 b)
{
   return swizzle!("yxwz")(a);
}

It compiles if I do this changes:

566c566
<     foreach(i; 0..N)
---
>     foreach(int i; 0..N)
574c574
<     int i = countUntil(s, swizzleKey[0]);
---
>     int i = cast(int)countUntil(s, swizzleKey[0]);
591c591
<     foreach(j, c; s) // find the offset of the
---
>     foreach(int j, c; s) // find the offset of the






Re: [xmlp] the recent garbage collector performance improvements

2012-02-02 Thread a

But the day may come when your
smartphone *can* literally have a 22" monitor (that folds into 
a pocket

sized card).


T


This doesn't sound very practical.




Re: [xmlp] the recent garbage collector performance improvements

2012-02-01 Thread a
It looks like someone wrote it a year ago, but it was never added 
to phobos: 
http://www.digitalmars.com/d/archives/digitalmars/D/announce/std.xml2_candidate_19804.html 
.


On Thursday, 2 February 2012 at 04:39:11 UTC, dsimcha wrote:
Wait a minute, since when do we even have a std.xml2?  I've 
never heard of it and it's not in the Phobos source tree (I 
just checked).


On Thursday, 2 February 2012 at 00:41:31 UTC, Richard Webb 
wrote:

On 01/02/2012 19:35, dsimcha wrote:

I'd be very
interested if you could make a small, self-contained test 
program to use

as a benchmark.




The 'test' is just

/
import std.xml2;

void main()
{
 string xmlPath = r"test.xml";

 auto document = DocumentBuilder.LoadFile(xmlPath, false, 
false);

}

/

It's xmlp that does all the work (and takes all the time).


I'll see about generating a simple test file, but basically:

5 top level nodes
each one has 6 child nodes
each node has a single attribute, and the child nodes each 
have a short text value.



Parsing the file with DMD 2.057 takes ~25 seconds

Parsing the file with DMD 2.058(Git) takes ~6.1 seconds

Parsing the file with DMD 2.058, with the GC disabled during 
the LoadFile call, takes ~2.2 seconds.



For comparison, MSXML6 takes 1.6 seconds to load the same file.




Re: indent style for D

2012-01-29 Thread a

On Sunday, 29 January 2012 at 14:42:16 UTC, Daniel Murphy wrote:

http://media.comicvine.com/uploads/6/67698/1595201-oh_look_its_this_thread_again_super.jpg


That's nothing. If you want to see a truly great bikeshedding 
thread, go there:

http://www.reddit.com/r/programming/comments/p1j1c/tabs_vs_spaces_vs_both/


Re: Alternative template instantiation syntax

2012-01-27 Thread a

On Friday, 27 January 2012 at 22:02:55 UTC, F i L wrote:

Timon Gehr wrote:

alias MyFoo = Foo!q{

}


This syntax makes a lot of sense, but why have the '=' at all? 
Why not just:


alias Num float;
alias MyFoo Foo!q{

};


I guess one reason is that it fits better with the rest of the 
language, it feels more natural. Also, wouldn't your proposal 
require making the current syntax illegal and breaking pretty 
much all of existing D code?


Re: FFT in D (using SIMD) and benchmarks

2012-01-25 Thread a

On Wednesday, 25 January 2012 at 18:36:35 UTC, Manu wrote:

Can you paste disassembly's of the GDC code and the G++ code?
I imagine there's something trivial in the scheduler that GDC 
has missed.


Like the other day I noticed GDC was unnecessarily generating a 
stack frame

for leaf functions, which Iain already fixed.


I've uploaded it here:

https://github.com/downloads/jerro/pfft/disassembly.zip

For g++ there is the disassembly of the entire object file. For 
gdc that included parts of the standard library and was about 
170k lines so I coppied just the functions that took the most 
time. I have also included percentages of time taken by functions 
for "./bench 22". I noticed that 2% of time is taken by memset. 
This could be caused by static array initialization in 
fft_passes_stride, so I'll void initialize static arrays there.


For very small sizes the code compiled with g++ is probably 
faster because it's more aggressively inlined.


I'd also be interested to try out my experimental std.simd 
(portable)
library in the context of your FFT... might give that a shot, I 
think it'll

work well.


For that you probably only need to replace the code in sse.d 
(create stdsimd.d or something). The type that you pass as a 
first parameter for the FFT template should define at least the 
following:


- an alias "vec" for the vector type
- an alias "T" for the scalar type
- an enum vec_size which is the number of scalars in a vector
- a function "static vec scalar_to_vector(T)", which takes a 
scalar and returns a vector with all elements set to that scalar
- a function "static void bit_reverse_swap_16(T * p0, T * p1, 
T * p2, T * p3, size_t i1, size_t i2)"
- a function "static void bit_reverse_16(T * p0, T * p1, T * p2, 
T * p3, size_t i)"


You can start with using Scalar!T.bit_reverse_16 and 
Scalar!T.bit_reverse_swap_16 from fft_impl.d but that won't be 
very fast. Those two functions do the following:


bit_reverse_16: This one reads 16 elements - four from p0 + i, 
four from p1 + i, four from p2 + i and four from p3 + i. Then it 
"bit reverses" them - this means that for each pair of indices j 
and k such that k has the reverse
bits of j, it swaps element at j and element at k. I define the 
index here so that the element that was read from address pn + i 
+ m has index n * 4 + m.
After bit reversing them, it writes the elements back. You can assume that
assume that
(p0 + i), (p1 + i), (p2 + i) and (p3 + i) are all aligned to 
4*T.sizeof.


bit_reverse_swap_16: This one reads 16 elements from  (p0 + i), 
(p1 + i), (p2 + i) and (p3 + i)
and bitreverses them and also the elements from (p0 + j), (p1 + 
j), (p2 + j) and (p3 + j)
and bitreverses those. Then it writes the elements that were read 
from offsets i to offsets j and vice versa. You can assume that 
(p0 + i), (p1 + i), (p2 + i) and (p3 + i),
(p0 + j), (p1 + j), (p2 + j) and (p3 + j) are all aligned to 
4*T.sizeof.


If you want to speed up the fft a bit for large sizes (those that 
don't fit in L1 cache), you can also write those functions:


- static void interleave(int n)(vec a, vec b, ref vec c, ref vec 
d)
Here n will be a power of two larger than 1 and less or equal to 
vec_size.
The function breaks vector a into n equally sized parts and does 
the same
for b, then interleaves those parts and writes them to c,d. For 
example:
for vec_size = 4 and n = 4 it should write (a0, b0, a1, b1) to c 
and (a2, b2, a3, b3) to d. For vec_size = 4 and n = 2 it should 
write (a0, a1, b0, b1) to c and (a2,a3,b2,b3) to d.


- static void deinterleave(int n)(vec a, vec b, ref vec c, ref 
vec d)

This is an inverse of interleave!n(a, b, c, d)

- static void complex_array_to_real_imag_vec(int n)(float * arr, 
ref vec rr, ref vec ri)
This one reads n pairs from arr. It repeats the first element of 
each pair
vec_size / n times and writes them to rr. It also repeats the 
second element
of each pair vec_size / n times and writes them to ri. For 
example: if vec_size
is 4 and n is 2 then it should write (arr[0], arr[0], arr[2], 
arr[2])
to rr and (arr[1], arr[1], arr[3], arr[3]) to ri. You can assume 
that arr

is aligned to 2*n*T.sizeof.

The functions interleave, deinterleave and 
complex_array_to_real_imag_vec
are called in the inner loop, so if you can't make them very 
fast, you should
just omit them. The algorithm will then fall back (it checks for 
the presence
of interleave!2) to a scalar one for the passes where these 
functions would be used otherwise.
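A skeleton of such a backend type, with the member names listed above and stub bodies (hypothetical; the fast platform-specific implementations are deliberately left out), might look roughly like this:

    // Degenerate scalar backend sketch for the FFT template; the member
    // names follow the requirements listed above, the bodies are stubs.
    struct ScalarBackend(TScalar)
    {
        alias T   = TScalar;
        alias vec = TScalar;   // one-element "vector"
        enum vec_size = 1;

        static vec scalar_to_vector(T x) { return x; }

        static void bit_reverse_16(T* p0, T* p1, T* p2, T* p3, size_t i)
        {
            // swap the 16 elements at pn + i into bit-reversed order
        }

        static void bit_reverse_swap_16(T* p0, T* p1, T* p2, T* p3,
                                        size_t i, size_t j)
        {
            // bit-reverse both 16-element groups and exchange them
        }

        // interleave!n, deinterleave!n and complex_array_to_real_imag_vec
        // are optional; if interleave!2 is absent the algorithm falls back
        // to a scalar path for the passes that would use them.
    }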




Re: FFT in D (using SIMD) and benchmarks

2012-01-25 Thread a

On Wednesday, 25 January 2012 at 00:49:15 UTC, bearophile wrote:

a:

Because dmd currently doesn't have an intrinsic for the SHUFPS 
instruction I've included a version block with some GDC 
specific code (this gave me a speedup of up to 80%).


It seems an instruction worth having in dmd too.



Chart: http://cloud.github.com/downloads/jerro/pfft/image.png


I know your code is relatively simple, so it's not meant to be 
the fastest on the ground, but in your nice graph _as reference 
point_ I'd like to see a line for the FTTW too. Such line is 
able to show us how close or how far all this is from an 
industry standard performance.
(And if possible I'd like to see two lines for the LDC2 
compiler too.)


Bye,
bearophile


I have updated the graph now.


Re: FFT in D (using SIMD) and benchmarks

2012-01-24 Thread a

On Wednesday, 25 January 2012 at 00:49:15 UTC, bearophile wrote:

a:

Because dmd currently doesn't have an intrinsic for the SHUFPS 
instruction I've included a version block with some GDC 
specific code (this gave me a speedup of up to 80%).


It seems an instruction worth having in dmd too.



Chart: http://cloud.github.com/downloads/jerro/pfft/image.png


I know your code is relatively simple, so it's not meant to be 
the fastest on the ground, but in your nice graph _as reference 
point_ I'd like to see a line for the FFTW too. Such a line is 
able to show us how close or how far all this is from an 
industry standard performance.
(And if possible I'd like to see two lines for the LDC2 
compiler too.)


Bye,
bearophile


"bench" program in the fftw test directory gives this when run in 
a loop:



Problem: 4, setup: 21.00 us, time: 11.16 ns, ``mflops'': 3583.7
Problem: 8, setup: 21.00 us, time: 22.84 ns, ``mflops'': 5254.3
Problem: 16, setup: 24.00 us, time: 46.83 ns, ``mflops'': 6833.9
Problem: 32, setup: 290.00 us, time: 56.71 ns, ``mflops'': 14108
Problem: 64, setup: 1.00 ms, time: 111.47 ns, ``mflops'': 17225
Problem: 128, setup: 2.06 ms, time: 227.22 ns, ``mflops'': 19717
Problem: 256, setup: 3.99 ms, time: 499.48 ns, ``mflops'': 20501
Problem: 512, setup: 7.11 ms, time: 1.10 us, ``mflops'': 20958
Problem: 1024, setup: 14.51 ms, time: 2.47 us, ``mflops'': 20690
Problem: 2048, setup: 30.18 ms, time: 5.72 us, ``mflops'': 19693
Problem: 4096, setup: 61.20 ms, time: 13.20 us, ``mflops'': 18622
Problem: 8192, setup: 127.97 ms, time: 36.02 us, ``mflops'': 14784
Problem: 16384, setup: 252.58 ms, time: 82.43 us, ``mflops'': 13913
Problem: 32768, setup: 490.55 ms, time: 194.14 us, ``mflops'': 12659
Problem: 65536, setup: 1.13 s, time: 422.50 us, ``mflops'': 12409
Problem: 131072, setup: 2.67 s, time: 994.75 us, ``mflops'': 11200
Problem: 262144, setup: 5.77 s, time: 2.28 ms, ``mflops'': 10338
Problem: 524288, setup: 1.72 s, time: 9.50 ms, ``mflops'': 5243.4
Problem: 1048576, setup: 5.51 s, time: 20.55 ms, ``mflops'': 5102.8
Problem: 2097152, setup: 9.55 s, time: 42.88 ms, ``mflops'': 5135.2
Problem: 4194304, setup: 26.51 s, time: 88.56 ms, ``mflops'': 5209.8


This was with fftw compiled for single precision and with SSE, 
but without AVX support. When I compiled fftw with AVX support, 
the peak was at about 30 GFLOPS, IIRC. It is possible that it 
would be even faster if I configured it in a different way. The 
C++ version of my FFT also supports AVX and gets to about 24 
GFLOPS when using it. If AVX types are added to D, I will port 
that part too.


Re: FFT in D (using SIMD) and benchmarks

2012-01-24 Thread a
Did you run into any issues with GDC's implementation of 
vectors?


No, GDC vectors worked fine. It was a pretty straight forward 
port from C++ version using intrinsics from emmintrin.h.


FFT in D (using SIMD) and benchmarks

2012-01-24 Thread a
Since SIMD types were added to D I've ported an FFT that I was 
writing in C++ to D. The code is here:


https://github.com/jerro/pfft

Because dmd currently doesn't have an intrinsic for the SHUFPS 
instruction I've included a version block with some GDC specific 
code (this gave me a speedup of up to 80%). I've benchmarked the 
scalar and SSE version of code compiled with both DMD and GDC and 
also the c++ code using SSE. The results are below. The left 
column is base two logarithm of the array size and the right 
column is GFLOPS defined as the number of floating point 
operations that the most basic FFT algorithm would perform 
divided by the time taken (the algorithm I used performs just a 
bit less operations):


GFLOPS = 5 n log2(n) / (time for one FFT in nanoseconds)   (I 
took that definition from http://www.fftw.org/speed/ )
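This definition can be sanity-checked in a couple of lines (a small Python sketch, not part of the original post):

```python
import math

def gflops(n, time_ns):
    # GFLOPS as defined above: 5 * n * log2(n) nominal floating point
    # operations for an FFT of size n, divided by the time for one
    # transform in nanoseconds.
    return 5 * n * math.log2(n) / time_ns

# e.g. an n = 1024 transform taking 2.47 us works out to roughly 20.7 GFLOPS,
# matching the ~20690 "mflops" reported by the fftw bench output above.
print(round(gflops(1024, 2470), 1))
```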


Chart: http://cloud.github.com/downloads/jerro/pfft/image.png

Results:

GDC SSE:

2   0.833648
3   1.23383
4   6.92712
5   8.93348
6   10.9212
7   11.9306
8   12.5338
9   13.4025
10  13.5835
11  13.6992
12  13.493
13  12.7082
14  9.32621
15  9.15256
16  9.31431
17  8.38154
18  8.267
19  7.61852
20  7.14305
21  7.01786
22  6.58934

G++ SSE:

2   1.65933
3   1.96071
4   7.09683
5   9.66308
6   11.1498
7   11.9315
8   12.5712
9   13.4241
10  13.4907
11  13.6524
12  13.4215
13  12.6472
14  9.62755
15  9.24289
16  9.64412
17  8.88006
18  8.66819
19  8.28623
20  7.74581
21  7.6395
22  7.33506

GDC scalar:

2   0.808422
3   1.20835
4   2.66921
5   2.81166
6   2.99551
7   3.26423
8   3.61477
9   3.90741
10  4.04009
11  4.20405
12  4.21491
13  4.30896
14  3.79835
15  3.80497
16  3.94784
17  3.98417
18  3.58506
19  3.33992
20  3.42309
21  3.21923
22  3.25673

DMD SSE:

2   0.497946
3   0.773551
4   3.79912
5   3.78027
6   3.85155
7   4.06491
8   4.30895
9   4.53038
10  4.61006
11  4.82098
12  4.7455
13  4.85332
14  3.37768
15  3.44962
16  3.54049
17  3.40236
18  3.47339
19  3.40212
20  3.15997
21  3.32644
22  3.22767

DMD scalar:

2   0.478998
3   0.772341
4   1.6106
5   1.68516
6   1.7083
7   1.70625
8   1.68684
9   1.66931
10  1.66125
11  1.63756
12  1.61885
13  1.60459
14  1.402
15  1.39665
16  1.37894
17  1.36306
18  1.27189
19  1.21033
20  1.25719
21  1.21315
22  1.21606

SIMD gives between a 2x and 3.5x speedup for GDC-compiled code and 
between 2.5x and 3x for DMD. Code compiled with GDC is only a 
little slower than G++ (and only for some values of n), which is 
really nice.


Re: SIMD benchmark

2012-01-17 Thread a

On Wednesday, 18 January 2012 at 01:50:00 UTC, Timon Gehr wrote:

Anyway, I was after a general matrix*matrix multiplication, 
where the operands can get arbitrarily large and where any 
potential use of __restrict is rendered unnecessary by array 
vector ops.


Here you go. But I agree there are use cases for restrict where 
vector operations don't help


void matmul(A,B,C)(A a, B b, C c, size_t n, size_t m, size_t l)
{
    for(size_t i = 0; i < n; i++)
    {
        c[i*l..i*l + l] = 0;
        for(size_t j = 0; j < m; j++)
            c[i*l..i*l + l] += a[i*m + j] * b[j*l..j*l + l];
    }
}
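For illustration, here is the same row-slice formulation in plain Python (a sketch, not from the original post; the D slice operations above become explicit inner loops here):

```python
def matmul(a, b, c, n, m, l):
    """c (n x l) = a (n x m) * b (m x l); all flat, row-major lists."""
    for i in range(n):
        c[i*l:i*l + l] = [0.0] * l
        for j in range(m):
            s = a[i*m + j]          # scalar a[i][j]
            for k in range(l):      # c[i][..] += a[i][j] * b[j][..]
                c[i*l + k] += s * b[j*l + k]

# 2x2 example: [[1,2],[3,4]] * [[5,6],[7,8]] = [[19,22],[43,50]]
c = [0.0] * 4
matmul([1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0], c, 2, 2, 2)
```

Because the innermost update touches a contiguous slice of c with a single scalar multiplier, it maps directly onto D array vector ops (and onto SIMD) with no aliasing concerns, which is the point being made about __restrict.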




Re: Discussion about D at a C++ forum

2012-01-11 Thread a
Alexander Malakhov Wrote:

> And even if that will happen, D1 page most likely will be deleted 
> later due to little visits count

They are actually deleting pages due to low visit counts? This is just wrong.


Re: SIMD support...

2012-01-08 Thread a

On Sunday, 8 January 2012 at 01:48:34 UTC, Manu wrote:
On 8 January 2012 03:44, Walter Bright wrote:



On 1/7/2012 4:54 PM, Peter Alexander wrote:


I think it simply requires a lot of work in the compiler.



Not that much work. Most of it segues nicely into the previous 
work I did supporting the XMM floating point code gen.



What is this previous work you speak of? Is there already XMM 
stuff in there somewhere?


DMD (at least 64 bit on linux, I'm not sure about 32 bit) now 
uses XMM registers and instructions that work on them (addss, 
addsd, mulsd...) for scalar floating point operations.


Re: SIMD support...

2012-01-06 Thread a
Walter Bright Wrote:

> which provides two functions:
> 
> __v128 simdop(operator, __v128 op1);
> __v128 simdop(operator, __v128 op1, __v128 op2);

You would also need functions that take an immediate too to support 
instructions such as shufps. 

> One caveat is it is typeless; a __v128 could be used as 4 packed ints or 2 
> packed doubles. One problem with making it typed is it'll add 10 more types 
> to 
> the base compiler, instead of one. Maybe we should just bite the bullet and 
> do 
> the types:
> 
>  __vdouble2
>  __vfloat4
>  __vlong2
>  __vulong2
>  __vint4
>  __vuint4
>  __vshort8
>  __vushort8
>      __vbyte16
>  __vubyte16

I don't see it being typeless as a problem. The purpose of this is to expose 
hardware capabilities to D code and the vector registers are typeless, so why 
shouldn't vector type be "typeless" too? Types such as vfloat4 can be 
implemented in a library (which could also be made portable and have a nice 
API).


Re: SIMD support...

2012-01-06 Thread a
a Wrote:

> > Would this tie SIMD support directly to x86/x86_64, or would it
> > possible to also support NEON on ARM (also 128 bit SIMD, see
> > http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0409g/index.html
> > ) ?
> > (Obviously not for DMD, but if the syntax wasn't directly tied to
> > x86/64, GDC and LDC could support this)
> > It seems like using a standard naming convention instead of directly
> > referencing instructions could let the underlying SIMD instructions
> > vary across platforms, but I don't know enough about the technologies
> > to say whether NEON's capabilities match SSE closely enough that they
> > could be handled the same way.
> 
> For NEON you would need at least a function with a signature:
> 
> __v128 simdop(operator, __v128 op1, __v128 op2,  __v128 op3);
> 
> since many NEON instructions operate on three registers.  

Disregard that, I wasn't paying attention to the return type. What Walter 
proposed can already handle three-operand NEON instructions. 


Re: SIMD support...

2012-01-06 Thread a
> Would this tie SIMD support directly to x86/x86_64, or would it
> possible to also support NEON on ARM (also 128 bit SIMD, see
> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0409g/index.html
> ) ?
> (Obviously not for DMD, but if the syntax wasn't directly tied to
> x86/64, GDC and LDC could support this)
> It seems like using a standard naming convention instead of directly
> referencing instructions could let the underlying SIMD instructions
> vary across platforms, but I don't know enough about the technologies
> to say whether NEON's capabilities match SSE closely enough that they
> could be handled the same way.

For NEON you would need at least a function with a signature:

__v128 simdop(operator, __v128 op1, __v128 op2,  __v128 op3);

since many NEON instructions operate on three registers.  


Re: System programming in D (Was: The God Language)

2012-01-05 Thread a

> A language defined 128bit SIMD type would be fine for basically all
> architectures. Even though they support different operations on these
> registers, the size and allocation patterns are always the same across all
> architectures; 128 bits, 16byte aligned, etc. This allows at minimum
> platform independent expression of structures containing simd data, and
> calling of functions passing these types as args.

You forgot about AVX. It uses 256 bit registers and is supported in new Intel 
and AMD processors.



Re: string is rarely useful as a function argument

2012-01-01 Thread a
> Meh, I'd still prefer it be an array of UTF-8 code /points/ represented
> by an array of bytes (which are the UTF-8 code units).

By saying you want an array of code points you already define a
representation. And if you want that, there already is dchar[]. You probably
meant a range of code points represented by an array of code units. But
such a range can't have opIndex, since opIndex implies a constant-time
operation.  If you want the nth element of the range, you can use std.range.drop
or write your own nth() function.  
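The cost argument can be illustrated in Python (a sketch, not from the original post): over UTF-8 code units, reaching the nth code point is inherently a linear scan, because code points occupy a variable number of bytes.

```python
s = "héllo"
units = s.encode("utf-8")   # UTF-8 code units (bytes)
assert len(units) == 6      # 'é' takes two bytes
assert len(s) == 5          # but the string has only five code points

def nth_code_point(units: bytes, n: int) -> str:
    # O(n) scan: continuation bytes match 0b10xxxxxx, so count only
    # the bytes that start a new code point.
    count = -1
    for i, byte in enumerate(units):
        if byte & 0xC0 != 0x80:   # start of a new code point
            count += 1
        if count == n:
            j = i + 1             # extend to the next start byte
            while j < len(units) and units[j] & 0xC0 == 0x80:
                j += 1
            return units[i:j].decode("utf-8")
    raise IndexError(n)
```

This is essentially what a drop-then-decode over a code-unit array has to do; no constant-time opIndex is possible without first decoding to dchar[].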


Re: System programming in D (Was: The God Language)

2011-12-29 Thread a
Walter Bright Wrote:

> On 12/29/2011 5:13 AM, a wrote:
> > The needless loads and stores would make it impossible to write an efficient
> > simd add function even if the functions containing asm blocks could be
> > inlined.
> 
> This does what you're asking for:
> 
> void test(ref float a, ref float b)
> {
>  asm
>  {
>  naked;
>  movaps  XMM0,[RSI];
>  addps   XMM0,[RDI];
>  movaps  [RSI],XMM0;
>  movaps  XMM0,[RSI];
>  addps   XMM0,[RDI];
>  movaps  [RSI],XMM0;
>  ret;
>  }
> }

What I want is to be able to write short functions using inline assembly and 
have them inlined and compiled even to a single instruction where possible. 
This can be done with gcc. See my post here: 
http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=153879


Re: System programming in D (Was: The God Language)

2011-12-29 Thread a
David Nadlinger Wrote:

> On 12/29/11 2:13 PM, a wrote:
> > void test(ref V a, ref V b)
> > {
> >  asm
> >  {
> >  movaps XMM0, a;
> >      addps  XMM0, b;
> >  movaps a, XMM0;
> >  }
> >  asm
> >  {
> >      movaps XMM0, a;
> >  addps  XMM0, b;
> >  movaps a, XMM0;
> >  }
> > }
> >
> > […]
> >
> > The needless loads and stores would make it impossible to write an efficient 
> > simd add function even if the functions containing asm blocks could be 
> > inlined.
> 
> Yes, this is indeed a problem, and as far as I'm aware, usually solved 
> in the gamedev world by using the (SSE) intrinsics your favorite C++ 
> compiler provides, instead of resorting to inline asm.
> 
> David

IIRC Walter doesn't want to add vector intrinsics, so it would be nice if the 
functions to do vector operations could be efficiently  written using inline 
assembly.  It would also be a more general solution than having intrinsics. 
Something like that is possible with gcc extended inline assembly. For example 
this: 

typedef float v4sf __attribute__((vector_size(16)));

void vadd(v4sf *a, v4sf *b)
{
asm(
"addps %1, %0" 
: "=x" (*a) 
: "x" (*b), "0" (*a)
: );
}

void test(float * __restrict__ a, float * __restrict__ b)
{
v4sf * va = (v4sf*) a;
v4sf * vb = (v4sf*) b;
vadd(va,vb);
vadd(va,vb);
vadd(va,vb);
vadd(va,vb);
}

compiles to:

004004c0 <test>:
  4004c0:   0f 28 0e    movaps (%rsi),%xmm1
  4004c3:   0f 28 07    movaps (%rdi),%xmm0
  4004c6:   0f 58 c1    addps  %xmm1,%xmm0
  4004c9:   0f 58 c1    addps  %xmm1,%xmm0
  4004cc:   0f 58 c1    addps  %xmm1,%xmm0
  4004cf:   0f 58 c1    addps  %xmm1,%xmm0
  4004d2:   0f 29 07    movaps %xmm0,(%rdi)

This should also be possible with GDC, but I couldn't figure out how to get 
something like __restrict__ (if you want to use vector types and gcc extended 
inline assembly with GDC, see 
http://www.digitalmars.com/d/archives/D/gnu/Support_for_gcc_vector_attributes_SIMD_builtins_3778.html
 and https://bitbucket.org/goshawk/gdc/wiki/UserDocumentation).

