On Sun, 18 Mar 2012 04:03:29 -0400, Walter Bright
<newshou...@digitalmars.com> wrote:
> On 3/17/2012 10:01 PM, F i L wrote:
>> Walter Bright wrote:
>>> My impression is it is just obfuscation around a simple lazy
>>> initialization pattern. While I can see the abstraction usefulness
>>> of compile-time attribute metadata, I am having a hard time seeing
>>> what the gain is with runtime attributes over more traditional
>>> techniques.
>> I'm not sure exactly what you mean by runtime attributes. Do you
>> mean the ability to reflect upon attributes at runtime?
> I mean there is modifiable-at-runtime, instance-specific data.
I don't think we should go this far with attributes. If you want to store
instance-specific data, we already have a place for that. I haven't been
exactly grokking all of this thread, but I understand what attributes are
useful for, having used them quite a bit in C#.
> Again, I am just not seeing the leap in power with this. It's a mystery
> to me how a user defined "attribute class" is different in any way from
> a user defined class, and what advantage special syntax gives a standard
> pattern for lazy initialization, and why extra attribute fields can't be
> just, well, fields.
The advantage is it hooks you into the compiler's generation of metadata
and runtime type information (i.e. TypeInfo). It basically allows you to
describe your type in ways that the compiler doesn't natively support.
It's purely a type-description thing; I think it has nothing to do with
instances.
The serializable example is probably cited most because it's one of the
most commonly used attributes in C#/Java. Given that neither language has
templates, the impact there is somewhat greater than in D. But Jacob has
quite a bit of experience with serialization of types, having written a
serializer for D, so he would be the one to ask why it's better. I think
another great example is data you can pass to the GC (if we ever get
precise scanning).
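To illustrate the "type-description" point above, here is a minimal sketch of a compile-time-queryable marker attribute, using the @-attribute syntax and std.traits.hasUDA that D eventually gained (the serializable name is just an example, not an existing library attribute):

```d
import std.traits : hasUDA;

// A marker attribute: just a symbol the compiler records in the
// type's metadata. It carries no runtime state of its own.
enum serializable;

@serializable struct Point
{
    int x, y;
}

// Library code can consult the metadata at compile time:
static assert(hasUDA!(Point, serializable));
```

The attribute never touches instances of Point; it only annotates the type, which is exactly the description-vs-data distinction being drawn here.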
In a previous life, while using C#, I created a "scripting" system that
used objects to define which methods were run during certain test processes
(the application managed the automated testing of servers remotely). The
entire script was constructed from an XML file, and the library that
edited/built the script had no access to or knowledge of the types it was
constructing; it used pure reflection to do everything. The cool thing
(in my mind, at least) was that I had a script-builder application which
"opened" the executable to find all the scripting objects. It did not
need to be recompiled, nor did it directly link with a library containing
the script objects, because it used reflection to determine everything it
needed. Some specific attributes weren't available via reflection, so I
just added some user-defined attributes, and in that way I was able to
extend the metadata to pass information to the script-building application.
I also think attributes can make code more readable and easier to write,
because you can specify/convey information directly rather than
interpreting it, making assumptions, or inventing types that have nothing
to do with the way the type works.
For example, you could do something like:

struct noSerInt
{
    int _val;
    alias _val this;
}

struct X
{
    noSerInt x; // clue to not serialize x
}
This is extra complication that is not needed when using an instance of
X *except* when serializing it. I think it's cleaner to put this
information into the metadata instead of having to re-implement int.
Compare to:

struct X
{
    @noSer int x;
}
Where you can safely ignore the @noSer attribute for most intents and
purposes.
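A sketch of how a serializer could honor such a marker, again assuming the @-attribute syntax and std.traits helpers that D eventually gained (noSer and serializableFields are hypothetical names for illustration):

```d
import std.traits : hasUDA, FieldNameTuple;

enum noSer; // marker attribute: "do not serialize this field"

struct X
{
    @noSer int x;
    int y;
}

// A serializer walks the fields of T and consults the metadata,
// skipping anything tagged @noSer:
string[] serializableFields(T)()
{
    string[] names;
    static foreach (name; FieldNameTuple!T)
    {
        static if (!hasUDA!(__traits(getMember, T, name), noSer))
            names ~= name;
    }
    return names;
}

unittest
{
    assert(serializableFields!X() == ["y"]); // x is skipped
}
```

Normal users of X never see any of this machinery; only the serializer inspects the attribute.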
I think the sole purpose of attributes is compile-time generation of
compile-time- and run-time-queryable data. Nothing else really makes
sense to me.
-Steve