Going to dig up an old thread: I would still love to see
from module import type foo
I'm a little disconcerted that I cannot get your point... Exactly why it "_is
clearly undefined_", or maybe what is meant by that. E.g.:
type
  T1 = distinct int
  T2 = distinct int
  M = distinct int
proc `*`(x, y: M): M {.borrow.}
var A = 3.T1
var B = 5.T2
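For reference, a minimal self-contained sketch of how that snippet behaves under current Nim (the commented-out line is the "clearly undefined" case from the discussion):

```nim
type
  T1 = distinct int
  T2 = distinct int
  M = distinct int

proc `*`(x, y: M): M {.borrow.}  # `*` is borrowed from int for M

let a = M(3) * M(5)  # compiles: `*` exists for (M, M)
echo int(a)          # 15

# let b = 3.T1 * 5.T2  # does not compile: no `*` for (T1, T2)
```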
Hmm, and what if `'M` represented a scope/namespace instead? One keeps the
exact same basic type (`NDArray`) but by appending `'M`, that namespace is
looked for first...
This would be redundant with full names, but it would work with operators...
> then what is type(A:T1'M*B:T2'M)?
>
> Regarding `A:T1'M * B:T2'M`, `*(x: M, y:M): M` is clearly undefined and
> explicit casting would be necessary anyway.
Well, just as I said, if the return type is undefined, then the final type is
undefined and explicit casting would be necessary. (I am
**@lltp**: I've meant, if `type(A'M*A'M) == type(A)`, then what is
`type(A:T1'M*B:T2'M)`?
**@LeuGim**: in my mind, `'M` would really have been a "view" of the underlying
data. If `A` or `B` cannot be represented as `M` (i.e. for now, are not a
distinct type of it), then it should produce a compile error. Maybe an
"ambiguous view" error, just like we have for ambiguous function definitions?
**@andrea**: More seriously, I see your point (I was not aware that these type
conversions were really free; it's still leaky, but better than nothing).
I am going to focus on writing an actual library now, see how it turns out and
what the pain points are. Then we will see how things can be
@lltp, regarding `'M`:
That makes sense for that example, but what about `A'M * B'M` (or `A'M * B'N`),
where `A` and `B` are of different types?
@LeuGim, well yes and no... AFAIK `cast[M](A)` would be of type `M` which is
problematic due to type leaking:
(A'M * A'M) * A
# shouldn't be translated like this:
cast[M](A) * cast[M](A) * A
# but like this instead:
cast[NDArray](cast[M](A) * cast[M](A)) * A
With current Nim (no changes needed) it would look like:
echo (A + NDArray(M(A) * M(A)))
No runtime conversion overhead: the types are distinct, but in memory they
are the same.
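To make the zero-overhead claim concrete, here is a toy, self-contained sketch: a scalar `distinct int` stands in for a real `NDArray`, and the `toM`/`toND` helper names are hypothetical, not from any library:

```nim
type
  NDArray = distinct int  # scalar stand-in for a real array type
  M = distinct int        # "matrix" view over the same representation

proc `+`(x, y: NDArray): NDArray {.borrow.}
proc `*`(x, y: M): M {.borrow.}

# zero-cost reinterpretations: only the static type changes
proc toM(a: NDArray): M = M(int(a))
proc toND(m: M): NDArray = NDArray(int(m))

let A = NDArray(2)
let r = A + toND(toM(A) * toM(A))  # element-wise `+`, matrix-style `*`
echo int(r)  # 2 + 2*2 = 6
```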
@lltp this is exactly what I said
`A'M` just meaning `cast[M](A)`?
No, it is I who am sorry for not being clear enough.
> So you want the `from foo import type T` feature to get the namespacing
> benefits while at the same time you argue distinct types do not work well in
> your problem domain.
My use case is basically using distinct types with one or more
**@Araq**: What I meant is that with proper namespace support (that is already
mainly here anyway) we are not constrained to follow Matlab's lead as it's been
the case in most scientific libraries (including numpy/scipy) these past few
years. Matlab's way is to provide vector and matrix
As I understand it, your current issue is actually finding which functions are
in which module/file, and by using namespaces you could immediately know where
to look, is this correct? If so, in the end it would be down to the library
author whether to create a namespace or not. IMHO, it shouldn't be Nim's problem.
> With proper namespace support, this is not really a problem: I can still
> calculate the element-wise exponential on a (multidimensional) array for
> instance without having to prepend anything while also being able to
> calculate a matrix exponential using full names...
I don't understand
@andrea: in addition to what @mmierzwa said (which is the main reason for all
of this), there are also the problems of convenience and generality.
In short, I don't want to cast an `array[int]` to a `seq[int]` each time I want an
element-wise operation, because such casts would occur at nearly each
Duuude! "Namespace pollution" is, like, a bummer for like them like twentieth
century square types in neckties and polished shoes, whose boss like reads code
in Notepad and stuff. We got some way new far out ideas here, bro, like major
consciousness expanding nimXperience! Embrace the anarchy,
Andrea, whilst it's true that this is on the library creator's side, we may
still want to decide how it looks in our own code, e.g. when there are two
libraries whose creators did not use distinct types.
(by the way, I think that the right way to handle different products is to
actually use the symbols that are in common use and allow to write
let x = v ⊗ w
maybe as an alias for
let x = v.tensor(w)
for the case when one does not want to copy paste the
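A sketch of such an alias. The toy outer product here merely stands in for a real tensor product, and Unicode operators like `⊗` need `{.experimental: "unicodeOperators".}` in recent Nim:

```nim
{.experimental: "unicodeOperators".}

# toy stand-in for a real tensor product
proc tensor(a, b: seq[int]): seq[int] =
  for x in a:
    for y in b:
      result.add x * y

# `⊗` as a thin alias over the named proc
template `⊗`(a, b: seq[int]): seq[int] = a.tensor(b)

let v = @[1, 2]
let w = @[3, 4]
let x = v ⊗ w
doAssert x == @[3, 4, 6, 8]
```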
Just one quick comment. Quoting the OP:
> Well, then it is useless. I am from a domain (science) where similar concepts
> are generally called a similar way and where clashes occur a lot because
> something as simple as * has a gazillion meanings depending on the context
> (and all of them
Ok, but no ETAs. Also: We need to decide if return types are considered too.
from json import type JsonNode
let x = parseJson("{}")
Valid? Invalid? Why?
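For comparison, a sketch of what current Nim does when only the type is imported: the rest of the module stays reachable through qualified names.

```nim
from json import JsonNode  # only the JsonNode symbol is imported

# parseJson was not imported, but the qualified name still works:
let x: JsonNode = json.parseJson("""{"a": 1}""")
echo x.kind  # JObject
```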
So, I experimented quite a lot...
**@Araq**: you were completely right: a `from Foo import type Bar` would actually
be awesome! Can we have that?
@Jehan, yglukhov: I wonder how clashes would then be handled with a
template-based approach:
import Foo, Bar
...
baz(x)
with baz defined in both Foo and Bar...
@Krux02: I may be the only one to think this, but "openFile" seems just as good
(from a syntax perspective) as "File.open" while the second form is far more
flexible, offers more guarantees and is more easily discoverable and fixable
should a problem occur...
As far as I am concerned, I don't
D supports local imports and has to deal with overloading. Nim supports the
following:
from foo import nil
block: # Start of scope
  template someSymbolFromFoo(a: int): int = foo.someSymbolFromFoo(a)
  type SomeTypeFromFoo = foo.SomeTypeFromFoo
  # Use
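The same pattern, runnable against a stdlib module (`toUpper` here is just a local alias name I picked, not part of strutils):

```nim
from strutils import nil  # bring in the module, but no symbols

block:  # scope of the "local import"
  template toUpper(s: string): string = strutils.toUpperAscii(s)
  echo toUpper("nim")  # NIM

# outside the block the alias is gone; qualified access still works
echo strutils.toUpperAscii("ok")  # OK
```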
Local imports is something I would also vote for. I bet they should be easy to
implement. @Araq, what do you think?
**yglukhov:** _I bet they should be easy to implement._
Well, first there's also the problem of semantics.
OCaml, SML, and Python, as three languages that support them, don't have to worry
about overloading: definitions shadow each other, so that the innermost import
always wins. With
> Well, then it is useless. I am from a domain (science) where similar concepts
> are generally called a similar way and where clashes occur a lot because
> something as simple as * has a gazillion meanings depending on the context
> (and all of them applied to arrays of any dimensions, so no
@Jehan: I see, I did not understand correctly, but it is clear now. It's akin to
scoped imports then and it seems far better for scientific computing
(especially for operators).
**lltp:** _@Jehan, could you elaborate a little bit on how ML-type module
management would help for scientific computing because I am still not sure that
I have grasped everything._
The problem that local imports solve is to provide a midpoint between the
extremes of putting identifiers in the
> modules in Nim provide no intrinsic guarantees about that splitting
There are not many languages that provide such guarantees. C guarantees
nothing. C++ guarantees a member to be defined in the class definition or its
superclasses, which in turn are not guaranteed to reside in any
Hi! (please bear with the length, I promise this is my last long post)
* * *
@Araq, I quite agree with @Jehan's view that eventually, a reasonable developer
would try to provide structural guarantees in his code and in that sense, your
suggestion (which makes sense in general) would seem weird
Well, sorry for earlier: it is just that if encapsulation is seen as "fighting
the language", then this is clearly not a good fit for me. And that happened in
one of the worst times for me: I am basically adapting a multigrid solver
algorithm to two code bases, one in Python, the other in
> A better solution would be ML-style module management, especially local
> imports
I've asked for local imports a few times but from what I've read Araq is
opposed to the idea, citing issues in D's version of this language feature.
I'll go out on a limb and assume he means
> Should be less work for the compiler, and any set of symbols can be combined,
> not just one type; so a module can be logically subdivided in any possibly
> overlapping parts, not affecting its inner structure anyhow.
Nice idea but who wants to write these export groupings? I don't. And much
> from Module import type Foo
Then maybe more explicit, something like:
# module A
type
  Person* = object
    user*: string
    age*: int
proc `$`*(p: Person): string = p.user
export (Person, `$`*(p: Person): string) as C
#
> Oh, and remove my account please.
That really sucks to see. I agree with your reaction though, if I saw
**@Araq's** reply I would be discouraged as well. **@Araq's** demonstration is
fair, but he should have mentioned his willingness to improve the situation
right there in his initial post,
> Oh, and remove my account please.
I will do so if you ask again, but as peace-offering how about this proposal:
from Module import type Foo
# imports Foo and every operation that acts on this type (or returns it?)
> I am not asking to make strong namespacing the default, just to improve their
> support to help people do their job in the best way possible (again, without
> hindering existing people). But just addressing shortcomings to make the
> language better without touching to what already exists
Oh, and remove my account please.
Well, then it is useless. I am from a domain (science) where similar concepts
are generally called a similar way and where clashes occur a lot because
something as simple as * has a gazillion meanings depending on the context (and
all of them applied to arrays of any dimensions, so no useful
# module A
type
  Person* = object
    user*: string
    age*: int
proc `$`*(p: Person): string = p.user

# module B
from A import Person
var x = Person(user: "Gustav Gans", age: 45)
echo x # oops, bug here, calls system.$ not the $
As I am rediscovering namespaces as context indicators, I naturally tried to
probe the limits of the current system, to see what I can do and what I can't.
# FooBar.nim
proc f(x: int): int =
  return x + 2

# main.nim (or whatever)
from FooBar as baz