Re: bigint - python long
On 2012-09-06 04:10, bearophile wrote:
> There are several important cases, like: Some D lazy ranges == Python lazy iterators/generators array.array == D arrays NumPy arrays == D arrays

Associative arrays?

--
/Jacob Carlborg
Re: bigint - python long
Sorry, I missed earlier bits of this thread…

On Wed, 2012-09-05 at 19:37 -0700, Ellery Newcomer wrote:
> On 09/05/2012 07:10 PM, bearophile wrote:
> > Some D lazy ranges == Python lazy iterators/generators
> I'll look into this one.
> > array.array == D arrays
> just checked, looks like we have it:
> PyStmts(q{from array import array; a = array('i', [44,33,22,11]);}, testing);
> assert(PyEval!(int[])(a, testing) == [44,33,22,11]);
> I think if the python object is iterable, it can be converted to array.

I am guessing this is interfacing to CPython; remember there is also PyPy and ActiveState, and they have different ways of doing things. Well, the ActiveState C API will be very close to the CPython C API, but PyPy (which is the best Python 2.7 just now) doesn't have a C API since it is written in RPython.

> Matrices might pose a problem. But the user can define custom conversions if need be.
> > NumPy arrays == D arrays
> I have never used NumPy. Are its arrays special?

Oh yes. NumPy is basically a subsystem that Python applications make calls into. Although the data structure can be accessed and amended, algorithms on the data structures should never be written in Python; they should always be function calls into the NumPy framework.

> > Also, which of the following looks more appealing to you?
> I don't know.

OK, I'll be lazy then.

--
Russel.
Dr Russel Winder  t: +44 20 7585 2200  voip: sip:russel.win...@ekiga.net
41 Buckmaster Road  m: +44 7770 465 077  xmpp: rus...@winder.org.uk
London SW11 1EN, UK  w: www.russel.org.uk  skype: russel_winder
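The point about NumPy being a subsystem you call into, rather than a data structure you loop over in Python, can be illustrated with a small sketch (assuming NumPy is installed; this is not part of pyd):

```python
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)

# Idiomatic NumPy: one vectorized expression, which dispatches into
# NumPy's compiled internals instead of looping in Python.
b = a * 2.0 + 1.0

# The anti-pattern the post warns against would be the equivalent
# element-by-element Python loop, which is orders of magnitude slower.
```

This is also why converting a NumPy array is different from converting a list: the data sits in one typed C buffer, not in a sequence of Python objects.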
Re: bigint - python long
On 05/09/12 21:23, Paul D. Anderson wrote:
> On Wednesday, 5 September 2012 at 18:13:40 UTC, Ellery Newcomer wrote:
> > Hey. Investigating the possibility of providing this conversion in pyd. Python provides an API for accessing the underlying bytes. std.bigint seemingly doesn't. Am I missing anything?
> No, I don't believe so. AFAIK there is no public access to the underlying array, but I think it is a good idea. I suspect the reason for not disclosing the details is to disallow anyone putting the data into an invalid state. But read-only access would be safe.

No, it's just not disclosed because I didn't know the best way to do it. I didn't want to put something in unless I was sure it was correct. (And a key part of that is what is required to implement BigFloat.)
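For reference, on the Python side the underlying bytes of an arbitrary-precision integer are easy to get at. A minimal Python 3 sketch (the thread predates this API; pyd itself would use the C-level equivalent such as _PyLong_AsByteArray):

```python
n = 12345678901234567890

# Number of bytes needed to hold n's magnitude.
nbytes = (n.bit_length() + 7) // 8

# Serialize to raw little-endian bytes and back; this round trip is
# essentially what a bigint <-> Python long conversion needs.
raw = n.to_bytes(nbytes, byteorder="little")
back = int.from_bytes(raw, byteorder="little")
```

It is exactly this kind of read-only byte access that std.bigint was missing at the time.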
Re: How to have strongly typed numerical values?
Thank you everyone for your replies. Initially I think I am going to roll my own, as my requirements are fairly simple compared to a full-blown system with units of measure. My goal is to have a large number of application-specific types which may be implemented as the same type but are semantically different.

A real-world case in which I have used this approach was when cleaning up a large body of code using a mixture of time units (sec, msec, µsec) stored as 32- and 64-bit signed and unsigned integers. We needed to standardize on µsec, and explicit typing greatly simplified the migration process. I have also worked on projects in the past where we speculated on the benefits of using strong typing to avoid incorrect mixing of matrices (world, model, bone, etc.).

I think there is a lot of value to be had in simply requiring explicit casting; however, I am less convinced by systems that allow implicit combining of units of the same quantity type. I feel part of the type is its range and precision, and so there is no valid way to implicitly add kilometers to millimeters, for example.

I hope that at some point support for semantically different versions of the same type will be added to the standard library, but for now I am happy to write my own. D is still a new language to me and it has some very nice features to help accomplish this.
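The "same representation, semantically different types, explicit conversion only" approach described above can be sketched in a few lines. This is a hypothetical illustration (the names are invented, and the original work was in D):

```python
class Microseconds:
    """A time value; shares its representation (int) with other units."""
    __slots__ = ("value",)

    def __init__(self, value: int):
        self.value = value

    def __add__(self, other):
        # Only the same unit type may be added; no implicit mixing.
        if type(other) is not Microseconds:
            return NotImplemented
        return Microseconds(self.value + other.value)


class Milliseconds:
    __slots__ = ("value",)

    def __init__(self, value: int):
        self.value = value

    def to_microseconds(self) -> Microseconds:
        # Conversion between units is always an explicit call.
        return Microseconds(self.value * 1000)


total = Microseconds(500) + Milliseconds(2).to_microseconds()
```

Adding `Microseconds` and `Milliseconds` directly raises a TypeError, which is precisely the migration-time safety net described in the post.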
Re: bigint - python long
Ellery Newcomer:
> > array.array == D arrays
> just checked, looks like we have it:
> PyStmts(q{from array import array; a = array('i', [44,33,22,11]);}, testing);
> assert(PyEval!(int[])(a, testing) == [44,33,22,11]);
> I think if the python object is iterable, it can be converted to array.

array.array is special; it isn't a Python list. array.array contains uniform data, so conversion to D arrays is a memcpy (or it's nearly nothing if you don't copy the data).

Bye,
bearophile
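The "uniform data" point can be seen from Python itself: unlike a list of boxed objects, an array.array stores its elements in one contiguous C buffer of fixed-size items, which is what makes a memcpy-style conversion to a D `int[]` possible. A small sketch:

```python
from array import array

a = array('i', [44, 33, 22, 11])

# buffer_info() returns (start address, number of elements) of the
# single contiguous C buffer backing the array.
address, length = a.buffer_info()

# Total byte count a memcpy into a D int[] would move.
nbytes = length * a.itemsize
```

A plain Python list offers no such guarantee; each element is a separate heap object, so converting it requires iterating element by element.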
Re: How to have strongly typed numerical values?
Nicholas Londey:
> however I am less convinced on systems that allow implicit combining of units of the same quantity type. I feel part of the type is its range and precision and so there is no valid way to implicitly add kilometers to millimeters for example.

I see. If the range and precision are statically known then it's possible to design types that contain such values too. If they are known at run-time they need some run-time tests.

Bye,
bearophile
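One way to read "design types that contain such values too" is to bake the scale factor into the type itself, so kilometers and millimeters are distinct types and only explicit conversion crosses between them. A hypothetical sketch (invented names, not any library's API):

```python
def make_length_unit(name, metres_per_unit):
    """Build a distinct unit type carrying its scale as class data."""
    class Unit:
        def __init__(self, value):
            self.value = value

        def __add__(self, other):
            # A different unit type is simply rejected.
            if type(other) is not Unit:
                return NotImplemented
            return Unit(self.value + other.value)

        def to(self, other_unit):
            # Crossing scales requires an explicit conversion call.
            factor = metres_per_unit / other_unit.metres_per_unit
            return other_unit(self.value * factor)

    Unit.__name__ = name
    Unit.metres_per_unit = metres_per_unit
    return Unit


Kilometres = make_length_unit("Kilometres", 1000.0)
Millimetres = make_length_unit("Millimetres", 0.001)

d = Kilometres(2).to(Millimetres)
```

Here the conversion is visible at the call site, matching the position in the thread that mixing scales should never happen implicitly.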
Re: How to have strongly typed numerical values?
On Wednesday, 5 September 2012 at 05:53:49 UTC, anonymous wrote:
> I noticed two flaws in std.units: 1) Can't add quantities of the same type (this is probably trivial to fix). 2) Different scopes don't make different quantity types.

Can you elaborate on that? I must admit that I haven't actively worked on std.units for quite a while now, as general interest in it seemed to have faded (I'm glad to be proven wrong, though), but adding quantities of the same type should definitely work. And what do you mean by different scopes? If the unit types are different, the quantity types should be different as well.

David
Re: How to have strongly typed numerical values?
On Thursday, 6 September 2012 at 12:22:08 UTC, David Nadlinger wrote:
> Can you elaborate on that? I must admit that I didn't actively work on std.units for quite some while now, as general interest in it seemed to have faded (I'm glad to be proven wrong, though), but adding quantities of the same type should definitely work.

Maybe I'm missing something fundamental, but this little test fails:
---
auto foo = baseUnit!foo;
auto foo2 = foo + foo;
---
Error: incompatible types for ((foo) + (foo)): 'BaseUnit!(foo,null)' and 'BaseUnit!(foo,null)'

> And what do you mean by different scopes? If the unit types are different, the quantity types should be different as well.

In the following, S1.foo and S2.foo are of the same type. I think they shouldn't be; just like S1.Bar and S2.Bar are different types.
---
struct S1 {enum foo = baseUnit!foo; struct Bar {}}
struct S2 {enum foo = baseUnit!foo; struct Bar {}}
---
Re: bigint - python long
On 09/05/2012 11:19 PM, Jacob Carlborg wrote:
> On 2012-09-06 04:10, bearophile wrote:
> > There are several important cases, like: Some D lazy ranges == Python lazy iterators/generators array.array == D arrays NumPy arrays == D arrays
> Associative arrays?

check. https://bitbucket.org/ariovistus/pyd/wiki/TypeConversion
Re: bigint - python long
On 09/06/2012 04:11 AM, bearophile wrote:
> Ellery Newcomer:
> > array.array == D arrays just checked, looks like we have it: PyStmts(q{from array import array; a = array('i', [44,33,22,11]);}, testing); assert(PyEval!(int[])(a, testing) == [44,33,22,11]); I think if the python object is iterable, it can be converted to array.
> array.array is special; it isn't a Python list. array.array contains uniform data, so conversion to D arrays is a memcpy (or it's nearly nothing if you don't copy the data).
> Bye,
> bearophile

I see. The docs for array.array suggest that it implements the buffer interface, but it doesn't seem to implement new- or old-style buffers, at least according to PyObject_CheckBuffer and PyBuffer_Check. I think I'll add support for new-style buffers anyway; a memoryview would be good, too. Guess I'll hack together a special case for array using buffer_info.
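The buffer_info special case mentioned above can be demonstrated from pure Python: the (address, length) pair it returns is enough for a C-level consumer to copy the elements directly, which is presumably what the pyd hack does. A sketch using ctypes to stand in for the C side:

```python
from array import array
import ctypes

a = array('i', [44, 33, 22, 11])

# buffer_info() exposes the raw buffer as (start address, element count).
address, length = a.buffer_info()

# Read the memory back through that address, as a C consumer would.
# ('i' corresponds to a C int, matching ctypes.c_int.)
buf_type = ctypes.c_int * length
copied = list(buf_type.from_address(address))
```

This sidesteps the buffer protocol entirely, at the cost of being array.array-specific; hence "special case".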
Re: Use .get() in MultiD Assoc Array?
On Saturday, 1 September 2012 at 05:23:35 UTC, Ali Çehreli wrote:
> On 08/31/2012 11:55 AM, Ali Çehreli wrote:
> > class MyTable [...] // Enables the 'auto element = myIndex in myTable' syntax
> That's wrong. For that syntax to work, the operator below should have been opBinaryRight.
> > string * opBinary(string op)(Index index)
> Yeah, that should have been opBinaryRight. (And the badly designed Thunderbird removes the indentation in quoted text. Smart application or stupid designer?)
> > // Enables 'auto value = myTable[myIndex]'
> > ref string opIndex(Index index) { string * result = this.opBinary!in(index);
> If I had defined opBinaryRight as I should have, then I could simply use the 'in' operator on the right-hand side: string * result = index in this;
> Ali

Ali, I am grateful for your help but it's over my head... at least at present. Thanks.
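For readers more comfortable on the Python side, the pattern Ali describes in D (opBinaryRight!"in" enabling `myIndex in myTable`, plus opIndex enabling `myTable[myIndex]`) maps onto Python's `__contains__` and `__getitem__`. A hypothetical sketch, not the original D code:

```python
class MyTable:
    """A table keyed by a composite index, mimicking the D example."""

    def __init__(self):
        self._data = {}

    def __setitem__(self, index, value):
        self._data[index] = value

    def __getitem__(self, index):
        # Enables 'value = my_table[my_index]' (D's opIndex).
        return self._data[index]

    def __contains__(self, index):
        # Enables 'my_index in my_table' (D's opBinaryRight!"in").
        return index in self._data


table = MyTable()
table[("row", 3)] = "hello"
found = ("row", 3) in table
```

In D the `in` operator returns a pointer (null when absent) rather than a bool, which is why the original code stored the result in a `string*`.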
Re: RTP/RTCP in D?
On 05-Sep-12 18:33, M.G.Meier wrote:
> Hi all, is there a convenient way (bindings?) to use RTP/RTCP protocols from within a D project? I'm developing a client/server application where only tiny bursts of data (i.e. messages) have to be exchanged between srv and clt, and TCP sockets don't do the trick ;_;

AFAIK RTP doesn't handle messages at all. It's about getting real-time streams with proper QoS. So I'd say it's plain unusable for short-burst messaging.

> Or is there a better, but whole different approach to this than the RTP protocol family?

I'd look at plain UDP datagrams; it's as fast as it gets in sending messages. The advantage is that you only ever get a whole message (no pieces) and there's no overhead compared to TCP (connection state, buffering, etc.). But it's very simple and doesn't check for lost packets (messages). If you need reliability there is RUDP, aka the reliable datagram protocol, though I don't think it's supported on all OSes.

> Thx 4 answering!

No problem ;)

--
Dmitry Olshansky
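The "whole message, no pieces" property of UDP datagrams is easy to see in a loopback sketch (shown in Python for brevity; the D socket API is analogous):

```python
import socket

# Server side: bind a UDP socket; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]

# Client side: no connection setup; just fire a datagram at the server.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"tiny burst of data", ("127.0.0.1", port))

# Each recvfrom() yields exactly one complete datagram, never a fragment
# of one, which is the message-boundary guarantee TCP lacks.
message, addr = server.recvfrom(4096)

client.close()
server.close()
```

What the sketch cannot show is the caveat from the post: over a real network the datagram may simply never arrive, so any reliability (acks, retries) has to be layered on top.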