Re: [sage-combinat-devel] Dear Poset Users, tell me what you like
Would you mind telling me what huge means ? It does make a difference when one writes the code.

Probably with hundreds of vertices.

That's huge ? Okay I see. I'm glad I asked. So it's not larger than 65536 :-P

This was only for you to see what the graphs look like :-)

Well, ...

    sage: P = Poset(([1,2,3],[[1,2]]))
    sage: L = P.linear_extensions()
    sage: G = L.markov_chain_digraph()
    sage: view(G)
    ...
    NotImplementedError: it is not possible create a tkz-graph version of a graph with multiple edges

By the way it would be cool if somebody could review #14953 as it could be useful to you too. As well as #14589 because that's what I think should be used to store the transitive closure of your posets and make the comparisons faster.

Nathann

-- You received this message because you are subscribed to the Google Groups sage-combinat-devel group. To unsubscribe from this group and stop receiving emails from it, send an email to sage-combinat-devel+unsubscr...@googlegroups.com. To post to this group, send email to sage-combinat-devel@googlegroups.com. Visit this group at http://groups.google.com/group/sage-combinat-devel. For more options, visit https://groups.google.com/groups/opt_out.
Re: [sage-combinat-devel] Dear Poset Users, tell me what you like
Y !!

I certainly can recall dealing with lots and lots of small posets (typical for algebraic combinatorics: your combinatorial objects are small, but you are often considering all of them at once because you are talking formal linear combinations of them and likewise). I also have dealt with medium-size posets (30 to 40 vertices), but in those cases I don't think the comparisons were much of a bottleneck. I don't recall ever dealing with huge posets (as in thousands of vertices), but I can imagine myself doing that every once in a while.

On another note: I remember the __init__ of Poset (well, FinitePoset) being way slower than it reasonably should be. I think this is related to it ducktyping the input (which IMHO is a bad thing anyway but seems standard in Sage). It would be nice to have quicker ways to initialize a poset from an already sanitized/predigested input datastructure.

This all said, I'd very much like to see things not getting slower in the less-used regime while the more popular regime gets optimized.

I see. Thank you very much for this informative answer ! This duck typing I think is standard in Sage-combinat, but not in the other parts of Sage that I know (which may be very few, admittedly). I also think that this is where most of your resources go when you use Posets, and that there should be a way to disable it. For some operations in graphs most of the time is spent dealing with labels, especially when they are complicated and hard-to-hash things.

Let me withdraw what I said above about code getting slower at first : it turns out that the code I thought would have to be re-written for the new immutable backend already works fine with this backend, so using this new backend should only make things faster. Rewriting this code for this specific backend should be even better.
Right now, what I wonder is the following : #14589, when it will be reviewed (please, help me !), can be used to store a dense digraph efficiently in memory, and in this context the transitive closure of your Poset. Which means that comparing two elements will just amount to checking that a bit is set to 1, which is *FAST*. Most of the time should be spent translating the vertex's label to an integer, but well, it should be fast.

The question is : when should this transitive closure be computed ? Should it be computed when the first comparison is requested ? I guess that this would make this ._transitive_closure a lazy attribute of Posets. Or should it be computed when the user explicitly asks for it ? It would mean that a keyword could be added to the Poset constructor, somehow asking the user which data structure should be used to store it : only the Hasse diagram, or the transitive closure too ?

I wonder. It may be better to compute it immediately when a comparison is tried, but that would mean a huge slowdown if you build a lot of posets, compare two elements and throw them away. It should be much faster if you compare a lot of things on the same poset.

What do you think ?

Have fuunnn !!

Nathann
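To make the bit-test idea concrete, here is a minimal pure-Python sketch (this is not the #14589 backend, and none of these names exist in Sage): each row of the reflexive-transitive closure is stored as a Python integer used as a bitset, so a comparison is a single shift-and-mask.

```python
def closure_bitrows(n, cover_relations):
    """Rows of the reflexive-transitive closure of a DAG on vertices 0..n-1,
    each row packed into an integer bitset (bit j of rows[i] means i <= j)."""
    rows = [1 << i for i in range(n)]        # reflexivity: bit i set in row i
    for a, b in cover_relations:             # cover relations of the Hasse diagram
        rows[a] |= 1 << b
    for k in range(n):                       # Warshall's algorithm on bit rows
        for i in range(n):
            if rows[i] >> k & 1:
                rows[i] |= rows[k]
    return rows

def is_lequal(rows, a, b):
    """Comparing two elements amounts to checking that one bit is set to 1."""
    return bool(rows[a] >> b & 1)
```

For the chain 0 < 1 < 2, `closure_bitrows(3, [(0, 1), (1, 2)])` makes `is_lequal(rows, 0, 2)` true; computing `rows` on the first comparison rather than in the constructor would be the lazy-attribute variant discussed above.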
Re: [sage-combinat-devel] Dear Poset Users, tell me what you like
Hmmm... Looks like there is already something like that in .hasse_diagram, in the _leq_matrix() method :

    def _leq_matrix(self):
        ...
        # Redefine self.is_lequal
        self.is_lequal = self._alternate_is_lequal
        ...

Though this matrix is defined to be a sparse matrix over ZZ. Don't know how much faster comparisons can be with a real binary matrix. Right now, it looks like it is created when the user asks for the matrix, or when he tests whether two elements are comparable/incomparable. So it's not certain that somebody who would like to compute many comparisons would end up creating this matrix.

Hmmm... Wonder what should be done there... O_o

Nathann

On 31 December 2013 11:29, Nathann Cohen nathann.co...@gmail.com wrote: [earlier message quoted in full]
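The `self.is_lequal = self._alternate_is_lequal` trick above can be sketched in plain Python (a toy class for illustration, not Sage's HasseDiagram): the first comparison builds a boolean matrix and rebinds the method on the instance, so every later comparison is a plain table lookup.

```python
class TinyPoset:
    """Toy poset: lazy less-or-equal matrix plus instance-level method rebinding."""

    def __init__(self, elements, cover_relations):
        self._elements = list(elements)
        self._index = {v: i for i, v in enumerate(self._elements)}
        self._covers = list(cover_relations)
        self._leq = None                      # built lazily, on first comparison

    def is_lequal(self, a, b):
        self._build_leq_matrix()
        # Shadow the class method on this instance: later calls skip the check.
        self.is_lequal = self._alternate_is_lequal
        return self._alternate_is_lequal(a, b)

    def _alternate_is_lequal(self, a, b):
        return self._leq[self._index[a]][self._index[b]]

    def _build_leq_matrix(self):
        if self._leq is not None:
            return
        n = len(self._elements)
        leq = [[False] * n for _ in range(n)]
        for i in range(n):
            leq[i][i] = True                  # reflexivity
        for a, b in self._covers:
            leq[self._index[a]][self._index[b]] = True
        for k in range(n):                    # Warshall-style transitive closure
            for i in range(n):
                if leq[i][k]:
                    for j in range(n):
                        if leq[k][j]:
                            leq[i][j] = True
        self._leq = leq
```

Whether the matrix should be built eagerly, lazily as here, or only on explicit request is exactly the trade-off discussed in the previous message.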
[sage-devel] Conversions without mathematical sense between polynomial rings
Dear all,

Currently one can obtain surprising results in Sage when converting polynomials over finite fields (or elements of quotient rings of univariate polynomial rings, even though that's not the primary concern of the ticket). See http://trac.sagemath.org/ticket/11239

Basically the generators of the finite fields are just exchanged, even when the characteristics are different. The changes suggested in this ticket would more or less only let a coercion be used if there is one, and rant if the characteristics are different or the base finite fields are not part of a common lattice when there is no canonical embedding.

Note though that the original disturbing conversion fits with the current behavior described in the doc:

* http://www.sagemath.org/doc/reference/coercion/index.html#maps-between-parents

where it's stated that Conversions need not be canonical (they may for example involve a choice of lift) or even make sense mathematically (e.g. constructions of some kind).

Does anyone have a strong opinion about what we should let Sage do in such a situation? Should we leave the old conversion when there is no coercion even though that might easily lead to wrong mathematical results for a careless user? Or never look for a coercion unless the user explicitly asks for it (i.e. what the situation is currently in Sage without the patches of the ticket)?

Best, JP

-- You received this message because you are subscribed to the Google Groups sage-devel group. To unsubscribe from this group and stop receiving emails from it, send an email to sage-devel+unsubscr...@googlegroups.com. To post to this group, send email to sage-devel@googlegroups.com. Visit this group at http://groups.google.com/group/sage-devel. For more options, visit https://groups.google.com/groups/opt_out.
[sage-devel] Re: Conversions without mathematical sense between polynomial rings
Jean-Pierre Flori wrote: Currently one can obtain surprising results in Sage when converting polynomials over finite fields. See http://trac.sagemath.org/ticket/11239 [...] Anyone has a strong opinion about what we should let Sage do in such a situation?

I posted a comment at #11239, so let me just say here that I think this principle (that conversions need not be canonical) shouldn't be pushed further than reasonable. For example, if K is a field and f, g in K[x] are polynomials, then applying a conversion K[x]/(f) -> K[x]/(g) is a sensible thing to do if g divides f (the canonical map) or if f divides g (the lifting map). However, I don't see why it is important to insist that the composition of two such maps (e.g. K[x]/(f) -> K[x] -> K[x]/(g) when f, g have nothing to do with each other) should again be a valid conversion that can be invoked with the same ease. Hence I think that R(h), for R = K[x]/(g) and h in L[x]/(f), should only be allowed when K has a canonical map to L and either f divides g (in L[x]) or vice versa.
This should cover almost all practical uses, and in the rare remaining cases it seems better to require being a bit more explicit and doing the conversion in two steps.

Should we leave the old conversion when there is no coercion even though that might easily lead to wrong mathematical results for a careless user? Or never look for a coercion unless the user explicitly asks for it (i.e. what the situation is currently in Sage without the patches of the ticket)?

I thought that conversion should always give the same result as a coercion map whenever one exists; is there any (intended) situation where this is not the case?

Peter
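Peter's divisibility rule can be modeled in a few lines of pure Python (coefficient lists over Q via fractions; all names here are hypothetical, and this only sketches the rule, not Sage's coercion machinery): a conversion K[x]/(f) -> K[x]/(g) is performed only when g divides f (reduce the representative) or f divides g (keep it as a lift), and refused otherwise.

```python
from fractions import Fraction

def polydivmod(a, b):
    """Quotient and remainder of a by b over Q; coefficient lists, lowest
    degree first; b must have a nonzero leading coefficient."""
    r = [Fraction(c) for c in a]
    q = [Fraction(0)] * max(len(r) - len(b) + 1, 0)
    while True:
        while r and r[-1] == 0:              # strip trailing zero coefficients
            r.pop()
        if len(r) < len(b):
            return q, r
        shift = len(r) - len(b)
        c = r[-1] / Fraction(b[-1])
        q[shift] = c
        for i, bc in enumerate(b):           # subtract c * x^shift * b from r
            r[i + shift] -= c * Fraction(bc)

def quotient_convert(h, f, g):
    """Map h in Q[x]/(f) to Q[x]/(g) only when the map is sensible."""
    if not any(polydivmod(f, g)[1]):         # g | f: canonical reduction mod g
        return polydivmod(h, g)[1]
    if not any(polydivmod(g, f)[1]):         # f | g: lift the representative
        return [Fraction(c) for c in h]
    raise ValueError("no sensible conversion between these quotient rings")
```

With f = x^2 - 1 and g = x - 1 the element x reduces to 1; with unrelated moduli the conversion raises instead of silently exchanging generators.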
[sage-devel] Re: Conversions without mathematical sense between polynomial rings
Hi Peter,

On 2013-12-31, Peter Bruin pjbr...@gmail.com wrote: I posted a comment at #11239,

Then perhaps I should answer there, but anyway, here it goes. Warning: I am partially playing advocatus diaboli here.

so let me just say here that I think this principle (that conversions need not be canonical) shouldn't be pushed further than reasonable.

Yes, but for a rather relaxed notion of reasonable. For example, you couldn't even say that you have a conversion from K[x]/(f) to K[x]/(g), since a conversion is not a map. It is a *partial* map, hence it often is the case that only polynomials of degree zero can be converted and higher degree polynomials result in an error. Think of QQ[x] -> ZZ:

    sage: a = QQ['x'](1)
    sage: ZZ(a)
    1

but

    sage: a = QQ['x'](1/2)
    sage: ZZ(a)
    boom

For example, if K is a field and f, g in K[x] are polynomials, then applying a conversion K[x]/(f) -> K[x]/(g) is a sensible thing to do if g divides f (the canonical map) or if f divides g (the lifting map). However, I don't see why it is important to insist that the composition of two such maps (e.g. K[x]/(f) -> K[x] -> K[x]/(g) when f, g have nothing to do with each other) should again be a valid conversion that can be invoked with the same ease.

Hang on. Nobody says that the composition of two conversions A -> B and B -> C yields a conversion A -> C (even if we forget for the moment that conversions are no more than partially defined maps)! Composability is an axiom for *coercions*, but not for *conversions*. Actually conversions have no axioms worth mentioning. Hence, it would be perfectly fine to have K[x]/(f) -> K[x] and K[x] -> K[x]/(g) for *all* elements of the involved rings, but no conversion K[x]/(f) -> K[x]/(g), or actually: only a conversion for polynomials of degree zero.

That said, do you really think one should do more or less expensive operations (such as: compute a gcd or a prime factorisation) to verify that an easy conversion rule makes sense in a particular case?
You say it should not be pushed further than reasonable. Is it reasonable to have an expensive test to decide whether or not to apply an easy conversion rule? Recall: Coercions should obey axioms and thus expensive tests may be needed. But why should one have expensive tests for something that has no defined properties? How *could* you possibly test something that has no defined properties?

Hence I think that R(h), for R = K[x]/(g) and h in L[x]/(f), should only be allowed when K has a canonical map to L and either f divides g (in L[x]) or vice versa.

Do you claim we should drop conversion entirely? After all, we also have a *coercion* system, and coercions (as opposed to conversions) are supposed to be mathematically consistent. Do you think this would be reasonable?

Should we leave the old conversion when there is no coercion even though that might easily lead to wrong mathematical results for a careless user? Or never look for a coercion unless the user explicitly asks for it (i.e. what the situation is currently in Sage without the patches of the ticket)?

I thought that conversion should always give the same result as a coercion map whenever one exists; is there any (intended) situation where this is not the case?

If there is a coercion map then conversion must give the same result. Everything else would be a bug.

Best regards, Simon
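Simon's point that a conversion is only a *partially* defined map can be mimicked in a couple of lines of plain Python (a toy analogue of the ZZ(QQ['x'](...)) example above, not Sage code; the function name is made up):

```python
from fractions import Fraction

def rational_to_int(a):
    """Partial map Q -> Z: defined on integers, an error elsewhere,
    mirroring how ZZ(QQ['x'](1)) works but ZZ(QQ['x'](1/2)) goes boom."""
    if a.denominator != 1:
        raise TypeError("%s is not an integer" % a)
    return a.numerator
```

The map has no axioms to satisfy: it simply succeeds where it can and raises where it cannot, which is all a conversion promises.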
[sage-devel] Re: Conversions without mathematical sense between polynomial rings
Hi Simon,

Warning: I am partially playing advocatus diaboli here.

That can be very useful!

so let me just say here that I think this principle (that conversions need not be canonical) shouldn't be pushed further than reasonable.

Yes, but for a rather relaxed notion of reasonable. For example, you couldn't even say that you have a conversion from K[x]/(f) to K[x]/(g), since a conversion is not a map. It is a *partial* map, hence it often is the case that only polynomials of degree zero can be converted and higher degree polynomials result in an error.

Sure, that still counts as reasonable for me.

For example, if K is a field and f, g in K[x] are polynomials, then applying a conversion K[x]/(f) -> K[x]/(g) is a sensible thing to do if g divides f (the canonical map) or if f divides g (the lifting map). However, I don't see why it is important to insist that the composition of two such maps (e.g. K[x]/(f) -> K[x] -> K[x]/(g) when f, g have nothing to do with each other) should again be a valid conversion that can be invoked with the same ease.

Hang on. Nobody says that the composition of two conversions A -> B and B -> C yields a conversion A -> C (even if we forget for the moment that conversions are no more than partially defined maps)! Composability is an axiom for *coercions*, but not for *conversions*. Actually conversions have no axioms worth mentioning.

Yes, I shouldn't have said insist above, maybe suggest. I don't know if the fact that we have maps K[x]/(f) -> K[x]/(g) for arbitrary f, g (or Z/mZ -> Z/nZ for arbitrary m, n) was intended or just a side effect of not doing any checks and just doing conversions if the implementation allows it. However, I'm really not convinced that it is easy for the average user to avoid the idea that such conversions apparently have some mathematical meaning, just because Sage makes it so easy to do these conversions.
Hence, it would be perfectly fine to have K[x]/(f) -> K[x] and K[x] -> K[x]/(g) for *all* elements of the involved rings, but no conversion K[x]/(f) -> K[x]/(g), or actually: only a conversion for polynomials of degree zero.

Yes, that is more or less what I wanted to say.

That said, do you really think one should do more or less expensive operations (such as: compute a gcd or a prime factorisation) to verify that an easy conversion rule makes sense in a particular case?

I don't know of an example where one has to do such expensive things; in the above examples, it is just a divisibility check. In the examples of K[x]/(f) and Z/mZ, this divisibility check does look like a good sanity check to me.

You say it should not be pushed further than reasonable. Is it reasonable to have an expensive test to decide whether or not to apply an easy conversion rule? Recall: Coercions should obey axioms and thus expensive tests may be needed. But why should one have expensive tests for something that has no defined properties? How *could* you possibly test something that has no defined properties?

It depends on how expensive the test is. If I may play the devil's advocate for a moment: why would you want to *use* something that has no defined properties?

Hence I think that R(h), for R = K[x]/(g) and h in L[x]/(f), should only be allowed when K has a canonical map to L and either f divides g (in L[x]) or vice versa.

Do you claim we should drop conversion entirely? After all, we also have a *coercion* system, and coercions (as opposed to conversions) are supposed to be mathematically consistent. Do you think this would be reasonable?

No, I am certainly not suggesting that conversion should go away. Using conversion to find preimages in the maps Z -> Z/mZ or K[x] -> K[x]/(f) is something that makes a lot of sense.
But when it comes to being maximally permissive in allowing conversions (allowing for example direct conversions Z/3Z -> Z/2Z): despite some arguments for it (less checking, allowing people who really want to do that to save a few keystrokes) I don't see why it is a good thing.

If there is a coercion map then conversion must give the same result. Everything else would be a bug.

At least there is no misunderstanding about that!

Best regards, Peter
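For the Z/mZ case, the divisibility-guarded conversion Peter has in mind fits in a few lines (a hypothetical pure-Python sketch, not Sage's actual behavior, which currently allows e.g. Z/3Z -> Z/2Z unconditionally):

```python
def zmod_convert(a, m, n):
    """Send a (mod m) to Z/nZ only when the map is sensible: the canonical
    reduction if n divides m, or a lift of the representative if m divides n."""
    if m % n == 0 or n % m == 0:
        return a % n
    raise ValueError("no sensible conversion Z/%dZ -> Z/%dZ" % (m, n))
```

So zmod_convert(5, 6, 3) reduces to 2, zmod_convert(2, 3, 6) lifts to 2, and a direct Z/3Z -> Z/2Z attempt raises instead of silently reducing.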
[sage-devel] Re: problems with testing sandpiles.py on 6.1b2
Yes, agree that this is slightly different. I also agree that it would be much better if the pexpect interfaces had been written against a non-echoing tty. Of course that'll only work if the subprocess doesn't turn echoing back on. The downside is of course that you have to rewrite the way the pexpect interface works. The alternative is to get rid of pexpect altogether, and patch the subprocess input to instead read from a socket (and flush on command). I did some work in that direction but haven't had the time to finish it...

On Monday, December 30, 2013 7:09:19 PM UTC, Nils Bruin wrote:

On Monday, December 30, 2013 9:09:38 AM UTC-8, Volker Braun wrote: That is exactly what I am talking about: The pexpect pattern must match at a point where the subprocess has stopped processing and is just waiting for further input. But if you match for a prompt, say, and then the subprocess spits out another space or return before it waits for input then the additional characters will end up in random places in the output stream.

I still have trouble fitting that diagnosis to what I observe:

Parent:
    P1: write(26, print(sage698);, 15) = 15
    P2: select(0, NULL, NULL, NULL, {0, 0}) = 0 (Timeout)
    P3: write(26, \n, 1) = 1

Child:
    C1: read(0, print(sage698);\n, 1024) = 16
    C2: write(1, x2^2-x0^2, 9) = 9
    C3: write(1, \n, 1) = 1
    C4: write(1, , 2) = 2

Parent:
    P4: read(26, print(sage698);x2^2-x0^2\r\n \r\n, 1000) = 30

The fact that C1 ends in a newline suggests that both P1 and P3 have completed before C1 executes. However, P4 suggests that it is seeing the result of: echo of P1, C2, C3, C4, echo of P3. That looks more like a race condition between the pty echo and the output produced by the child. I don't see how pexpect could avoid this from happening (and it's rare indeed). Actually, it seems to me that the pty is being a bit careless: shouldn't it make sure that it's done echoing before it actually delivers the character to the process?
There is no such guarantee, pexpect is akin to typing into a terminal. If echo is on then the typed key shows up immediately and in-between stdout.

Hm, I've never seen a key I typed end up being echoed *after* the next prompt had printed, and I thought this was because the prompt was printed only *after* the input was received, hence *after* the echo had completed. That is not quite the same scenario as two processes writing to the same stream in an uncoordinated way. Possibly, if we merge P1 and P3 we can significantly reduce the probability of this happening, since then P1+P3 will arrive together at the pty, so it has a better chance of echoing them both. Alternatively, if we get rid of the echo altogether, then there is nothing to have a race condition with, so this particular issue might go away completely. On #15440 it also seems to be an *echoed* space that gets in the way, so it might help there too.

Preliminary tests suggest:

    sage: import pexpect
    sage: child = pexpect.spawn("maxima")
    sage: child.expect("(%i.*) ")
    0
    sage: child.sendline("1+2;")
    5
    sage: child.readline()
    '1+2;\r\n'
    sage: child.readline()
    '(%o1) 3\r\n'

whereas with echo turned off:

    sage: import pexpect
    sage: child = pexpect.spawn("maxima")
    sage: child.setecho(0)
    sage: child.expect("(%i.*) ")
    0
    sage: child.sendline("1+2;")
    5
    sage: child.readline()
    '(%o1) 3\r\n'
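The "read from a socket" alternative mentioned earlier can be sketched with a socketpair in place of a pty (a minimal Unix-only illustration using a stand-in python3 child, not the actual Sage interface code): since no tty is involved, there is no echo to race with the child's output.

```python
import socket
import subprocess

# Child and parent share a socketpair instead of a pty: no echo, no race.
parent_sock, child_sock = socket.socketpair()
proc = subprocess.Popen(
    ["python3", "-c", "import sys; print(eval(sys.stdin.readline()))"],
    stdin=child_sock.fileno(),
    stdout=child_sock.fileno(),
)
child_sock.close()                 # parent keeps only its own end

parent_sock.sendall(b"1+2\n")      # the command is *not* echoed back
reply = parent_sock.recv(1024)     # only the child's output arrives
proc.wait()
parent_sock.close()
```

Here the parent reads back just the child's answer, never an echoed copy of its own command; with a pty and echo on, the same exchange can interleave the echo with the output, as in the strace above.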
[sage-devel] Re: Conversions without mathematical sense between polynomial rings
I sometimes illustrate techniques of polynomial factorization in a class by starting with a polynomial in ZZ[x] and converting it to a polynomial in GF(insert_large_prime)[x]. I realize this is not an instance of two _finite_ fields, but (a) you mention the characteristics are different, and (b) _are_ there similar applications for finite fields, when the characteristics are different? Or am I misunderstanding the proposal?

john perry

On Tuesday, December 31, 2013 6:59:03 AM UTC-5, Jean-Pierre Flori wrote: [original message quoted in full]
[sage-devel] Re: problems with testing sandpiles.py on 6.1b2
On Tuesday, December 31, 2013 8:23:35 PM UTC-8, Volker Braun wrote: I also agree that it would be much better if the pexpect interfaces would have been written against non-echoing tty. Of course that'll only work if the subprocess doesn't turn echoing back on. The downside is of course that you have to rewrite the way the pexpect interface works.

I tried: With maxima it's easy to insert a self._pexpect.setecho(0) in the _start method. The echo line is explicitly read by a .readline() and deactivating that is enough to get doctests to pass. For other interfaces (especially singular) it's not so easy: the standard eval in interfaces/pexpect returns the string between the first \n and the last \r, so it actually uses the echoed \n to delineate the returned result. Making that work without echo requires a little more work than I tried. It does seem that at least singular, maxima, and gp don't turn echoing back on. Gap does seem to turn it back on.