>> * the blue line for interact (the server is computing) is not very
>> intuitive for me; it took me a while to realize
>> that this is what it means
>
>
> Originally we put it there since we can have nested interacts, and the blue
> line helps visually separate the nesting. The other day I thought that it
> was confusing to see that blue line, so we'll probably remove it and try
> something different to see nested interacts.
>
> https://github.com/sagemath/sagecell/issues/254
Ah --- so I got it wrong. I thought that the blue line meant that the
server was computing.
I didn't realize that it never actually disappears and is there all the time.
>
>
>>
>> * sometimes it seems slow; e.g., this script:
>>
>> from sympy import var
>> var("x")
>> print x-x
>> print x**2
>>
>> takes 5 seconds to evaluate. If you try the same thing here:
>> http://live.sympy.org/,
>> it is immediate.
>
>
> Can you explain the architecture behind live.sympy.org?
I think that currently it just evaluates the expressions in the given
namespace and stores (pickles) the results in a database.
But I have also experimented with a separate GAE account
running the engine, which only evaluates the expression
and is connected to another GAE account that serves the pages.
I didn't see any big slowdown.
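The setup described above --- evaluating expressions in a persistent namespace and pickling the results so a later request can pick up where the last one left off --- can be sketched in a few lines. This is a hypothetical sketch, assuming that architecture; the names (`evaluate`, `store`) are mine and do not come from live.sympy.org's actual code:

```python
import pickle

def evaluate(statements, namespace, store):
    """Exec each statement in a persistent namespace, then pickle the
    resulting bindings into `store` so a later request can restore them.
    Hypothetical sketch of the architecture described above."""
    for stmt in statements:
        exec(stmt, namespace)
    for name, value in namespace.items():
        if name.startswith("__"):
            continue  # skip __builtins__ and other dunder entries
        try:
            store[name] = pickle.dumps(value)
        except (pickle.PicklingError, TypeError):
            pass  # skip unpicklable objects (modules, open files, ...)
    return store

# Usage: the session's bindings survive between requests via the store.
store = evaluate(["x = 2", "y = x ** 10"], {}, {})
restored = {k: pickle.loads(v) for k, v in store.items()}
print(restored["y"])  # 1024
```

The design point is that each request is stateless on the server side: everything a follow-up computation needs is unpickled from the database first.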
>
> The slowdown you are experiencing is (I think) because importing anything
> from sympy is sometimes horribly slow. For example, your first line takes
> my laptop 3 seconds or so on a local copy of sage the first time I tried it.
> Do you know of a way that can be improved? Do you know of a way we can
> "preload" this import without messing up the global Sage namespace?
So there must be something wrong with SymPy in Sage. With the
latest sympy git master, we have a nice script for measuring the import time:
$ bin/test_import
Note: the first run (warm up) was not included in the average + std dev
All runs (including warm up):
[2.40106582642, 0.221256017685, 0.2110009193419,
0.2103259563451, 0.2091488838199, 0.2195649147029,
0.210291862488, 0.2112379074101, 0.2108719348911,
0.2103409767149, 0.2111039161681, 0.2101390361791,
0.2114388942719, 0.211920976639, 0.2224760055539,
0.2095551490779, 0.2114191055301, 0.2248861789699,
0.2091879844669, 0.2151539325709, 0.2185208797451,
0.2102448940281, 0.209927082062, 0.2118740081791,
0.2113790512079, 0.2176508903499, 0.21023106575,
0.2108440399169, 0.2192609310149, 0.218972921371,
0.2282600402831, 0.211175918579, 0.2191729545589,
0.2290639877321, 0.2192230224611, 0.211779117584,
0.213361978531, 0.241024017334, 0.2144789695741, 0.219763040543,
0.210104942322, 0.212917089462, 0.2196609973911,
0.2263648509979, 0.215436935425, 0.2111830711359,
0.2220108509061, 0.210973024368, 0.2102708816531,
0.2176859378809, 0.2101359367371]
Number of tests: 50
The speed of "import sympy" is: 0.215285 +- 0.006493
This is on my slower laptop --- so it is not ideal, but OK. However,
the first run did take a long time. Note that the above always runs
the following script from scratch, in a fresh Python process, via pexpect:
from timeit import default_timer as clock
from get_sympy import path_hack
path_hack()
t = clock()
import sympy
t = clock()-t
print t
So I assume that the Linux kernel caches the files that are read from
the disk. Python itself cannot cache anything, as it
is always run from scratch.
If you run Sage, import sympy, close Sage, then run Sage again and
import sympy, does it always take ~3 s?
>
>
>
>>
>> * the interact could update immediately when dragging the spinner --
>> assuming the round trip
>> can be brought to around 1s.
>
>
> That would be really nice. It would be especially nice if we used something
> like long polling or GAE channels or socket.io, where the communication
> didn't require multiple requests hammering the server.
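Besides long polling or channels, a complementary client-side technique for the "update while dragging" case is to throttle the updates so at most one request per interval reaches the server. A minimal sketch (the `Throttle` class and its names are hypothetical, not part of the cell server; a real implementation would live in JavaScript on the client):

```python
import time

class Throttle:
    """Forward at most one update per `interval` seconds; intermediate
    values produced while dragging are simply dropped, so the server is
    not hammered with one request per pixel of spinner movement."""
    def __init__(self, send, interval=0.25):
        self.send = send          # callable that performs the real request
        self.interval = interval
        self.last = float("-inf") # time of the last forwarded update

    def update(self, value):
        now = time.monotonic()
        if now - self.last >= self.interval:
            self.last = now
            self.send(value)

# Simulate a fast drag: 100 events arrive with essentially no delay,
# so only the first one within the interval is forwarded.
sent = []
t = Throttle(sent.append, interval=0.25)
for v in range(100):
    t.update(v)
print(sent)  # [0]
```

A production version would also schedule one trailing update so the final spinner position is always sent, but the core idea is the same.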
>
>
>
>>
>> * I tried many of the examples from here:
>> http://wiki.sagemath.org/interact (I know
>> I can use the Sage notebook, but I always forget the password and so
>> on; this
>> cell server is much better for experimenting)
>
>
> +1. These days, I'm opening the cell server much more often than the sage
> notebook for quick one-off calculations. In fact, usually I use bc to do
> small arithmetic, but I find myself using the cell server more for that
> these days.
For my workflow, the cell server is the right architecture. Often I
just create a script in /tmp/a.py and do some quick scripting rather
than running IPython, since I really want to write the whole script,
as opposed to just individual commands. If it is a more
permanent script, I keep it in my computation directory
and just modify it a little each time.
I also need to access files on my computer (to plot stuff for example),
so I guess eventually it would be nice to be able to easily
upload a file and access it from the single cell.
Another idea is to allow