One way to speed it up might be to work with RDF or QQ or RLF. The 
documentation for the generic determinant method says, "Note that for 
matrices over most rings, more sophisticated algorithms can be used." 

sage: %time ones_matrix(RDF, 600, 600).determinant()
CPU times: user 78.6 ms, sys: 7.6 ms, total: 86.2 ms
Wall time: 77.9 ms
0.0
sage: %time ones_matrix(QQ, 600, 600).determinant()
CPU times: user 280 ms, sys: 36.3 ms, total: 317 ms
Wall time: 325 ms
0
sage: %time ones_matrix(RLF, 600, 600).determinant()
CPU times: user 32.7 s, sys: 183 ms, total: 32.9 s
Wall time: 40.3 s
0
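
Since the matrices in the benchmark below are built over RR, another option
is to convert an existing matrix with change_ring before taking the
determinant, so the specialized double-precision matrix class is used
instead of the generic algorithm. A minimal sketch (assuming double
precision is acceptable for the application; the 100x100 size is taken from
the question):

from sage.all import RDF, RR, ones_matrix

# Matrix constructed over RR, as in the benchmark script below.
A = ones_matrix(RR, 100, 100)

# Converting to machine doubles first avoids Matrix_generic_dense; the
# determinant is then computed by the double-precision matrix class.
d = A.change_ring(RDF).determinant()

The trade-off is numerical: over RDF the result is a floating-point
approximation, so if exact values are needed, QQ (timed above) keeps the
determinant exact while still being far faster than the generic path over RR.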


On Thursday, January 5, 2023 at 1:26:12 AM UTC-8 ivana...@gmail.com wrote:

> I am doing a numerical analysis that needs to calculate many determinants 
> of "huge" dense matrices. The matrix sizes are less than 100*100.
>
> I found that computing the determinant is quite slow with 
> Matrix_generic_dense. I followed the method from "What is the time 
> complexity of numpy.linalg.det?" 
> <https://stackoverflow.com/questions/72206172/what-is-the-time-complexity-of-numpy-linalg-det>
>  
> and made minor modifications to the script to benchmark SageMath.
> 1. np.arange(1, 10001, 100) => np.arange(1, 292, 10): the computation is 
> so slow that I had to reduce the testing range to make the script finish 
> within a reasonable time.
> 2. np.ones((size, size))  => ones_matrix(RR, size, size)
> 3. timeit('np.linalg.det(A)', globals={'np':np, 'A':A}, number=1) => 
> timeit('A.det(algorithm="df")', globals={'A': A}, number=1)
>
> Here is the full script I used.
>
> from timeit import timeit
>
> import matplotlib.pyplot as plt
> import numpy as np
> from sage.all import ones_matrix  # needed when run outside the Sage REPL
> from sage.rings.real_mpfr import RR
> from sklearn.linear_model import LinearRegression
> from sklearn.preprocessing import PolynomialFeatures
>
> sizes = np.arange(1, 292, 10)
> times = []
>
> for size in sizes:
>     A = ones_matrix(RR, size, size)
>     time = timeit('A.det(algorithm="df")', globals={'A': A}, number=1)
>     times.append(time)
>     print(size, time)
>
> sizes = sizes.reshape(-1, 1)
> times = np.array(times).reshape(-1, 1)
>
> quad_sizes = PolynomialFeatures(degree=2).fit_transform(sizes)
> quad_times = LinearRegression().fit(quad_sizes, times).predict(quad_sizes)
> cubic_sizes = PolynomialFeatures(degree=3).fit_transform(sizes)
> cubic_times = LinearRegression().fit(cubic_sizes, times).predict(cubic_sizes)
> quartic_sizes = PolynomialFeatures(degree=4).fit_transform(sizes)
> quartic_times = LinearRegression().fit(quartic_sizes,
>                                        times).predict(quartic_sizes)
>
> plt.scatter(sizes, times, label='Data', color='k', alpha=0.5)
> plt.plot(sizes, quad_times, label='Quadratic', color='r')
> plt.plot(sizes, cubic_times, label='Cubic', color='g')
> plt.plot(sizes, quartic_times, label='Quartic', color='b')
> plt.xlabel('Matrix Dimension n')
> plt.ylabel('Time (seconds)')
> plt.legend()
> plt.show()
>
> This is the time complexity diagram from the linked post, showing the data 
> from testing with NumPy. It only took about 20 seconds to compute the 
> determinant of a 10000*10000 matrix.
> [image: numpy.png]
> This is the test with matrix(RR) using the "df" algorithm.
> [image: df.png]
> I also tested with the "hessenberg" algorithm, which was even worse.
> [image: df.png]
> Compared to NumPy, SageMath spent 800 seconds computing the determinant of 
> a 300*300 matrix. It seems that SageMath still runs an O(n^3) algorithm 
> (the curves of the cubic and quartic models overlap) but with a large 
> constant. Why is SageMath so slow? Is it possible to speed it up?
>
>
