And you can't get much more detail with hypre because it does not record
performance data. Or can you get hypre to print its performance data?

ML uses more PETSc stuff, so you can get the PtAP time, which is most of
the matrix setup.

GAMG is native and has more timers. In addition to PtAP there is P
On 31/08/16 16:46, Mark Adams wrote:
> And you can't get much more detail with hypre because it does not
> record performance data. Or can you get hypre to print its performance
> data?
-pc_hypre_boomeramg_print_statistics -pc_hypre_boomeramg_print_debug

should give you some info; not sure it's
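For concreteness, the two options above would be passed on the command line like any other PETSc option. This is a hypothetical invocation: the binary name and the other solver options are placeholders, not from this thread; only the two hypre print flags are from the message above.

```shell
# Hypothetical run; "./myapp" stands in for your own PETSc application.
# The two -pc_hypre_boomeramg_print_* flags ask hypre itself to emit
# setup/solve statistics, since -log_view cannot see inside hypre.
mpiexec -n 4 ./myapp \
    -ksp_type cg -pc_type hypre -pc_hypre_type boomeramg \
    -pc_hypre_boomeramg_print_statistics \
    -pc_hypre_boomeramg_print_debug \
    -log_view
```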
On Wed, Aug 31, 2016 at 5:23 AM, Justin Chang wrote:
> Matt,
>
> So is the "solve phase" going to be KSPSolve() - PCSetUp()?
>
Setup Phase: KSPSetUp + PCSetUp
Solve Phase: SNESSolve
This contains SNESFunctionEval, SNESJacobianEval, and KSPSolve.
Matt
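Justin's question reduces to bookkeeping on the -log_view event times. A minimal sketch of that arithmetic, where the event names are real PETSc log events but the times are invented numbers standing in for what you would read off a -log_view summary:

```python
# Illustrative arithmetic only -- these event times (seconds) are made up.
times = {
    "KSPSetUp": 0.8,
    "PCSetUp": 12.4,   # preconditioner construction (e.g. AMG setup)
    "KSPSolve": 30.1,  # Krylov solve; PCSetUp may run lazily inside it
}

# Setup phase, as Matt defines it: KSPSetUp + PCSetUp.
setup_time = times["KSPSetUp"] + times["PCSetUp"]

# If PCSetUp was triggered inside the first KSPSolve, subtracting it
# isolates the iteration-only time, which is what Justin is asking about.
solve_only = times["KSPSolve"] - times["PCSetUp"]

print(setup_time, solve_only)
```

Whether the subtraction is appropriate depends on whether PCSetUp actually ran inside KSPSolve in your run; the nesting is visible in the -log_view stage breakdown.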
Matt,
So is the "solve phase" going to be KSPSolve() - PCSetUp()?
In other words, if I want to look at time/iterations, should it just be
over KSPSolve or should I exclude the PC setup?
Justin
Attached is the -log_view output (from firedrake). Event Stage 1:
Linear_solver is where I assemble and solve the linear system of equations.

I am using the HYPRE BoomerAMG preconditioner, so log_view cannot "see into"
the exact steps, but based on what it can see, how do I distinguish between
these
Mark Adams writes:
> Anyway, what I really wanted to say is, it's good to know that these
> "dynamic range/performance spectrum/static scaling" plots are designed to
> go past the sweet spots. I also agree that it would be interesting to see a
> time vs dofs*iterations/time plot. Would it then also be useful to l
Thanks everyone. I still think there is an even better phrase for this,
like "static scaling"? Because unlike strong/weak scaling, concurrency is
fixed (hence "static") and we only scale the problem, so this is a mix
between strong and weak scaling.

¯\_(ツ)_/¯

Anyway, what I really wanted to say is, it's good to know that these
"dynamic range/performance spectrum/static scaling" plots are designed to
go past the sweet spots. I also agree that it would be interesting to see a
time vs dofs*iterations/time plot. Would it then also be useful to l
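The "static scaling" idea above (fixed concurrency, growing problem size) and the proposed dofs*iterations/time axis can be sketched with toy numbers. All measurements below are invented, purely to show the shape of the computation:

```python
# Hypothetical measurements at a FIXED core count:
# (dofs, KSP iterations, solve time in seconds). All numbers are invented.
runs = [
    (1.0e4, 12, 0.02),
    (1.0e5, 15, 0.15),
    (1.0e6, 18, 1.40),
    (1.0e7, 22, 16.0),
]

# Static scaling: concurrency is fixed and only the problem grows, so plot
# solve time (x) against a work rate (y). dofs/time is the plain rate;
# dofs*iterations/time additionally folds in algorithmic (iteration) growth.
spectrum = [(t, n * its / t) for (n, its, t) in runs]
```

A flat spectrum means the rate is holding as the problem grows; the curve falling off at either end exposes the latency-dominated (small) and memory-bandwidth- or cache-spill-dominated (large) regimes discussed in this thread.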
Mark Adams writes:
> I would guess it is the latter.
In this case, definitely.
> It is hard to get "rollover" to the right. You could get it on KNL
> (cache configuration of HBM) when you spill out of HBM.
Yes, but the same occurs if you start repeatedly spilling from some
level of cache, which
Justin Chang writes:
> Redid some of those experiments for 8 and 20 cores and scaled it up to even
> larger problems. Attached is the plot.
>
> Looking at this "dynamic plot" (if you ask me, I honestly think there could
> be a better word for this out there),
"performance spectrum"?
Redid some of those experiments for 8 and 20 cores and scaled it up to even
larger problems. Attached is the plot.

Looking at this "dynamic plot" (if you ask me, I honestly think there could
be a better word for this out there), the lines curve up for the smaller
problems, have a "flat line" in the
Thanks all. So this issue was one of our ATPESC2015 exam questions, and
turned some friends into foes. Most eventually fell into the "strong
scaling is harder" camp, but some of these "friends" also believed PETSc is
*not* capable of handling dense matrices and is not portable. Just wanted
to hear some
Hi Justin,
I have seen some people claim that strong-scaling is harder to achieve
than weak scaling
(e.g.,
https://www.sharcnet.ca/help/index.php/Measuring_Parallel_Scaling_Performance)
and generally speaking it makes sense - communication overhead increases
with concurrency.
However, we know
Hi all,

This may or may not be a PETSc specific question but...

I have seen some people claim that strong-scaling is harder to achieve than
weak scaling (e.g.,
https://www.sharcnet.ca/help/index.php/Measuring_Parallel_Scaling_Performance)
and generally speaking it makes sense - communication overhead increases
with concurrency.
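To make the strong/weak distinction concrete, here is a small sketch of the two efficiency metrics. The timings are toy numbers (all assumed), chosen only to illustrate the formulas:

```python
# Strong scaling: total problem size fixed, core count p grows.
# Ideal time halves with each doubling; efficiency = T(1) / (p * T(p)).
t_strong = {1: 100.0, 2: 52.0, 4: 28.0, 8: 16.0}  # seconds (assumed)
strong_eff = {p: t_strong[1] / (p * t) for p, t in t_strong.items()}

# Weak scaling: problem size grows with p (dofs per core fixed).
# Ideal time stays flat; efficiency = T(1) / T(p).
t_weak = {1: 100.0, 2: 104.0, 4: 110.0, 8: 121.0}  # seconds (assumed)
weak_eff = {p: t_weak[1] / t for p, t in t_weak.items()}

print(strong_eff, weak_eff)
```

With these made-up numbers the strong-scaling efficiency decays faster than the weak-scaling one, which is the usual intuition behind the "strong scaling is harder" claim: the per-core work shrinks while communication overhead does not.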