> Date: Wed, 27 Apr 2011 11:16:26 +0200
> From: jonat...@k-m-p.nl
> To: r-help@r-project.org
> Subject: [R] Speed up plotting to MSWindows graphics window
>
> Hello,
>
> I am working on a project analysing the performance of motor vehicles
> through messages logged over a CAN bus.
>
> I am using R 2.12 on Windows XP and 7
>
> I am currently plotting the data in R, overlaying 5 or more plots of
> data logged at 1 kHz (using plot.ts() and par(new = TRUE)).
> The aim is to be able to pan, zoom in and out and get values from the
> plotted graph using a custom Qt interface that is used as a front end to
> R.exe (all this works).
> The plot is drawn by R directly to the windows graphic device.
>
> The data is imported from a .csv file (typically around 100MB) to a matrix.
> (timestamp, message ID, byte0, byte1, ..., byte7)
> I then separate this matrix into several smaller matrices by message
> ID (each on the order of 8 columns by 10^6 rows).
>
> The panning is done by redrawing the plots, shifted by a small
> amount, so as to view a window of data (from one second to one
> minute long) that can travel the length of the logged data.
>
> My problem is that the redrawing of the plots whilst panning is too
> slow when dealing with this much data.
> i.e.: I can see the last graphs being drawn to the screen in the
> half-second following the view change.
> I need a fluid change from one view to the next.
>
> My question is this:
> Are there ways to speed up the plotting on the MSWindows display?
> By reducing plotted point densities to *sensible* values?

Well, it is hard to know without measuring, but it would help to find
out where all the time is going. Usually people start complaining when
VM thrashing sets in; if you are CPU-limited, though, you could try
restricting the range of data you plot rather than relying on plot()
to clip the largely irrelevant off-screen points when you are zoomed
in. It should not be too expensive to find the window limits, either
incrementally or with a binary search on the ordered time series.
Presumably subsetting is fast using foo[a:b,].
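
For instance, a minimal sketch (untested; the function name and the
assumption that column 1 holds the sorted timestamps are mine):

plot_window <- function(m, t0, t1, col = 3) {
  ## findInterval() binary-searches the sorted timestamp column,
  ## so locating the window is O(log n) rather than a full scan.
  a <- max(1, findInterval(t0, m[, 1]))  # last sample at or before t0
  b <- findInterval(t1, m[, 1])          # last sample at or before t1
  ## Hand plot() only the visible rows instead of a million points
  ## that it would mostly clip anyway.
  plot(m[a:b, 1], m[a:b, col], type = "l",
       xlab = "time", ylab = paste("column", col))
}

A redraw while panning is then proportional to the number of visible
points, not to the length of the full log.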

One thing you may want to try for changes of scale is wavelet or
multi-resolution analysis. You can build a tree of progressively
coarser versions of the data (increasing memory usage, though even VM
paging may not be a big penalty here if access coherence is high) and
display the resolution appropriate to the current scale.
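
A crude non-wavelet variant of the same idea is a plain downsampling
pyramid; a sketch (the names are made up, and naive decimation will
alias, so per-bucket min/max would be safer on real signals):

build_pyramid <- function(m, min_rows = 2000) {
  ## Precompute successively halved copies of the matrix once,
  ## stopping when the coarsest level is cheap to plot.
  lvls <- list(m)
  while (nrow(lvls[[length(lvls)]]) > min_rows) {
    prev <- lvls[[length(lvls)]]
    lvls[[length(lvls) + 1]] <-
      prev[seq(1, nrow(prev), by = 2), , drop = FALSE]
  }
  lvls
}

pick_level <- function(pyramid, n_visible, target = 2000) {
  ## Skip enough levels that roughly `target` points fall in the view.
  skip <- max(0, floor(log2(n_visible / target)))
  pyramid[[min(length(pyramid), skip + 1)]]
}

Combined with the windowed subsetting above, each redraw then touches
a few thousand points at most, whatever the zoom level.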

> Using something other than plot.ts() - is the lattice package faster?
> I don't need publication-quality plots; they can be rougher...
>
> I have tried:
> -Using matrices instead of dataframes - (works for calculations but not
> enough for plots)
> -increasing the max usable memory (max-mem-size) - (no change)
> -increasing the size of the pointer protection stack (max-ppsize) - (no
> change)
> -deleting the unnecessary leftover matrices - (no change)
> -I can't use lines() instead of plot() because of the very different
> scales (rpm up to 10000, flags -1 to 3)
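
You could still get away with lines() if you map each channel onto a
common scale first; a sketch with fabricated data (rescale01 and the
fake channels are mine):

rescale01 <- function(v) (v - min(v)) / diff(range(v))

tt  <- seq(0, 10, by = 0.001)                    # fake 1 kHz timestamps
rpm <- 5000 + 3000 * sin(tt)                     # fake rpm channel
flg <- sample(-1:3, length(tt), replace = TRUE)  # fake flags, -1..3

plot(tt, rescale01(rpm), type = "l", ylab = "normalised value")
lines(tt, rescale01(flg), col = 2)  # one plot() setup, then cheap lines()

That avoids a full plot() plus par(new = TRUE) round trip per channel,
which is part of what you are paying for now.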
>
> I am going to do some resampling of the logged data to reduce the vector
> sizes.
> (removal of *less* important data and use of window.ts())
>
> But I am currently running out of ideas...
> So if somebody could point out something, I would be grateful.
>
> Thanks,
>
> Jonathan Gabris
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.