You're welcome.

On Friday, April 5, 2013 2:15:06 PM UTC-4, Oliver Smith wrote:
>
> Awesome - oddly one of my three implementations looked almost exactly like 
> your code: I was using 'time in parsedData' and I wasn't pushing nulls - I 
> was just creating a new Array(n).
>
> Adding the nulls and using Date objects for the time line results in a 
> nicely rendered multi-line graph.
>
> Thank you :)
>
> -Oliver
>
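[Editor's note] The Date-object fix Oliver mentions can be sketched in isolation. This is a hypothetical example, not his actual code: `baseTime` is an assumed start-of-run timestamp, and in a real chart the first column would be declared as type 'datetime' rather than 'number'.

```javascript
// Hypothetical sketch: converting millisecond offsets to Date objects.
// baseTime is an assumed timestamp for the start of the sample run; in the
// DataTable you would use addColumn('datetime', 'time') and pass
// offsetToDate(ms) as each row's first cell.
var baseTime = Date.UTC(2013, 3, 5); // assumption: run started April 5, 2013 UTC
function offsetToDate(ms) {
    return new Date(baseTime + ms);
}
var sample = offsetToDate(175); // a point 175 ms into the run
```

With a 'datetime' column the chart formats the axis as times automatically instead of raw millisecond counts.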
> On Friday, April 5, 2013 6:59:51 AM UTC-7, asgallant wrote:
>>
>> This problem is quite similar to one I have dealt with before.  This may 
>> not be the most efficient way to handle the problem, but it should work:
>>
>> var rawData = {};    // your data, keyed by series (thread id)
>> var parsedData = {}; // merged rows, keyed by time value
>> var colMap = {};     // maps series key -> column index
>> var i = 1;           // column 0 is the time column
>> var data = new google.visualization.DataTable();
>> data.addColumn('number', 'seconds');
>> // build out the columns
>> for (var x in rawData) {
>>     data.addColumn('number', x);
>>     colMap[x] = i;
>>     i++;
>> }
>> // fill in the rows
>> for (var x in rawData) {
>>     i = colMap[x];
>>     for (var j = 0; j < rawData[x].length; j++) {
>>         var seconds = rawData[x][j][0];
>>         var value = rawData[x][j][1];
>>         // if we don't yet have data at this timestamp, add new data
>>         if (!(parsedData[seconds] instanceof Array)) {
>>             parsedData[seconds] = [seconds];
>>             for (var y in colMap) {
>>                 parsedData[seconds].push(null);
>>             }
>>         }
>>         parsedData[seconds][i] = value;
>>     }        
>> }
>> // populate DataTable
>> for (var x in parsedData) {
>>     data.addRow(parsedData[x]);
>> }
>> // sort the DataTable by time
>> data.sort(0);
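[Editor's note] The merging step above can be exercised without the charts library. This is a sketch of the same pivot logic, under the assumption that rawData maps a thread id to an array of [time, value] pairs as in Oliver's sample; `pivot` is a name invented here, not part of the Visualization API.

```javascript
// Pivot {seriesKey: [[time, value], ...]} into rows of
// [time, series1, series2, ...], with null for missing samples.
function pivot(rawData) {
    var colMap = {};
    var i = 1; // column 0 holds the time value
    for (var key in rawData) {
        colMap[key] = i++;
    }
    var numCols = i;
    var parsed = {};
    for (var key in rawData) {
        var col = colMap[key];
        rawData[key].forEach(function (pair) {
            var t = pair[0], v = pair[1];
            if (!(parsed[t] instanceof Array)) {
                // first sample at this time: start a row padded with nulls
                parsed[t] = [t];
                for (var c = 1; c < numCols; c++) {
                    parsed[t].push(null);
                }
            }
            parsed[t][col] = v;
        });
    }
    // collect the rows and sort by time, mirroring data.sort(0)
    var rows = [];
    for (var t in parsed) {
        rows.push(parsed[t]);
    }
    rows.sort(function (a, b) { return a[0] - b[0]; });
    return rows;
}
```

Once the columns are declared, the result can be fed to the DataTable in one call with data.addRows(rows).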
>>
>> On Friday, April 5, 2013 2:59:22 AM UTC-4, Oliver Smith wrote:
>>>
>>> The data is benchmarking info, sampled per thread. So if there are three 
>>> threads, maybe:
>>>
>>> { 3 : [ [ 100, 10 ], [ 175, 90 ], [ 231, 18 ], [ 400, 190 ] ], 7 : [ 
>>> [50, 12 ], [75, 8], [105, 6], [123, 7], [133, 7], [174, 15], [250, 1] ], 11 
>>> : [ 33, 12 ], [ 66, 25 ], [ 99, 9 ], [ 120, 3 ], [ 130, 4 ], [ 180, 2 ], [ 
>>> 211, 1], [ 231, 2 ], [ 245, 5 ] ]}
>>>
>>> Where 3, 7 and 11 are threads that were tracking.
>>>
>>> There will tend to be more data points as samples will tend to last 
>>> between 30 seconds and 5 minutes.
>>>
>>> If I feed any one of the threads data into draw, a nice graph results, 
>>> although I should probably include the base timestamp so I can use dates :) 
>>> I can merge them all into one big dataset with a lot of manipulation and a 
>>> lot of empty cells. I'm perhaps balking at that because I'm working on 
>>> optimization, ergo the dataset...
>>>
>>> Oliver
>>> On Apr 4, 2013 7:22 PM, "asgallant" <[email protected]> wrote:
>>>
>>>> So you have an arbitrary number of data series, which may or may not 
>>>> share common x-axis values, and you are trying to figure out how to add 
>>>> them to a DataTable so you can draw your chart; do I have that correct?  
>>>> If 
>>>> so, then I think the answer is going to depend heavily on the form in 
>>>> which 
>>>> you obtain the data to begin with.  What is the structure of the raw data?
>>>>
>>>> On Thursday, April 4, 2013 8:09:14 PM UTC-4, Oliver Smith wrote:
>>>>>
>>>>> I've spent a while searching, and I see a number of possible ways to 
>>>>> do this, but none of them feels right, and this seems like something that 
>>>>> ought to be so common that perhaps myself and the folks who've previously 
>>>>> touched on it aren't quite getting it.
>>>>>
>>>>> I have an arbitrary number of series of [time, sample] datasets. I 
>>>>> want to draw a line chart with each. While the time values will be within 
>>>>> the same ranges, they will rarely have the same values, and the times are 
>>>>> in milliseconds relative to the start of the sample.
>>>>>
>>>>> A given series readily produces a nice, simple line chart of its 
>>>>> own; what I want to do is draw N of them (between 1 and 255).
>>>>>
>>>>> While I can see a few ways to do this, I'd kind of like to find the 
>>>>> right way.
>>>>>
>>>>> My instinct is to look for a "draw series" method, because creating 
>>>>> rows with 255 columns where the chances of any given row having more than 
>>>>> one entry ... feels so wrong.
>>>>>
>>>>> I tried using join, but got very confused very quickly and it rapidly 
>>>>> got really slow.
>>>>>
>>>>> I looked at using addColumn/addRow/setCell; addRow would work if I 
>>>>> knew how many series there were going to be at development time, setCell 
>>>>> just feels like it's going to be insanely expensive.
>>>>>
>>>>> Any pointers or advice appreciated.
>>>>>
>>>>> (And after this, it's off to figure out if I should use a Gantt-style 
>>>>> visualisation.)
>>>>>
>>>>  -- 
>>>> You received this message because you are subscribed to a topic in the 
>>>> Google Groups "Google Visualization API" group.
>>>> To unsubscribe from this topic, visit 
>>>> https://groups.google.com/d/topic/google-visualization-api/KJiyMccn9PI/unsubscribe?hl=en
>>>> .
>>>> To unsubscribe from this group and all its topics, send an email to 
>>>> [email protected].
>>>> To post to this group, send email to [email protected].
>>>> Visit this group at 
>>>> http://groups.google.com/group/google-visualization-api?hl=en.
>>>> For more options, visit https://groups.google.com/groups/opt_out.
>>>>  
>>>>  
>>>>
>>>
