On 4/12/06, Bob Grossman <[EMAIL PROTECTED]> wrote:

> I strongly agree with Annette about learning styles as they have been
> developed and discussed. It seems to me however, that there is
> evidence for one "ability" or "learning style" that I haven't heard much
> about since graduate school (many, many years ago). That is the
> relationship between scores on tests like Block Design or the Minnesota
> Form Board and grades or performance in Engineering and Physics. I
> haven't looked at any recent data on this and can't find my old
> references. Does anyone have data on this relationship or references?
> It seems to me that some correlations were as high as .70.
It's my personal opinion that the term "learning styles" is too associated with those overly simplistic, unsupported notions to be useful in a context like this. I'd refer to this simply as an "ability". In addition, I believe that using the term "learning styles" for something like this sidetracks us from what should be the central question in education: how do we help develop this ability in people who don't have it?

> This has relevance because I have been interested in helping
> students do better in the natural sciences and one of the skills needed
> seems to be related to reading and interpreting graphs, tables, and
> visual or three dimensional models. Most of my colleagues in natural

(snip)

> I have also noticed whenever I present data or a graph to my
> colleagues in the Humanities I immediately lose most of my audience.
> Many of the brightest of these colleagues report that they are unable to
> make sense out of graphically presented information.

In my probability and statistics courses, students work with SPSS to write reports. The first report requires them simply to summarize results from a couple of variables for the sample as a whole: to report frequencies for some of the variables, citing specific numbers from the tables. Almost all can manage that.

The second report requires them to make comparisons between two groups. They have to produce a means table and write a comparison of the means of the two groups for some variable, and to produce two crosstabulation tables with percentages and compare the percentages of the two groups that have some characteristic. Here about 15-20% of students hit the wall. When I ask them who their two groups are, those students typically say something like "People who are male and female and people who smoke". Or if I ask them who they are comparing, they say something like "I'm comparing the respondents' gender with whether or not they have health insurance".
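(As an aside, for readers who don't use SPSS: the two comparisons that second report asks for can be sketched in a few lines of pandas. The dataset and column names below are hypothetical, invented just to illustrate the structure of the task; the point is that "gender" is the grouping variable, while "age" and "insured" are the variables being compared across the groups.)

```python
# A minimal sketch of the two report tasks: a means table by group,
# and a crosstabulation with percentages. Data are made up.
import pandas as pd

data = pd.DataFrame({
    "gender":  ["male", "male", "female", "female", "male", "female"],
    "age":     [34, 28, 45, 31, 52, 39],
    "insured": ["yes", "no", "yes", "yes", "no", "yes"],
})

# Means table: compare the mean of one variable (age) across the
# two groups defined by the grouping variable (gender).
means = data.groupby("gender")["age"].mean()
print(means)

# Crosstab: within each gender group, the percentage who have
# health insurance (rows normalized to 100%).
pct = pd.crosstab(data["gender"], data["insured"], normalize="index") * 100
print(pct.round(1))
```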
Then I get reports with all sorts of wild tables that don't even have those variables in them.

I believe that the problem has to do with the understanding of variables. If we take those student responses at face value, the students are showing that they don't know the difference between a group of people and a variable (a conclusion supported by the fact that when asked what their independent variable is, students often reply "The independent variable was the people who got the treatment"). Now, students obviously know what a group of people is, and that's a concrete thing. I think it's the abstract nature of variables that gets in their way. That's reinforced by the fact that so many people say things like "you can't measure X" (where X is some non-physical variable like "ambition", "drive", or "love") or "Human beings are complex: you can't separate out the effect of X on their behavior from all the other things that affect behavior".

I almost think that we need a "Manhattan Project" for teaching people about variables.

Paul Smith
Alverno College
Milwaukee

---
To make changes to your subscription go to:
http://acsun.frostburg.edu/cgi-bin/lyris.pl?enter=tips&text_mode=0&lang=english
