Dear Matt,

Thank you again for your reply. I have been able to find cope1 files for single-subject task contrasts (e.g. the cope1 file for the working memory contrasts of subject 996782), but not for the S900 group (e.g. I have not been able to find a cope1 file for the S900 group working memory contrasts).
I was wondering:

a) Is there any task contrast effect size map available for the S900 group (even if not optimally scaled)?

b) If not, would it be possible to generate a task contrast effect size map from the available S900 group data (e.g. the task contrast zstat maps of the S900 group), or would it be necessary to go back to the data of each individual subject?

c) If it is necessary to go back to each individual subject, which approach would you suggest for combining the cope1 files of all subjects of the S900 group into one effect size map? Would something like normalizing each subject's cope1 file (using wb_command as below) and then averaging all normalized cope1 files work? Or would something as simple as averaging all cope1 files work?

wb_command -cifti-reduce <input> MEAN mean.dtseries.nii
wb_command -cifti-reduce <input> STDEV stdev.dtseries.nii
wb_command -cifti-math '(x - mean) / stdev' <output> -fixnan 0 -var x <input> -var mean mean.dtseries.nii -select 1 1 -repeat -var stdev stdev.dtseries.nii -select 1 1 -repeat

Thank you very much,
Xavier.

________________________________
From: Glasser, Matthew [glass...@wustl.edu]
Sent: Thursday, January 26, 2017 6:53 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

The files called cope1 or beta are an effect size measure; however, the released versions are not optimally scaled (because of a non-optimal intensity bias field correction). We plan to correct this in the future.

Peace,
Matt.
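As a minimal sketch of what combining per-subject cope maps into a group effect size map could look like (this is not HCP pipeline code, and the numbers below are made up for illustration): each "map" is a plain list of values, one per grayordinate; in practice you would read the cope1 dscalar files with a CIFTI reader. Note that a Cohen's d map (across-subject mean divided by across-subject SD at each grayordinate) is different from the wb_command snippet above, which z-scores each subject's map against its own spatial mean and SD.

```python
from statistics import mean, stdev

# Hypothetical cope1 values for 4 subjects at 3 grayordinates.
subject_copes = [
    [0.8, 0.1, -0.2],
    [1.1, 0.2, -0.1],
    [0.9, 0.0, -0.3],
    [1.2, 0.3, 0.0],
]

n_grayordinates = len(subject_copes[0])

# Simple average across subjects (an unscaled group effect map).
group_mean = [mean(s[i] for s in subject_copes) for i in range(n_grayordinates)]

# Cohen's d per grayordinate: across-subject mean / across-subject SD.
group_sd = [stdev([s[i] for s in subject_copes]) for i in range(n_grayordinates)]
cohens_d = [m / sd for m, sd in zip(group_mean, group_sd)]

print(group_mean)
print(cohens_d)
```

The same per-grayordinate operations could equally be done with wb_command -cifti-math or a merged dscalar file; the sketch only shows the arithmetic.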
From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Thursday, January 26, 2017 at 5:41 PM
To: Matt Glasser <glass...@wustl.edu>, "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: RE: [HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

Dear Matt,

Thank you very much for your very helpful reply. I will have to investigate this topic more, but is there any approach you would suggest to obtain effect size maps from the S900 group HCP data? I was wondering if the zstat data of the S900 group task contrasts could be converted to effect size values without having to go back to the individual subjects.

Thank you very much,
Xavier.

________________________________
From: Glasser, Matthew [glass...@wustl.edu]
Sent: Thursday, January 26, 2017 5:33 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

Standard error scales with sample size; standard deviation does not. Z, t, and p also scale with sample size and are measures of statistical significance via various transformations. Thus, for a large group of subjects, Z and t will be very high and p will be very low. Z, t, and p are therefore not biologically interpretable, as their values also depend on the amount and quality of the data. In the limit of infinite data, the entire brain will be significant for any task, but whether a region is statistically significant tells us little about its functional importance.
Measures like appropriately scaled GLM regression betas, %BOLD change, or Cohen's d are biologically interpretable measures of effect size because their values should not change as the sample size and amount of data go up (rather, the uncertainty of their estimates goes down). Regions with a large effect size in a task are likely important to that task (and will probably also meet criteria for statistical significance given a reasonable amount of data). A common problem in neuroimaging studies is showing thresholded statistical significance maps rather than effect size maps (ideally unthresholded, with an indication of which portions meet tests of statistical significance), and, in general, focusing on statistically significant blobs rather than the effect size in identifiable brain areas (which should often show stepwise changes in activity at their borders).

Peace,
Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Thursday, January 26, 2017 at 3:46 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

Dear HCP team,

I have seen that the zstat values for task contrasts are very large in the HCP_S900_787_tfMRI_ALLTASKS_level3_zstat1_hp200_s2_MSMAll.dscalar.nii file, to the point that one can observe areas of activation in task contrasts even at very high z value thresholds (e.g., a z threshold of +14).
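The scaling argument in this thread can be illustrated numerically. In the sketch below the per-subject effect mean and SD are hypothetical and held fixed while only the number of subjects n changes: Cohen's d stays constant, whereas t (and the corresponding z and p, here approximated with a one-sided normal tail) grows as sqrt(n).

```python
import math

effect_mean = 0.5   # hypothetical mean contrast value across subjects
effect_sd = 1.0     # hypothetical across-subject standard deviation

for n in (30, 900):
    d = effect_mean / effect_sd          # Cohen's d: independent of n
    se = effect_sd / math.sqrt(n)        # standard error shrinks with n
    t = effect_mean / se                 # t = d * sqrt(n): grows with n
    # One-sided normal-tail p-value for a z statistic equal to t.
    p = 0.5 * math.erfc(t / math.sqrt(2))
    print(f"n={n:4d}  d={d:.2f}  t={t:6.2f}  p={p:.2e}")

# The z = 14 threshold discussed in this thread: the one-sided normal
# p-value is astronomically small, but that alone says nothing about
# the size of the underlying effect.
p14 = 0.5 * math.erfc(14 / math.sqrt(2))
print(f"one-sided p for z=14: {p14:.2e}")
```

With 900 subjects the same d = 0.5 effect yields t = 15, which is why a z threshold of +14 can still show activation in the S900 maps.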
I think (please correct me if I am wrong) that the z values of the S900 file are very large because the group is very large, and therefore the standard deviation is very small (given that there will be less variability in a very large group of people than in a small group), and if the standard deviation is very small then even small differences from the mean will lead to very large z values.

I was wondering what implications this has in terms of statistical significance. A z value of 14 or larger would correspond to an extremely small p value, i.e. it would be extremely unlikely to observe by chance a measure which is 14 standard deviations away from the mean. Would it therefore be correct to assume that the areas we can observe in the S900 tfMRI_ALLTASKS task contrasts with a very high zstat threshold (e.g., 14) are statistically significant, without having to worry about multiple comparisons or family structure?

Thank you very much,
Xavier.

_______________________________________________
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

________________________________
The materials in this message are private and may contain Protected Healthcare Information or other information of a sensitive nature. If you are not the intended recipient, be advised that any unauthorized use, disclosure, copying or the taking of any action in reliance on the contents of this information is strictly prohibited. If you have received this email in error, please immediately notify the sender via telephone or return mail.