Re: [HCP-Users] MATLAB libraries in Resting State Stats script

2018-09-19 Thread Matthew Brett
Hi,

On Wed, Sep 19, 2018 at 8:13 PM, Glasser, Matthew  wrote:
> normalise is the custom function, not rms.

I know - I was just hinting that maybe it would be worth dropping the
requirement for the signal toolbox, by using that one-liner.  Or maybe
there are more signal toolbox functions you need?

Cheers,

Matthew
___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] MATLAB libraries in Resting State Stats script

2018-09-19 Thread Matthew Brett
Hi,

On Wed, Sep 19, 2018 at 7:52 PM, Glasser, Matthew  wrote:
> It is a custom function that I already pointed to.

Am I right in thinking that the rms function boils down to:

sqrt(mean(x .* conj(x), d))

where d is the dimension along which you want to operate (defaulting to 1)?

https://uk.mathworks.com/help/signal/ref/rms.html
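For what it's worth, here's that one-liner wrapped up in Python / numpy rather than MATLAB - a minimal sketch of what I have in mind, not the HCP code itself (the function name `rms` and the axis default are my own choices):

```python
import numpy as np

def rms(x, axis=0):
    # Root-mean-square along `axis`; mirrors MATLAB's convention of
    # operating down the first dimension by default.
    x = np.asarray(x)
    return np.sqrt(np.mean(x * np.conj(x), axis=axis))

print(rms(np.array([3.0, 4.0])))  # sqrt((9 + 16) / 2) ≈ 3.5355
```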

Cheers,

Matthew


Re: [HCP-Users] Surface parcellation to volume

2016-08-15 Thread Matthew Brett
Hi Arno,

On Mon, Aug 15, 2016 at 1:59 PM, Arno Klein  wrote:
> Dear [another] Matthew and others,
>
> Thank you for the interesting discussion. I was not on the HCP mailing
> list*,

[snip]

> *It's a shame people don't use publicly accessible forums for scientific
> debate.  Isn't that what Neurostars.org and other stack overflow forks are
> for?

Thanks for the interesting discussion.  On the Neurostars /
Stackoverflow question - I think those work well where there is one
definitive or semi-definitive answer, but I find mailing lists much
better for open-ended discussion.  I believe HCP-Users does have a
mailing list archive, but it seems to be inaccessible to me at the
moment:

http://lists.humanconnectome.org/mailman/listinfo/hcp-users

Cheers,

Matthew B


Re: [HCP-Users] HCP or HCP-like data that can be publicly distributed?

2015-07-06 Thread Matthew Brett
Yo,

On Mon, Jul 6, 2015 at 8:39 PM, Ariel Rokem  wrote:
> Hi everyone,
>
> As part of the Google Summer of Code, I am mentoring a student (Rafael
> Henriques, cc) who is implementing diffusion kurtosis imaging as part of the
> Dipy open-source library (http://dipy.org; see also:
> https://github.com/nipy/dipy/pull/664). We think that this will be quite
> useful for scientists analyzing multi b-value HCP/HCP-like data.
>
> As part of the project, Rafael is writing some detailed usage examples. Our
> standard way of writing examples usually involves downloading some data from
> a URL (using 'data-fetchers'), reading the data from disk and operating on
> this data . For an example, see here:
> http://nipy.org/dipy/examples_built/reconst_dti.html#example-reconst-dti
>
> We are interested in writing an example of DKI analysis that uses the HCP
> data, or data acquired using the HCP protocol. However, as I understand it,
> for the bulk of the HCP data, we can't just download a diffusion data-set,
> and then serve it up on some URL, because this would violate the terms of
> use. My question is whether there is some specific HCP data-set that is
> released under a license that does allow that (e.g. a public dedication
> license)?
>
> Alternatively, does someone on this list have data that was acquired with
> the HCP protocol, under an IRB protocol that does allow distribution under a
> public dedication license (or some-such), and would be willing to share the
> data for this purpose?

Forgive my ignorance - but do I remember correctly that the HCP
license allows you to redistribute data, under the same license?

Then, for testing, you could take some datasets, put them in a repo
under the same license, and use that repo for testing?

The problem is that you'd have to put big red warnings on the repo so
that people realize what the license terms are before cloning it.

But like you, I'd love to hear of any more freely available data...

Cheers,

Matthew


Re: [HCP-Users] CIFTI files for testing CIFTI support

2015-06-26 Thread Matthew Brett
On Thu, Jun 25, 2015 at 2:39 PM, Timothy Coalson  wrote:
> We made them with the purpose of not needing acceptance of the HCP data use
> terms, so we could put them on NITRC without needing a license acceptance,
> so I believe the intent was to make them able to be distributed freely with
> no terms.  We did not specifically discuss putting a permissive license on
> them, perhaps we should.

That would be very helpful.   Perhaps PDDL 1.0?  That is the easiest for us.

Best,

Matthew


Re: [HCP-Users] CIFTI files for testing CIFTI support

2015-06-25 Thread Matthew Brett
Hi,

On Thu, Jun 25, 2015 at 1:22 PM, Timothy Coalson  wrote:
> We have some example files here that cover each mapping type at least once:
>
> http://www.nitrc.org/frs/?group_id=454
>
> If this is the pull request you are starting from, the cifti files from it
> appear to already be in it, search for "ones" and you will see 5 binary
> files:
>
> https://github.com/nipy/nibabel/pull/249/files
>
> The README from the zip file on NITRC might still be useful information
> about the example files, though.  If you are feeling ambitious, you can also
> use workbench to make sphere surfaces, draw your own ROIs, etc, and make
> cifti files to test specific cases without starting from scan data.

Thanks for this.  I might have missed it, but there doesn't seem to be
a license in that zip file, or mentioned in the README.  Do you know
whether we can ship these files under our BSD / PDDL license?  For
example, I don't think we can do that under:
http://www.humanconnectome.org/data/data-use-terms/

Cheers,

Matthew


Re: [HCP-Users] nibabel gifti files with workbench

2014-03-04 Thread Matthew Brett
Hi,

On Sun, Mar 2, 2014 at 1:13 PM, Timothy Coalson  wrote:
> From the .dtd on https://www.nitrc.org/projects/gifti/, I see this:
>
>   Endian   (BigEndian | LittleEndian)   #REQUIRED
>
> From GIFTI_Surface_Format.pdf on the same site (which also echos the above
> on page 7), it looks like line breaks are also implicitly forbidden:
>
> "The second encoding, Base64Binary (http://www.ietf.org/rfc/rfc3548.txt or
> http://email.about.com/cs/standards/a/base64_encoding.htm), encodes binary
> data into a sequence of ASCII characters."
>
> From http://www.ietf.org/rfc/rfc3548.txt:
>
>Implementations MUST NOT not add line feeds to base encoded data
>unless the specification referring to this document explicitly
>directs base encoders to add line feeds after a specific number of
>characters.
>
> I think nibabel needs to change how it writes gifti files.
>
> Tim

Thanks for looking this up.  I'll go check our implementation:

https://github.com/nipy/nibabel/issues/230
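For the record, the fix on our side amounts to using an encoder that does not insert line feeds. A quick Python illustration of the difference (my own example, not the nibabel code itself):

```python
import base64

data = bytes(range(100))

# b64encode produces a single unbroken run of ASCII characters,
# which is what RFC 3548 requires for GIFTI Base64Binary data.
clean = base64.b64encode(data)
assert b"\n" not in clean

# The legacy encodebytes API inserts a newline every 76 characters -
# exactly what a conforming GIFTI writer must avoid.
wrapped = base64.encodebytes(data)
assert b"\n" in wrapped
```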

Cheers,

Matthew


Re: [HCP-Users] Fwd: Nifti2 extensions - are they converted correctly?

2013-08-19 Thread Matthew Brett
Hi,

On Mon, Aug 19, 2013 at 12:18 PM, Timothy Coalson  wrote:
> We appear to be padding to 8 bytes rather than 16, not sure how that
> happened.  It should not affect us to change that (workbench doesn't care
> what the alignment is on reading), so I will make it pad to a multiple of 16
> bytes.
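(For posterity, the padding rule is just rounding up to the next multiple of 16. A quick sketch - my own helper, not the workbench code:)

```python
def padded_esize(payload_len):
    # Total extension size: 8 bytes of esize/ecode plus the payload,
    # rounded up to the next multiple of 16 as the NIfTI standard requires.
    raw = 8 + payload_len
    return (raw + 15) // 16 * 16

print(padded_esize(24))  # 32: 8 + 24 is already a multiple of 16
print(padded_esize(25))  # 48: rounds up from 33
```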

Thanks for the reply and for the fix - that's very helpful,

Cheers,

Matthew


Re: [HCP-Users] Fwd: Nifti2 extensions - are they converted correctly?

2013-08-19 Thread Matthew Brett
Hi,

On Mon, Aug 19, 2013 at 8:13 AM, Jennifer Elam  wrote:
> Hi Matthew,
>
> The original NIFTI-1 file format was based on 16-bit signed integers and was
> limited to 32,767 in each dimension. The NIFTI-2 format
> (http://www.nitrc.org/forum/message.php?msg_id=3738)
> is based on 64-bit integers and can handle very large volumes and matrices.

Thanks - I know what the nifti2 format is, partly because (as I
mentioned further up the thread) I have written my own io routines for
it in Python.

My question was about your software's implementation of the standard.
I think (from the quotes in my email) the standard requires the
recorded extension size and the vox_offset field to be integer
multiples of 16.  Do y'all agree with my interpretation of the
standard?

Thanks a lot,

Matthew


[HCP-Users] Fwd: Nifti2 extensions - are they converted correctly?

2013-08-18 Thread Matthew Brett
Hi,

On Fri, Aug 2, 2013 at 5:19 PM, Timothy Coalson  wrote:
> There is a large amount of workbench-written cifti data available in the hcp
> data releases (http://www.humanconnectome.org/data/), which are nifti2
> files, though you have to agree to data use terms (don't try to identify
> subjects, attribution for research) to get them, and then do a little
> digging to find one (.dtseries.nii, or .dconn.nii if you want something
> large).
>
> I don't know of any "reference" nifti2 files made easily available for
> testing, though.

I think I've found another problem with the nifti2 files and extensions.

I downloaded some example files to look at, and our software gave an
error reading the extensions of the nifti2 files.

Looking into it, it seems that the extensions often have extension
length (esize) that is not divisible by 16, as specified in the
standard:

http://nifti.nimh.nih.gov/nifti-1/documentation/nifti1fields/nifti1fields_pages/extension.html/view?searchterm=extension

"esize is the number of bytes that form the extended header data
+ esize must be a positive integral multiple of 16 + this length
includes the 8 bytes of esize and ecode themselves"

Here is the first file that I tried:

hexdump -d -s 540 -n 16 100307/MNINonLinear/100307.aparc.164k_fs_LR.dlabel.nii
1   0   63144   00028   00032   0   15370   18755

Here the first 4 bytes are correct - 1 0 0 0.  The next two 16-bit
words, 63144 and 00028, are the low and high halves of the esize
(hexdump -d prints unsigned 16-bit decimal words).  However:

In [3]: (63144 + 28 * 2 ** 16) / 16.
Out[3]: 118634.5

So the esize value is not divisible by 16.

I noticed the vox_offset in the main header was also often not
divisible by 16.  This is also recommended by the nifti1 standard:

http://nifti.nimh.nih.gov/pub/dist/src/niftilib/nifti1.h

"When the dataset is stored in one file, the first byte of image
   data is stored at byte location (int)vox_offset in this combined file.
   The minimum allowed value of vox_offset is 352; for compatibility with
   some software, vox_offset should be an integral multiple of 16."
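In case it's useful, here is the kind of check I'm running, as a self-contained Python sketch (the helper name is mine; it assumes a little-endian single-file image):

```python
import io
import struct

def check_nifti2_extension(fileobj):
    # Read the extension flag and first extension header of a NIfTI-2
    # file and report whether esize is a multiple of 16.
    fileobj.seek(540)                  # NIfTI-2 header is 540 bytes
    flags = fileobj.read(4)
    if not flags or flags[0] == 0:
        return None                    # no extensions present
    esize, ecode = struct.unpack("<ii", fileobj.read(8))
    return esize, ecode, esize % 16 == 0

# Synthetic example: zeroed header, extension flag set, esize=32, ecode=6.
fake = io.BytesIO(b"\x00" * 540 + b"\x01\x00\x00\x00" + struct.pack("<ii", 32, 6))
print(check_nifti2_extension(fake))    # (32, 6, True)
```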

By the way - it looks like the extension type code (ecode) is 32 - but
I can't see that documented here:

http://nifti.nimh.nih.gov/nifti-1/documentation/faq#Q21

Do you have any documentation on that?

Thanks a lot,

Matthew


Re: [HCP-Users] Nifti2 extensions - are they converted correctly?

2013-08-18 Thread Matthew Brett
Hi,

On Tue, Aug 6, 2013 at 4:02 AM, Robert Oostenveld
 wrote:
> Hi Matthew,
>
> Although I don't have reference data, I do have an alternative implementation 
> for reading/writing nifti-2 files, including files containing the cifti 
> header extension. Although my code is in MATLAB, it still might be useful for 
> your debugging efforts.
>
> The code is implemented as part of our FieldTrip toolbox, as that toolbox 
> will be used for the analysis of the MEG parts in the HCP. The nift-2 and 
> cifti implementation are not yet completely finished, but can already be 
> found on my github page on the bug2096-cifti branch (see 
> https://github.com/oostenveld/fieldtrip/tree/bug2096-cifti and 
> http://bugzilla.fcdonders.nl/show_bug.cgi?id=2096). Specifically you would 
> want to go to the fieldtrip/fileio/private directory, where you see
>
> mac001> cd private/
> mac001> ls *nifti2* *cifti*
> read_cifti.m   read_nifti2_hdr.m   write_cifti.m   write_nifti2_hdr.m
>
> These low-level functions will in the near future be wrapped in 
> fieldtrip/fileio/ft_read_mri.m and similar higher-level read/write 
> functions.
>
> Another relevant file is the test script fieldtrip/test/test_bug2096.m 
> (visible here: 
> https://github.com/oostenveld/fieldtrip/blob/bug2096-cifti/test/test_bug2096.m)
>
> best regards,
> Robert
>
> PS don't hesitate to contact me outside the list to discuss further details, 
> that might also help me to finalize my MATLAB implementation.

Sorry to be slow to reply - I would have saved quite a bit of time if
I'd read your email properly :)

I see that the '32' type extension is 'cifti' and that led me to:

http://www.nitrc.org/plugins/mwiki/index.php/cifti:ConnectivityMatrixFileFormats#CIFTI_Connectivity_Matrix_File_Formats

Thanks - it was kind of you to point me to your code, and put it up.

By the way, my struggles are currently at:

https://github.com/matthew-brett/nibabel/tree/nifti12-refactor

Cheers,

Matthew



Re: [HCP-Users] Nifti2 extensions - are they converted correctly?

2013-08-03 Thread Matthew Brett
Hi,

On Fri, Aug 2, 2013 at 3:15 PM, Timothy Coalson  wrote:
> You are correct, that is a bug in our code that we didn't notice, it will be
> fixed in the next release (for posterity, this bug affects 0.82 and
> earlier).  I don't think we ever use -nifti-convert, which is about the only
> place that writes a nifti2 file that is broken that way (the code to write
> cifti files does not cause this problem).  It might happen if you write a
> volume file with a dimension over 32767, I think that is the only other way
> to hit that bug.

Thanks - that's helpful.

As a matter of interest - do you know of any public nifti2 datasets
that we can test against?

Best,

Matthew


[HCP-Users] Nifti2 extensions - are they converted correctly?

2013-08-02 Thread Matthew Brett
Hi,

I'm just finishing up a Nifti2 read / write module for nibabel:
http://nipy.org/nibabel

To test it, I tried to read an image converted from Nifti1 using
Connectome Workbench.

The original image is here:

https://github.com/nipy/nibabel/raw/master/nibabel/tests/data/example4d.nii.gz

The converted image is here:

https://www.dropbox.com/s/yj6pw7umpz26im5/example4d_ni2.nii.gz

My reader fails when trying to read the Nifti extension from the
converted image.

Looking at the nifti1 standard;

http://www.nitrc.org/docman/view.php/26/64/nifti1.h

I think that nifti1 extensions look like this:

[348 bytes of header data] ending at 348
[4 bytes of 'extension'] ending at 352, content [1, 0, 0, 0] if there
are extensions
[8 bytes of 2 ints, esize, ecode] ending at 360 for first extension

Accordingly, here's a decimal hexdump of bytes 348 - 360 of the nifti1 file:

1   0   00032   0   6   0   30821   25460

So [1, 0, 0, 0], [32, 6] meaning: yes there are extensions, first
extension is length 32, type 6.

By analogy, I think that a nifti2 header should have:

[540 bytes of header data] ending at 540
[4 bytes of 'extension'] ending at 544, content [1, 0, 0, 0] if there
are extensions
[8 bytes of 2 ints, esize, ecode] ending at 552 for first extension

Here's a decimal hexdump of bytes 540 - 556 of the converted nifti2 file:

2   0   1   0   00032   0   6   0

It looks like the [extension] and [esize, ecode] are offset by 4
bytes, and start at 544.  Is that right?  Do you agree that they
should start at 540 instead?
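To spell out the arithmetic, here is a tiny sketch of the offsets I expect (my own helper, just restating the layout above):

```python
NIFTI1_HDR_SIZE = 348   # NIfTI-1 header length in bytes
NIFTI2_HDR_SIZE = 540   # NIfTI-2 header length in bytes

def extension_offsets(header_size):
    # The 4-byte extension flag follows the header immediately, and the
    # 8 bytes of (esize, ecode) for the first extension follow the flag.
    flag_start = header_size
    first_esize_ecode = header_size + 4
    return flag_start, first_esize_ecode

print(extension_offsets(NIFTI1_HDR_SIZE))  # (348, 352)
print(extension_offsets(NIFTI2_HDR_SIZE))  # (540, 544)
```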

Thanks a lot,

Matthew