Data Export Pipelines (SNIRF and Reconstruction)
  • 26 Aug 2024

Article summary

Data export pipelines are one type of download available through the Kernel Portal at this time. These are accessed in the Pipelines tab for each dataset (see Downloading datasets). Data export pipelines allow you to export the TD-NIRS data into two different formats:

  • The SNIRF format, for channel-space data
  • The NIfTI format, for voxel-space data
NOTE:
Kernel is dedicated to continually enhancing our signal processing pipelines to maximize neural signal extraction while effectively filtering out various noise sources from the rich data streamed by the Flow device. The preprocessing of fNIRS data, particularly TD-fNIRS data, is an evolving area of research both within the academic community and at Kernel. This document provides a description of the current processing methods.


Any updates to the pipelines will be documented in the Release Notes. SNIRF and NIfTI exports are tagged with the version of the Portal pipeline that produced them, formatted as YYYY.MM.DD. This tag can be found under nirs/metaDataTags/KernelPortalVersion in the SNIRF files and in the “descrip” field of the NIfTI header.
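For reference, the version tag can be read programmatically. Below is a minimal sketch assuming the widely used h5py and nibabel libraries; the file names are hypothetical.

```python
import h5py
import nibabel as nib

# Read the Portal pipeline version from a SNIRF export (an HDF5 container).
with h5py.File("example_moments.snirf", "r") as f:   # hypothetical file name
    tag = f["nirs/metaDataTags/KernelPortalVersion"][()]
    print(tag.decode() if isinstance(tag, bytes) else tag)

# Read the same tag from the "descrip" field of a NIfTI export's header.
img = nib.load("example_hbo.nii.gz")                  # hypothetical file name
print(img.header["descrip"])
```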


SNIRF Pipelines

There are three pipelines that output SNIRF files: 

  • SNIRF: Moments pipeline
  • SNIRF: Hb Moments pipeline
  • SNIRF: Gated pipeline

SNIRF: Moments Pipeline

The time-of-flight data recorded by Kernel Flow is featurized into its lowest-order moments: total intensity (as in continuous-wave NIRS), mean time of flight (first moment), and variance of the time of flight (second central moment). This is minimally preprocessed data, which gives you flexibility in choosing which signal processing steps to apply in your analyses.

Below are the processing steps applied to the raw data before it is written into the SNIRF file.

1. Data Trimming

Motion artifacts frequently occur at the beginning and end of recording sessions. These artifacts are typically introduced at the beginning when the experimenter starts the recording before the participant is fully settled, and at the end when the participant starts to relax, knowing the session is over. To mitigate these artifacts, the first and last 5 seconds of each recording are removed. 

If task events are present within these initial or final 5-second intervals, the trimming is adjusted to ensure that at least 1 second of data is retained before the first event and at least 1 second of data is retained after the last event.
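As an illustration of this rule, here is a minimal sketch assuming NumPy arrays of sample times and task-event onsets in seconds; the trim_window helper is hypothetical and not part of any Kernel tooling.

```python
import numpy as np

def trim_window(t, event_onsets, trim_s=5.0, margin_s=1.0):
    """Drop the first/last trim_s seconds, but keep at least margin_s of data
    before the first task event and after the last one (hypothetical helper)."""
    start, stop = t[0] + trim_s, t[-1] - trim_s
    if len(event_onsets) > 0:
        start = min(start, event_onsets.min() - margin_s)
        stop = max(stop, event_onsets.max() + margin_s)
    # Never extend beyond the recording itself.
    return max(start, t[0]), min(stop, t[-1])

# Example: an event at t = 3 s pulls the start of the kept window back to 2 s.
t = np.arange(0, 600, 1 / 7.0)              # ~7 Hz sampling for 10 minutes
start, stop = trim_window(t, np.array([3.0]))
keep = (t >= start) & (t <= stop)
```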

2. Remove bad channels

In a 40-module helmet configuration, the data includes over 3500 channels formed between the 120 sources and all detectors within a 60mm range of each source. Depending on the participant, between 500 and 2500 of these channels will have a usable signal. We remove channels that do not have enough signal (not enough photons in the peak) or that have an oddly shaped distribution of times of flight (too wide, too shifted, or otherwise not matching a typical histogram shape).

Note that the source-detector distances for between-plate channels do not currently account for variability in head sizes, and are based on a standard placement for an average-sized head. 
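A hedged sketch of this kind of channel pruning is shown below; the thresholds and shape checks are illustrative placeholders, not Kernel's actual criteria.

```python
import numpy as np

def keep_channel(dtof, min_peak_counts=1000, max_width_bins=60, max_peak_bin=40):
    """Illustrative quality check for one DTOF histogram (photon counts per bin).
    All thresholds here are hypothetical placeholders."""
    peak = dtof.max()
    if peak < min_peak_counts:               # not enough photons in the peak
        return False
    if int(dtof.argmax()) > max_peak_bin:    # peak arrives implausibly late (shifted)
        return False
    width = int((dtof > 0.5 * peak).sum())   # rough full width at half maximum, in bins
    return width <= max_width_bins           # reject overly wide histograms

# channels: array of shape (n_channels, n_bins)
# good = np.array([keep_channel(h) for h in channels])
```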

3. Correct histogram noise floor

The baseline of each histogram is fitted and removed.
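The exact fitting procedure is not described here; one simple stand-in, shown purely as an assumption, is to estimate the floor from early bins that precede the laser pulse and subtract it.

```python
import numpy as np

def remove_noise_floor(dtof, n_baseline_bins=20):
    """Estimate the noise floor from the earliest bins (before the pulse arrives)
    and subtract it; a simplification of the fitted-baseline correction."""
    floor = np.median(dtof[:n_baseline_bins])
    return np.clip(dtof - floor, 0.0, None)
```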

4. Compute moments from histograms of photon counts (distribution of time of flight, DTOF)

For each histogram of photon arrival times, the following quantities are calculated:

  1. Total number of photons
  2. The mean time of flight (first moment)
  3. Variance of the time of flight (second central moment)

These quantities have been shown to be sensitive to absorption changes and have different depth sensitivity profiles. 
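For reference, these three quantities can be computed from a single histogram as follows (a minimal NumPy sketch; t holds the bin centers in seconds and counts the photon counts per bin):

```python
import numpy as np

def dtof_moments(t, counts):
    """Total counts, mean time of flight, and variance of the time of flight
    for one histogram of photon arrival times."""
    total = counts.sum()                                       # total number of photons
    mean_tof = (t * counts).sum() / total                      # first moment
    var_tof = ((t - mean_tof) ** 2 * counts).sum() / total     # second central moment
    return total, mean_tof, var_tof
```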

The Near-Infrared Spectroscopy (NIRS) data you download from the SNIRF: Moments pipeline are the results of the processing steps outlined above.

SNIRF: Hb Moments Pipeline

The SNIRF: Hb Moments pipeline builds upon the SNIRF: Moments pipeline to produce data that is “statistical analysis ready.” This pipeline translates the moments of the time of flight distributions recorded at two wavelengths to estimates of concentration changes for HbO (oxyhemoglobin) and HbR (deoxyhemoglobin) chromophores. Additionally, the pipeline includes motion correction and global signal regression to remove global artifacts and superficial physiological signals. Finally, the pipeline performs curve fitting on the longest within-module channels (26.5mm source-detector distance) to yield an absolute estimate of the concentrations of HbO and HbR per channel (median across session), which appears in the dataOffset field.

The processing steps below are continued from the steps in the SNIRF: Moments pipeline.

5. Convert moment changes to [HbO] and [HbR] concentration changes:

We estimate changes in HbO and HbR concentrations corresponding to observed moment changes by solving a linear system. This process leverages the sensitivities for the three moments, derived from a two-layer finite element method (FEM) slab model (with a first layer thickness of 12 mm) and the Modified Beer-Lambert Law (MBLL), which incorporates tabulated molar extinction coefficients (source).
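Schematically, for each channel this amounts to solving a small linear system that maps (ΔHbO, ΔHbR) to the six observed moment changes (three moments at two wavelengths). The sketch below shows only that structure; the sensitivity matrix is a random placeholder rather than the FEM-derived values used by the pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Moment changes for one channel, stacked over the two wavelengths:
# [d_intensity, d_mean_tof, d_variance] at wavelength 1, then at wavelength 2.
delta_moments = rng.normal(size=6)      # placeholder measurement vector

# Jacobian mapping (dHbO, dHbR) to the six moment changes. In the pipeline this
# combines FEM-derived moment sensitivities with tabulated molar extinction
# coefficients; random values stand in for it here.
A = rng.normal(size=(6, 2))

# Least-squares estimate of the concentration changes for this channel.
solution, *_ = np.linalg.lstsq(A, delta_moments, rcond=None)
d_hbo, d_hbr = solution
```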

6. Motion Artifact Correction:

This step implements the Temporal Derivatives Distribution Repair (TDDR) algorithm, as introduced by Fishburn et al. (2019), to effectively remove baseline shifts and spike artifacts caused by motion.

We then apply gradient standard deviation detection and cubic spline interpolation (Scholkmann et al., 2010) to remove any remaining spike artifacts in the data.

7. Global Signal Regression:

This step computes the mean signal across all short channels (8mm within-module channels) and regresses this signal out of the data from all channels. This effectively removes global artifacts, particularly superficial physiological artifacts.
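A minimal sketch of this regression, assuming a (time × channels) data matrix and a boolean mask that marks the short channels:

```python
import numpy as np

def regress_out_short_channel_mean(data, short_mask):
    """Regress the mean short-channel signal (plus an intercept) out of every channel.
    data: (n_times, n_channels) array; short_mask: boolean (n_channels,) array."""
    g = data[:, short_mask].mean(axis=1, keepdims=True)    # global regressor, (n_times, 1)
    design = np.hstack([g, np.ones_like(g)])               # regressor + intercept
    beta, *_ = np.linalg.lstsq(design, data, rcond=None)   # per-channel fit
    return data - design @ beta                            # residuals
```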

8. Curve fitting for absolute optical properties:

This step takes as input the baseline-corrected histograms (step 3 from the SNIRF: Moments pipeline).

The DTOF results from convolving the time-resolved TPSF with the IRF. Utilizing Flow2’s online IRF measurements, we employ a curve fitting technique to extract the absolute optical properties of the tissue beneath. Generating candidate TPSFs through an analytical solution of the diffusion equation for a homogeneous semi-infinite medium, we convolve these with the known IRF and compare them with the recorded DTOF. The search for optical properties is carried out using the Levenberg-Marquardt algorithm, focusing on fitting within the range spanning from 80% of the peak on the rising edge to 0.1% of the peak on the falling edge, with a refractive index set to 1.4. These absorption coefficient estimates are then converted to HbO and HbR concentrations. We run this algorithm for 100 evenly spaced, motion-free samples throughout the session, for all available long within-module channels, and report the median value across these 100 measurements.
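The sketch below illustrates only the fitting framework: model_tpsf is a placeholder for the analytical semi-infinite diffusion solution, and SciPy's least_squares with method="lm" (Levenberg-Marquardt) performs the search over the stated fit window.

```python
import numpy as np
from scipy.optimize import least_squares

def model_tpsf(t, mu_a, mu_sp):
    """Placeholder TPSF shape. The real pipeline evaluates the analytical solution
    of the diffusion equation for a homogeneous semi-infinite medium (n = 1.4)."""
    v = 3e11 / 1.4                           # speed of light in tissue, mm/s
    tp = np.clip(t, 1e-12, None)
    return tp ** 1.5 * np.exp(-(mu_a + 0.05 * mu_sp) * v * tp)

def fit_optical_properties(t, dtof, irf):
    """Fit (mu_a, mu_s') by convolving candidate TPSFs with the measured IRF and
    comparing to the recorded DTOF between 80% of the peak on the rising edge
    and 0.1% of the peak on the falling edge (Levenberg-Marquardt search)."""
    peak = dtof.max()
    lo = int(np.argmax(dtof >= 0.8 * peak))
    hi = len(dtof) - int(np.argmax(dtof[::-1] >= 0.001 * peak))

    def residuals(params):
        mu_a, mu_sp = params
        model = np.convolve(model_tpsf(t, mu_a, mu_sp), irf)[: len(t)]
        model *= peak / max(model.max(), 1e-30)   # crude amplitude normalization
        return (model - dtof)[lo:hi]

    return least_squares(residuals, x0=[0.01, 1.0], method="lm").x
```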

SNIRF: Gated Pipeline

The SNIRF: Gated Pipeline is intended for advanced users who prefer analyzing time-of-flight distributions directly, rather than relying solely on their moments. TD-fNIRS is typically analyzed using one of three approaches: moments, time gates, or curve fitting (Lange and Tachtsidis, 2019). The latter two approaches can be applied to the data provided in the SNIRF: Gated outputs.

This pipeline follows the same initial three steps as the SNIRF: Moments pipeline, and then proceeds as follows:

4. FFT-based Deconvolution 

We apply FFT-based deconvolution to remove each source’s instrument response function (IRF) from the measured distributions of times of flight (DTOFs), resulting in time point spread functions (TPSFs).

NOTE:
Deconvolution performs optimally for longer channels. Specifically, we've observed strong results when using within-module channels with the longest source-detector separation (26.5 mm).
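One common way to implement such a deconvolution is an FFT division with Wiener-style regularization; the sketch below is an illustration under that assumption, and the regularization constant is an arbitrary choice rather than the pipeline's.

```python
import numpy as np

def fft_deconvolve(dtof, irf, reg=1e-3):
    """Recover an approximate TPSF by dividing out the IRF in the frequency domain,
    with Wiener-style regularization to keep noise from blowing up."""
    n = len(dtof)
    D = np.fft.rfft(dtof, n)
    H = np.fft.rfft(irf, n)
    power = np.abs(H) ** 2
    inv = np.conj(H) / (power + reg * power.max())   # regularized inverse filter
    return np.fft.irfft(D * inv, n)
```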


Reconstruction Pipeline

The Reconstruction pipeline processes raw data to generate NIfTI files, providing separate outputs for HbO (oxyhemoglobin) and HbR (deoxyhemoglobin) concentrations. This pipeline is particularly beneficial for fMRI researchers familiar with NIfTI images, offering a seamless transition to analyzing data from the Flow helmet.

Initially, the raw data undergoes steps 1-4 as previously outlined for the SNIRF: Moments pipeline, which include data trimming, removing bad channels, and correcting the histogram noise floor.

The reconstruction algorithm then infers the concentrations of HbO and HbR in the tissue for every second of the recording. This algorithm maps the measured data to voxel space using a model-based approach with a regularized inverse model for time-resolved data. The numerical forward model, based on the diffusion approximation of photon propagation in tissue, is provided by the open-source toolbox NIRFAST. For all reconstructions, we currently use a head mesh based on the ICBM 2009b Nonlinear Asymmetric atlas.

The images are reconstructed in a voxelized basis with 4mm isotropic voxels, covering the entire space monitored by Flow modules.
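Schematically, the regularized inverse step solves y = J x for each time point and chromophore with Tikhonov regularization. The sketch below shows that structure only; J is a random placeholder standing in for the NIRFAST-derived sensitivity matrix on the atlas head mesh, and the dimensions and regularization strength are illustrative.

```python
import numpy as np

def reconstruct_frame(J, y, lam=0.1):
    """Tikhonov-regularized inverse: x = (J^T J + lam * I)^-1 J^T y,
    mapping channel measurements y to voxel values x for one time point."""
    n_vox = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n_vox), J.T @ y)

# Placeholder dimensions: e.g. 1500 usable channels and 2000 voxels of the 4mm grid.
rng = np.random.default_rng(0)
J = rng.normal(size=(1500, 2000))    # stand-in for the NIRFAST-based sensitivity matrix
y = rng.normal(size=1500)            # stand-in channel data for one second of recording
x = reconstruct_frame(J, y)
```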