Kernel Unity Tasks overview

01 Aug 2024

Tasks are predefined stimuli or instructions that you can present to a participant during a recording. By using precise, repeatable tasks, you can ensure data are comparable across multiple datasets (for example, recorded under different circumstances) and across multiple participants.

For the timing of Task stimuli to be synchronized and recorded alongside your Flow2 data during a recording, Tasks must be created using a specific framework and run on the same computer being used to acquire data. You can create your own Tasks using the Kernel Tasks SDK, or you can use one of the Kernel-built Unity Tasks.

The Tasks in the menu are described below.

NOTE:
The provided Tasks may be presented by configuring your data acquisition computer with a participant-facing monitor in addition to the main monitor. This can be helpful if you would like to view the Portal Flow UI on the main monitor while the participant completes the Task.

NOTE:
When using the pre-built Unity Tasks, it is strongly recommended that you use the Chrome browser. Other browsers are not guaranteed to work.

General Physiology tasks:

  • Breath hold: Participant breathes in sync with stimuli presented on the screen. We expect to see changes in oxyhemoglobin and deoxyhemoglobin that are similar across the entire head, and synced to the breathing and breath holding. Learn more about Breath Hold.

Sensory tasks:

  • Finger tapping: Participant taps their thumbs to their fingertips as prompted by onscreen stimuli. We expect to see activation in the motor cortex. Here's a study using a similar task. 
  • Auditory: Participant listens to an interleaved series of white noise, clips from Ted talks, and silence. We expect to see activation in the auditory cortex. Here's a study measuring auditory responses from fNIRS.

Cognitive tasks:

  • Stroop: Participants are presented with color words (i.e., red, yellow, or blue). They are asked to read the word and quickly report the color of the word's font (i.e., red, yellow, or blue). This task targets attention and emotion processing systems. We expect to see right prefrontal activation. Learn more about Stroop.
  • Go/No Go: A pattern of images appear onscreen sequentially. Participant responds only when certain images appear. This task targets inhibitory control systems. We expect to see prefrontal activation. Here's a study using a similar task. 
  • N-Back: A pattern of numbered playing cards appears onscreen. Participant responds if the current card matches one that appeared a designated number of cards prior; for example, in a 2-back version, the participant responds when the current card matches the card shown two cards earlier. This task targets working memory systems. We expect to see prefrontal activation. Learn more about N-Back. Here's a study using a similar task.
  • Verbal Fluency (VFT): Participants are given a letter (or semantic word category). They are asked to produce as many words as they can that start with the given letter (or belong to the given category) over a 30-60 second period.  This process is repeated for different letters and categories. Here’s a study using a similar task.
  • Resting State: Participants passively watch a 7 minute abstract video, called Inscapes, that is designed to measure brain activity during rest. This task targets the default mode network. Learn more about the origin of Inscapes.
  • Emotional Stroop: Participants are presented with words from different emotional categories (e.g., sad, happy, fear). They are asked to read the word and to quickly report the color of the text (i.e., red, yellow, or blue). This task targets attention and emotion processing systems. We expect to see prefrontal activation. Learn more about Stroop. Here are some studies using an emotional version of the Stroop task.
  • Wisconsin Card Sorting: Participants are presented with a deck of cards. Each card face varies in terms of number of objects, object shape, and object color. They are instructed to sort the cards based on an unknown rule, which will change throughout the task. The participants must discern the correct sorting criterion through trial and error. This task uses executive functions. We expect to see prefrontal activation. Learn more about Wisconsin Card Sorting. Here are some studies using a similar task.
  • Multi-Source Interference Task (MSIT): Participants are presented with a rapid stream of sets of three numbers, two of which match and one of which is different. The participant must ignore the position of the odd-ball number and report its value. Learn more about the MSIT task and the studies that use this paradigm here.

Other:

  • Generic: In this task, participants are exposed to a predefined sequence of commands presented on the screen. You can input JSON formatted as an array of blocks, where each block contains specific instructions (an illustrative sketch follows below). This task can be used to create a custom block-design task with experimental and rest periods, and it will send timestamped events based on the prompts given. Note that the task does not accept user feedback.

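The exact JSON schema accepted by the Generic task is not documented here. Purely as an illustration of the block-array idea, a block design could be assembled and serialized with a short Python script like the one below; the field names (name, duration_s, prompt) and values are hypothetical, so consult the Generic task documentation for the actual format.

    import json

    # Hypothetical block design for the Generic task. The field names below
    # (name, duration_s, prompt) are illustrative assumptions, not the real schema.
    blocks = [
        {"name": "rest",       "duration_s": 30, "prompt": "Relax and fixate on the cross."},
        {"name": "experiment", "duration_s": 20, "prompt": "Tap your right thumb to each fingertip."},
        {"name": "rest",       "duration_s": 30, "prompt": "Relax and fixate on the cross."},
        {"name": "experiment", "duration_s": 20, "prompt": "Tap your left thumb to each fingertip."},
    ]

    # Serialize the array of blocks to JSON for pasting into the Generic task input.
    print(json.dumps(blocks, indent=2))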

To launch a Task from Portal Flow UI:

  1. In the Data Acquisition Menu, click Launch Tasks.
    The Unity Task menu opens in a new tab. 
  2. Click the Task you would like to run. This will open a Task-specific menu (example below).
  • Click the Instructions button to view instructions for the Task.
  • Click the Practice button to start a short sample run-through of the Task. 
    • Practice versions of the Task may give participants additional feedback to help them learn how to do the Task.
  • Click the Start button to begin the formal Task.
  • Click the Settings icon to edit the parameters of the Task. Every Task has slightly different parameters; settings may be modified if desired or left at their defaults. In the example Task below (Go/No-Go), there are settings for the following (a rough sketch of how these settings combine into total task length follows this list):
    • Start Rest: baseline rest duration (seconds)
    • Average Fixation: length of rest between blocks (seconds)
    • Stim Duration: how long each go/no-go stimulus is displayed (seconds)
    • Trial Duration: total trial duration (seconds), where a Trial = Stim Duration + inter-trial interval
    • Number of Blocks: number of blocks, where one block is a collection of 24 trials. A block consists either of all go trials or a mix of go and no-go trials.
    • Session Number: refers to versions of the task, where each version has a different ordering of stimuli within a block
    • Play Sound: adds sound feedback for gamification
    • Show Score: adds a dynamic score visible during the task for gamification

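As a back-of-envelope illustration only, and not an exact specification of the task's timing, the Go/No-Go settings above combine roughly into a total task length as sketched below; the example values are arbitrary.

    # Rough, illustrative estimate of total Go/No-Go task length from its settings.
    # The formula is an assumption based on the parameter descriptions above,
    # not the task's actual timing logic.
    TRIALS_PER_BLOCK = 24  # per the description, one block is a collection of 24 trials

    def estimate_task_seconds(start_rest, average_fixation, trial_duration, number_of_blocks):
        """Approximate length: baseline rest plus, per block, its trials and a fixation rest."""
        block_seconds = TRIALS_PER_BLOCK * trial_duration + average_fixation
        return start_rest + number_of_blocks * block_seconds

    # Example with arbitrary values: 15 s baseline, 20 s fixation, 2 s trials, 4 blocks.
    print(estimate_task_seconds(15, 20, 2, 4))  # -> 287 seconds, just under 5 minutes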

TIP:
While a Task is running, you can press Q or Esc to return to the Task-specific menu.

TIP:
While running a Task, be sure to move the mouse pointer away from any onscreen stimuli to avoid distracting the participant. Also, be careful not to click outside the Task window while a Task is running, or keyboard inputs will not be logged.

You can also run Tasks from a separate, external device connected to the data acquisition computer via the Sync Accessory Box. To learn more, see Using the Sync Accessory Box.

TIP:
Be sure to select a Task name associated with the Task in the Data Acquisition section of the Portal Flow UI, so that the metadata in your dataset identifies the Task performed by the participant. To learn more, see Adding Task names.