Kernel API
- Updated on 02 Apr 2025
The Kernel API allows you to automate repeated tasks programmatically.
To get started, have your Organization Owner generate an API key on the Organization Settings page.
External/customer-facing API
Routes for:
- All dataset IDs + metadata in a study
- Run a pipeline, given a dataset ID and pipeline name
- Status of the most recent pipeline run and signed URLs (if present), given a dataset ID and pipeline name
NOTE:
Each route requires an API key in the header, such as:
headers={"Authorization": f"{api_key}"}
An Organization Owner can generate an API key on the Organization Settings page.
Common ways to use Kernel API
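Before calling any of the routes below, attach the API key to every request. A minimal sketch using a requests session (the KERNEL_API_KEY environment variable name and the BASE_URL constant are assumptions for illustration, not part of the API):

import os

import requests

# Sketch: read the key once and attach it to every request via a session.
# The environment variable name is an assumption for this example.
api_key = os.environ["KERNEL_API_KEY"]

session = requests.Session()
session.headers.update({"Authorization": f"{api_key}"})

BASE_URL = "https://api.kernel.com"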
List all dataset IDs and metadata in a study:
list_datasets(study_id: str) -> Dict[str, List[dict]]
/api/v1/study/{study_id}/datasets
- Parameters:
- study_id (str) - study id
- Returns:
- All dataset IDs and metadata in a study
- Return Type:
- Dict[str, List[dict]]
{
    "datasets": [
        {
            "id": "00000000000000000000000000000000",
            "meta": {},  # possible keys include "description", "name", "experiment"
            "participant": {
                "id": "00000000000000000000000000000000",
                "participant_id": "00000000000000000000000000000000",
                "created_at": 0.0,  # seconds since epoch
                "active": True,  # participant active status
                "pending": False,  # participant pending status
                "status": "active"  # participant status in the study
            },
            "created_date": 0.0,  # seconds since epoch
            "started_at": 0.0,  # seconds since epoch
            "stopped_at": 0.0  # seconds since epoch
        }
    ]
}
Examples
>>> import requests
>>> response = requests.get("https://api.kernel.com/api/v1/study/000000000000000/datasets", headers={"Authorization": f"{api_key}"})
>>> assert response.status_code == 200
>>> datasets = response.json()["datasets"]
>>> dataset_ids = [dataset["id"] for dataset in datasets]
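A thin helper matching the list_datasets signature above might look like the following sketch (passing the API key explicitly and the BASE_URL constant are conveniences for the example, not part of the API):

from typing import Dict, List

import requests

BASE_URL = "https://api.kernel.com"

def list_datasets(study_id: str, api_key: str) -> Dict[str, List[dict]]:
    # GET /api/v1/study/{study_id}/datasets with the required API key header.
    response = requests.get(
        f"{BASE_URL}/api/v1/study/{study_id}/datasets",
        headers={"Authorization": f"{api_key}"},
    )
    response.raise_for_status()
    return response.json()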
Available pipelines:
"analysis_eeg"
"analysis_nirs_epoched"
"analysis_nirs_glm"
"analysis_task"
"qc_eeg"
"qc_nirs_basic"
"qc_nirs"
"qc_syncbox"
"reconstruction"
"pipeline_snirf_gated"
"snirf_hb_moments"
"snirf_moments"
Get the status of the most recent pipeline run for a dataset and asset URLs if available:
pipeline_status(study_id: str, dataset_id: str, pipeline_name: str) -> dict
/api/v1/study/{study_id}/dataset/{dataset_id}/pipeline/{pipeline_name}/status
- Parameters:
- study_id (str) - study id
- dataset_id (str) - dataset id
- pipeline_name (str) - pipeline name
- Returns:
- Status of the most recent pipeline run and asset URLs if available
- Return Type:
- dict
{
    "job_id": "00000000000000000000000000000000",
    "status": "SUCCEEDED",
    "signed_urls": {
        "urls": {
            "filename1": "https://someurl"
        },
        "sizes": {
            "filename1": 123.1
        },
        "batch_job_id": "00000000000000000000000000000000",
        "execution_id": "00000000000000000000000000000000"
    }
}
Examples
>>> response = requests.get("https://api.kernel.com/api/v1/study/000000000000000/dataset/000000000000000/pipeline/qc_nirs/status", headers={"Authorization": f"{api_key}"})
>>> assert response.status_code == 200
>>> status = response.json()
>>> assert status["status"] == "SUCCEEDED"
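Once a run has succeeded, the signed URLs in the response can be downloaded directly; a minimal sketch (the download_assets helper and its file handling are illustrative, not prescribed by the API):

import os

import requests

def download_assets(status: dict, out_dir: str = ".") -> None:
    # `status` is assumed to be the JSON body shown above.
    urls = status.get("signed_urls", {}).get("urls", {})
    for filename, url in urls.items():
        response = requests.get(url)
        response.raise_for_status()
        with open(os.path.join(out_dir, filename), "wb") as f:
            f.write(response.content)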
Run a pipeline for a dataset:
run_pipeline(study_id: str, dataset_id: str, pipeline_name: str) -> Dict[str, Any]
/api/v1/study/{study_id}/dataset/{dataset_id}/pipeline/{pipeline_name}
- Parameters:
- study_id (str) - study id
- dataset_id (str) - dataset id
- pipeline_name (str) - pipeline name
- Returns:
- Job ID of the pipeline run
- Return Type:
- Dict
Response json structure:
{
"job_id": "00000000000000000000000000000000"
}
Examples
>>> response = requests.post("https://api.kernel.com/api/v1/study/00000000000000/dataset/00000000000000/pipeline/qc_nirs", headers={"Authorization": f"{api_key}"})
>>> assert response.status_code == 200
>>> job_id = response.json()["job_id"]
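Putting the routes together, a typical workflow starts a run and then polls the status route until it finishes. A sketch under stated assumptions (the polling interval, the terminal states other than SUCCEEDED, and the run_and_wait helper are assumptions, not documented behaviour):

import time

import requests

BASE_URL = "https://api.kernel.com"

def run_and_wait(study_id: str, dataset_id: str, pipeline_name: str,
                 api_key: str, poll_seconds: int = 30) -> dict:
    headers = {"Authorization": f"{api_key}"}
    base = f"{BASE_URL}/api/v1/study/{study_id}/dataset/{dataset_id}/pipeline/{pipeline_name}"

    # Start the run.
    response = requests.post(base, headers=headers)
    response.raise_for_status()
    job_id = response.json()["job_id"]

    # Poll the status route for the most recent run.
    while True:
        response = requests.get(f"{base}/status", headers=headers)
        response.raise_for_status()
        status = response.json()
        if status["status"] == "SUCCEEDED":
            return status
        if status["status"] in {"FAILED", "CANCELLED"}:  # assumed terminal states
            raise RuntimeError(f"Pipeline {pipeline_name} ended with {status['status']} (job {job_id})")
        time.sleep(poll_seconds)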