Kernel API
Updated on 26 Sep 2024
The Kernel API allows you to automate repeated tasks programmatically.
To get started, request an API key from Kernel Support for each study you want to access by sending the Study ID. To find the Study ID, navigate to the study in Portal and look for the ID in the URL after /studies/.
Example (the Study ID is the segment after /studies/): https://portal.kernel.com/organizations/27e2118c-e13a-424e-aad8-415db5bd3245/studies/27e2118c-e13a-424e-aad8-415db5bd3245/datasets
External/Customer-facing API
Routes for:
- listing all dataset IDs and metadata in a study
- running a pipeline, given a dataset ID and pipeline name
- getting the status of the most recent pipeline run (and signed URLs, if present), given a dataset ID and pipeline name
NOTE:
Each route requires an API key in the header, such as:
headers={"Authorization": f"{api_key}"}
Contact Kernel Support to get your API key.
Common ways to use Kernel API
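The examples below assume the following setup; the api_key value is a hypothetical placeholder for the key you receive from Kernel Support.
import requests

api_key = "your-api-key"  # placeholder; use the key from Kernel Support

# every route requires the API key in the Authorization header
headers = {"Authorization": f"{api_key}"}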
List all dataset IDs and metadata in a study:
list_datasets(study_id: str) --> Dict[str, List[dict]]
/api/v1/study/{study_id}/datasets
- Parameters:
- study_id (str) - study id
- Returns:
- all dataset IDs and metadata in a study
- Return Type:
- Dict[str, List[dict]]
{
    "datasets": [
        {
            "id": "00000000000000000000000000000000",
            "meta": {},  # possible keys include "description", "name", "experiment"
            "participant": {
                "id": "00000000000000000000000000000000",
                "participant_id": "00000000000000000000000000000000",
                "created_at": 0.0,  # seconds since epoch
                "active": True,  # participant active status
                "pending": False,  # participant pending status
                "status": "active",  # participant status in the study
            },
            "created_date": 0.0,  # seconds since epoch
            "started_at": 0.0,  # seconds since epoch
            "stopped_at": 0.0,  # seconds since epoch
        }
    ]
}
Examples
>>> response = requests.get("https://api.kernel.com/api/v1/study/000000000000000/datasets", headers=headers)
>>> assert response.status_code == 200
>>> datasets = response.json()["datasets"]
>>> dataset_ids = [dataset["id"] for dataset in datasets]
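Building on the response structure documented above, a short sketch that keeps only the datasets whose participant is marked active (field names come from the documented response):
>>> active_dataset_ids = [
...     dataset["id"]
...     for dataset in datasets
...     if dataset["participant"]["active"]  # participant active status
... ]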
Get the status of the most recent pipeline run for a dataset and asset URLs if available:
pipeline_status(study_id: str, dataset_id: str, pipeline_name: str) --> dict
/api/v1/study/{study_id}/datasets/{dataset_id}/pipeline/{pipeline_name}/status
- Parameters:
- study_id (str) - study id
- dataset_id (str) - dataset id
- pipeline_name (str) - pipeline name
- Returns:
- status of the most recent pipeline run and asset URLs if available
- Return Type:
- dict
{
    "job_id": "00000000000000000000000000000000",
    "status": "SUCCEEDED",
    "signed_urls": {
        "urls": {
            "filename1": "https://someurl"
        },
        "sizes": {
            "filename1": 123.1
        },
        "batch_job_id": "00000000000000000000000000000000",
        "execution_id": "00000000000000000000000000000000"
    }
}
Examples
>>> response = requests.get("https://api.kernel.com/api/v1/study/000000000000000/datasets/000000000000000/pipeline/qc_eeg/status", headers=headers)
>>> assert response.status_code == 200
>>> status = response.json()
>>> assert status["status"] == "SUCCEEDED"
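Once a run reports SUCCEEDED, the signed_urls block documented above can be used to fetch the assets. A minimal sketch, assuming the urls mapping holds plain HTTPS links that requests can download without additional auth:
>>> for filename, url in status["signed_urls"]["urls"].items():
...     download = requests.get(url)  # signed URLs are assumed to need no extra headers
...     download.raise_for_status()
...     with open(filename, "wb") as f:
...         f.write(download.content)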
Run a pipeline for a dataset:
run_pipeline(study_id: str, dataset_id: str, pipeline_name: str) --> Dict[str, Any]
/api/v1/study/{study_id}/datasets/{dataset_id}/pipeline/{pipeline_name}
- Parameters:
- study_id (str) - study id
- dataset_id (str) - dataset id
- pipeline_name (str) - pipeline name
- Returns:
- job ID of the pipeline run
- Return Type:
- Dict
Available pipelines:
"analysis_eeg"
"analysis_nirs_epoched"
"analysis_nirs_glm"
"analysis_task"
"qc_eeg"
"qc_nirs_basic"
"qc_nirs"
"qc_syncbox"
"reconstruction"
"pipeline_snirf_gated"
"snirf_hb_moments"
"snirf_moments"
Response JSON structure:
{
"job_id": "00000000000000000000000000000000"
}
Examples
>>> response = requests.post("https://api.kernel.com/api/v1/study/000000000000000/datasets/000000000000000/pipeline/qc_eeg", headers=headers)
>>> assert response.status_code == 200
>>> job_id = response.json()["job_id"]
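Putting the two routes together, a minimal end-to-end sketch that starts a run and polls the status route until it reports SUCCEEDED. The 30-second polling interval and the qc_eeg pipeline are illustrative assumptions; you may want a timeout and handling for non-success terminal states:
>>> import time
>>> base = "https://api.kernel.com/api/v1/study/000000000000000/datasets/000000000000000/pipeline/qc_eeg"
>>> job_id = requests.post(base, headers=headers).json()["job_id"]
>>> while True:
...     status = requests.get(f"{base}/status", headers=headers).json()
...     if status["status"] == "SUCCEEDED":
...         break
...     time.sleep(30)  # assumed polling interval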