You can then use the publication release ID to query all data-objects that belong to this release (optionally filtering by subject) and download each one.
```bash
for id in $(bl data query --limit 200 \
    --pub 5bab993aa918ae0027024192 \
    --subject 0001 \
    --json | jq -r ".[]._id"); do
    bl data download $id
done
```
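If you prefer to script this in Python, a minimal sketch that drives the same `bl` CLI commands through `subprocess` might look like the following. It assumes only what the shell loop above already shows: that `bl data query --json` prints a JSON array of objects with an `_id` field, and that `bl data download` takes that ID.

```python
#!/usr/bin/python3
# minimal Python equivalent of the shell loop above, driving the same
# `bl` CLI commands through subprocess; the release ID is the example
# value used throughout this section
import json
import subprocess

# run the same query as above and capture its JSON output
out = subprocess.run(
    ['bl', 'data', 'query', '--limit', '200',
     '--pub', '5bab993aa918ae0027024192',
     '--subject', '0001', '--json'],
    check=True, capture_output=True, text=True).stdout

# download each matching data-object by its ID
for obj in json.loads(out):
    subprocess.run(['bl', 'data', 'download', obj['_id']], check=True)
```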
So far we have described how to upload and download data stored in Brainlife's project archive. On brainlife.io, Apps run inside a "process", and data generated there can be "archived" to the project archive. Sometimes you may want to download files generated inside a process directly, since a process can contain extra files that were never archived. Accessing process data also lets you specify which files or directories to download, rather than fetching the entire .tar.gz content from the project archive.
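If you already have a task ID, you can fetch a single directory from that task without any query step. The following is a minimal sketch using the same download endpoint as the full script below; `TASKID` is a placeholder, and the `freesurfer/output/stats` path is the example path used in that script.

```python
#!/usr/bin/python3
# minimal sketch: download one directory from a single known task;
# TASKID is a hypothetical placeholder, and the output path is the
# example used in the full script below
import os
import requests

# load the jwt token (run `bl login` to create this file)
jwt = open(os.environ['HOME'] + '/.config/brainlife.io/.jwt').read()

taskid = 'TASKID'  # hypothetical task ID
url = ('https://brainlife.io/api/amaretti/task/download/' + taskid +
       '/freesurfer/output/stats?at=' + jwt)
res = requests.get(url, allow_redirects=True)
res.raise_for_status()
open('stats.tar.gz', 'wb').write(res.content)
```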
The following Python script demonstrates how you can query for existing processes (tasks) and download content stored under each task.
```python
#!/usr/bin/python3
import requests
import os
import json

# load the jwt token (run `bl login` to create this file)
jwt_file = open(os.environ['HOME'] + '/.config/brainlife.io/.jwt', mode='r')
jwt = jwt_file.read()

# query task records
find = {
    '_group_id': '851',  # see project detail page
    'service': 'brainlife/app-freesurfer',
    #'service_branch': '0.0.5',
    'status': 'finished'
}
params = {
    'limit': 500,
    'select': 'config._inputs.meta',  # for subject id
    'find': json.JSONEncoder().encode(find)
}
res = requests.get('https://brainlife.io/api/amaretti/task', params=params,
                   headers={'Authorization': 'Bearer ' + jwt})
if res.status_code != 200:
    raise Exception("failed to download datasets list:" + str(res.status_code))

# loop over each task and download its "output/stats" directory
tasks = res.json()["tasks"]
for task in tasks:
    taskid = task["_id"]
    subject = task["config"]["_inputs"][0]["meta"]["subject"]
    print(taskid, subject)
    url = 'https://brainlife.io/api/amaretti/task/download/' + taskid + '/freesurfer/output/stats?at=' + jwt
    res = requests.get(url, allow_redirects=True)
    open(subject + '.tar.gz', 'wb').write(res.content)
```
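Once the downloads finish, each `<subject>.tar.gz` can be unpacked with the standard library. A minimal follow-up sketch (one output directory per subject is a choice made here, not something the API requires):

```python
#!/usr/bin/python3
# minimal follow-up sketch: unpack each downloaded <subject>.tar.gz
# into its own directory using only the standard library
import glob
import os
import tarfile

for archive in glob.glob('*.tar.gz'):
    subject = archive[:-len('.tar.gz')]
    os.makedirs(subject, exist_ok=True)
    with tarfile.open(archive) as tar:
        tar.extractall(subject)  # e.g. the freesurfer output/stats files
    print('extracted', archive, '->', subject)
```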