Release 1.1.0

HippocampusGirl committed Apr 19, 2021
2 parents 546e2d9 + 427965a commit b8015dc
Showing 10 changed files with 168 additions and 40 deletions.
68 changes: 66 additions & 2 deletions CHANGELOG.md
@@ -1,5 +1,69 @@
# Changelog

## 1.1.0 (April 18th 2021)

With many thanks to @jstaph for contributions

### New features and enhancements

- Create high-performance computing cluster submission scripts for Torque/PBS
  and SGE clusters as well (#71)
- Calculate additional statistics, such as heterogeneity
  (<https://doi.org/fzx69f>), and a test of whether data are missing completely
  at random via logistic regression (#67); a sketch of such a test follows this
  list
- Always enable ICA-AROMA even when its outputs are not required for feature
extraction so that its report image is always available for quality assessment
(#75)
- Support loading presets or plugins that may make it easier to do harmonized
analyses across many sites (#8)
- Support adding derivatives of the HRF to task-based GLM design matrices
- Support detecting the amount of available memory when running as a cluster
  job, or when running as a container with a memory limit, such as when using
  Docker on Mac; a sketch of one detection approach follows this list
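
As a rough illustration of the missing-completely-at-random test mentioned
above (made-up data; `statsmodels` and `scipy` are used for illustration and
are not necessarily what HALFpipe uses internally):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
covariates = rng.normal(size=(100, 2))     # e.g. standardized age and site
is_missing = rng.integers(0, 2, size=100)  # 1 where a phenotype value is missing

# Regress the missingness indicator on the covariates and compare against an
# intercept-only model with a likelihood-ratio test
full = sm.Logit(is_missing, sm.add_constant(covariates)).fit(disp=False)
null = sm.Logit(is_missing, np.ones((100, 1))).fit(disp=False)

lr_statistic = 2 * (full.llf - null.llf)
p_value = stats.chi2.sf(lr_statistic, df=covariates.shape[1])
# a small p-value is evidence against the data being missing completely at random
```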
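
Likewise, a minimal sketch of how a container memory limit can be detected
(cgroup-v1 path conventions assumed; the function and its fallback are
illustrative, not HALFpipe's actual code):

```python
import os

def detect_available_memory_bytes():
    # Containers such as Docker on Mac expose their memory limit through the
    # cgroup filesystem (cgroup v1 path assumed here)
    path = "/sys/fs/cgroup/memory/memory.limit_in_bytes"
    try:
        with open(path) as file_pointer:
            limit = int(file_pointer.read())
        if limit < 2 ** 60:  # values this large mean that no limit is set
            return limit
    except OSError:
        pass
    # Fall back to the total physical memory of the host
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
```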

### Maintenance

- Add type hints to the code. This allows a type checker like `pyright` to flag
  possible error sources ahead of time, making development more efficient; a
  small illustration follows this list
- Add `openpyxl` and `xlsxwriter` dependencies to support reading/writing Excel
XLSX files
- Update `numpy`, `scipy` and `nilearn` versions
- Add additional automated tests
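
A small illustration of what a type checker catches (an invented snippet, not
from the codebase):

```python
def scale(values: list[float], factor: float) -> list[float]:
    return [value * factor for value in values]

scale([1.0, 2.0], "3")  # pyright flags this call before the code is ever run
```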

### Bug fixes

- Fix importing slice timing information from a file after going back to the
prompt via undo (#55)
- Fix a warning when loading task event timings from a MAT-file.
  `NiftiheaderLoader` tried to load metadata for it as it would for a NIfTI
  file (#56)
- Fix a `numpy` array broadcasting error when loading data from 3D NIfTI files
  that have somehow been marked as four-dimensional
- Fix a misunderstanding of the output value `resels` of FSL's `smoothest`
  command. The value refers to the size of a resel, not the number of them in
  the image. The helper function `_critical_z` now takes this into account
  (nipy/nipype#3316); a worked example follows this list
- Fix naming of output files in the `derivatives/halfpipe` and `grouplevel`
  folders so that capitalization is consistent with original IDs and names (#57)
- Fix the summary display after `BIDS` import to show the number of "subjects"
and not the number of "subs"
- Fix getting the required metadata fields for an image type by implementing a
helper function
- Fix outputting source files for the quality check web app (#62)
- Fix assigning field maps to specific functional images, which is done by a
  mapping between field map tags and functional image tags. The mapping is
  automatically inferred for BIDS datasets and manually specified otherwise
  (#66)
- Force re-calculation of `nipype` workflows after `HALFpipe` update so that
changes from the new version are applied in existing working directories as
well
- Do not fail task-based feature extraction if no events are available for a
particular condition for a particular subject (#58)
- Force using a recent version of the `indexed_gzip` dependency to avoid an
  error (#85)
- Improve loading of delimited data in the `loadspreadsheet` function
- Fix slice timing calculation in user interface
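
The `resels` fix above, illustrated with hypothetical numbers: `smoothest`
reports the size of one resel in voxels, so the resel count must be derived by
division.

```python
mask_volume_voxels = 228_483  # voxels in the analysis mask (hypothetical)
resel_size = 472.5            # voxels per resel, as reported by `smoothest`

resel_count = mask_volume_voxels / resel_size  # roughly 483.6 resels
# the old code mistakenly used the resel size where the count was needed
```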

## 1.0.1 (January 27th 2021)

### Maintenance
@@ -139,12 +203,12 @@
both images have the same number of volumes, the one with the alphabetically
last file name will be used.

- ## Maintenance
+ ### Maintenance

- Apply pylint code style rules.
- Refactor automated tests to use pytest fixtures.

- ## Bug fixes
+ ### Bug fixes

- Log all warning messages but reduce the severity level of warnings that are
known to be benign.
6 changes: 3 additions & 3 deletions README.md
@@ -90,8 +90,8 @@ approximately 5 gigabytes of storage.

| Container platform | Installation |
| ------------------ | ------------------------------------------------------------------------------------------------------------------ |
- | Singularity | `singularity pull docker://halfpipe/halfpipe:1.0.1` or `singularity pull docker://ghcr.io/halfpipe/halfpipe:1.0.1` |
- | Docker | `docker pull halfpipe/halfpipe:1.0.1` |
+ | Singularity | `singularity pull docker://halfpipe/halfpipe:1.1.0` or `singularity pull docker://ghcr.io/halfpipe/halfpipe:1.1.0` |
+ | Docker | `docker pull halfpipe/halfpipe:1.1.0` |

`Singularity` version `3.x` creates a container image file called
`HALFpipe_{version}.sif` in the directory where you run the `pull` command. For
@@ -121,7 +121,7 @@ downloaded your container.

| Container platform | Command |
| ------------------ | ----------------------------------------------------------------------- |
- | Singularity | `singularity run --no-home --cleanenv --bind /:/ext halfpipe_1.0.1.sif` |
+ | Singularity | `singularity run --no-home --cleanenv --bind /:/ext halfpipe_1.1.0.sif` |
| Docker | `docker run --interactive --tty --volume /:/ext halfpipe/halfpipe` |

You should now see the user interface.
20 changes: 9 additions & 11 deletions halfpipe/interface/resultdict/extract.py
@@ -39,22 +39,20 @@ def _list_outputs(self):

        resultdict_schema = ResultdictSchema()
        resultdict = resultdict_schema.load(self.inputs.indict)
+        assert isinstance(resultdict, dict)

        outdict = dict()

-        def _extract(keys):
-            for inkey in keys:
-                for f, v in resultdict.items():
-                    if inkey in v:
-                        outdict[key] = v[inkey]
-                        del v[inkey]
-                        return
-
        for key in self._keys:
-            keys = [key]
+            key_and_aliases = [key]
            if key in self._aliases:
-                keys.extend(self._aliases[key])
-            _extract(keys)
+                key_and_aliases.extend(self._aliases[key])
+            for key_or_alias in key_and_aliases:
+                for v in resultdict.values():
+                    if key_or_alias in v:
+                        outdict[key] = v[key_or_alias]
+                        del v[key_or_alias]
+                        return

        for key in self._keys:
            if key in outdict:
7 changes: 5 additions & 2 deletions halfpipe/io/metadata/slicetiming.py
@@ -53,6 +53,9 @@ def slice_timing_str(slice_times):


def str_slice_timing(order_str, n_slices, slice_duration):
-    orders = _get_slice_orders(n_slices)
+    order = _get_slice_orders(n_slices)[order_str]

-    return list(np.array(orders[order_str], dtype=np.float64) * slice_duration)
+    timings = np.zeros((n_slices,))
+    timings[order] = np.arange(n_slices) * slice_duration
+
+    return list(timings)
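
The fix inverts the slice-order permutation: `_get_slice_orders` appears to
list slice indices in acquisition order, and the old code wrongly used those
indices directly as per-slice acquisition positions. A small illustration with
made-up values:

```python
import numpy as np

# hypothetical acquisition sequence for 5 slices, odd slices first:
# slice 0 is acquired first, then slice 2, then 4, then 1, then 3
order = np.array([0, 2, 4, 1, 3])
slice_duration = 0.5

wrong = order * slice_duration  # old code: [0.0, 1.0, 2.0, 0.5, 1.5]

timings = np.zeros(5)
timings[order] = np.arange(5) * slice_duration
# new code: [0.0, 1.5, 0.5, 2.0, 1.0] -- timings[i] is when slice i is acquired
```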
56 changes: 56 additions & 0 deletions halfpipe/io/metadata/tests/test_slicetiming.py
@@ -0,0 +1,56 @@
# -*- coding: utf-8 -*-
# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
# vi: set ft=python sts=4 ts=4 sw=4 et:

from math import isclose

from ..slicetiming import str_slice_timing


def test_str_slice_timing():
    # based on
    # https://openneuro.org/datasets/ds000117/versions/1.0.4/file-display/task-facerecognition_bold.json

    order_str = "alternating increasing odd first"
    n_slices = 33
    slice_duration = 2.0 / n_slices

    timings = str_slice_timing(order_str, n_slices, slice_duration)

    reference = [
        0,
        1.0325,
        0.06,
        1.095,
        0.12,
        1.155,
        0.1825,
        1.215,
        0.2425,
        1.2775,
        0.3025,
        1.3375,
        0.365,
        1.3975,
        0.425,
        1.46,
        0.485,
        1.52,
        0.5475,
        1.58,
        0.6075,
        1.6425,
        0.6675,
        1.7025,
        0.73,
        1.7625,
        0.79,
        1.825,
        0.85,
        1.885,
        0.9125,
        1.945,
        0.9725,
    ]

    assert all(isclose(a, b, abs_tol=1e-2) for a, b in zip(timings, reference))
10 changes: 9 additions & 1 deletion halfpipe/io/parse/spreadsheet.py
@@ -38,7 +38,11 @@ def loadspreadsheet(file_name, extension=None, **kwargs) -> pd.DataFrame:
    with open(file_name, "rb") as file_pointer:
        file_bytes: bytes = file_pointer.read()

-    if extension in [".xls", ".xlsx"]:
+    if len(file_bytes) == 0:
+        # empty file means empty data frame
+        return pd.DataFrame()
+
+    elif extension in [".xls", ".xlsx"]:
        file_io = io.BytesIO(file_bytes)
        return pd.read_excel(file_io, **kwargs)

@@ -48,6 +52,10 @@

    else:
        encoding = chardet.detect(file_bytes)["encoding"]
+
+        if encoding is None:
+            encoding = "utf8"
+
        kwargs["encoding"] = encoding

        file_str = file_bytes.decode(encoding)
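
A brief usage sketch of the resulting behavior (the file name is hypothetical;
the import path follows the `metadata.py` diff below):

```python
from halfpipe.io.parse import loadspreadsheet

# an empty file now yields an empty data frame instead of raising an error,
# and text whose encoding cannot be detected is decoded as UTF-8
data_frame = loadspreadsheet("covariates.tsv")
print(data_frame.shape)
```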
3 changes: 3 additions & 0 deletions halfpipe/plugins/multiproc.py
@@ -118,6 +118,9 @@ def __init__(self, plugin_args=None):
        if self._keep != "all":
            self._rt = PathReferenceTracer()

+    def _postrun_check(self):
+        self.pool.shutdown(wait=False)  # do not block
+
    def _submit_job(self, node, updatehash=False):
        self._taskid += 1

14 changes: 3 additions & 11 deletions halfpipe/ui/metadata.py
@@ -13,11 +13,11 @@
)

import numpy as np
-import pandas as pd
from inflection import humanize
from marshmallow import fields

from .step import Step
+from ..io.parse import loadspreadsheet
from ..io.metadata import direction_code_str, slice_timing_str
from ..model import space_codes, slice_order_strs

@@ -93,16 +93,8 @@ def next(self, ctx):
        if self.result is not None:
            filepath = self.result

-            spreadsheet = pd.read_table(
-                filepath,
-                sep="\s+",
-                header=None,
-                names=["slice_times"],
-                index_col=False,
-                usecols=[0],
-                dtype=float,
-            )
-            valuearray = np.ravel(spreadsheet.slice_times.values).astype(np.float64)
+            spreadsheet = loadspreadsheet(filepath)
+            valuearray = spreadsheet.values.ravel().astype(np.float64)
            valuelist: List = list(valuearray.tolist())

value = self.field.deserialize(valuelist)
23 changes: 13 additions & 10 deletions halfpipe/watchdog.py
@@ -19,16 +19,19 @@ def loop(interval):
        while True:
            time.sleep(interval)

-            stacktrace = "".join(format_thread(mainthread))
-
-            rows = summary.summarize(muppy.get_objects())
-            memtrace = "\n".join(summary.format_(rows))
-
-            logger.info(
-                "Watchdog traceback:\n"
-                f"{stacktrace}\n"
-                f"{memtrace}"
-            )
+            try:
+                stacktrace = "".join(format_thread(mainthread))
+
+                rows = summary.summarize(muppy.get_objects())
+                memtrace = "\n".join(summary.format_(rows))
+
+                logger.info(
+                    "Watchdog traceback:\n"
+                    f"{stacktrace}\n"
+                    f"{memtrace}"
+                )
+            except Exception as e:
+                logger.error("Error in watchdog", exc_info=e)

    thread = Thread(target=loop, args=(interval,), daemon=True, name="watchdog")
    thread.start()
1 change: 1 addition & 0 deletions requirements.txt
@@ -8,6 +8,7 @@ scikit-learn >= 0.24.0
nilearn >= 0.7.1
odfpy
openpyxl
+xlsxwriter
nibabel >= 3.2.1
indexed-gzip >= 1.5.3
nipype == 1.6.0
