Deep Learning Library for Single Cell Analysis
Fix `fetch_data` when a `cachedir` is provided but the parent directory has not yet been created.
Published by github-actions[bot] about 1 year ago
This PR adds the SpotNet dataset to the list of new authenticated datasets, following the established pattern. It also adds a `SpotNetExampleData` class, which loads the example data for the Polaris example notebooks. This class deviates slightly from the existing pattern for loading data in order to accommodate the different file types that need to be loaded. The SpotNet dataset has also been added to the datasets gallery in the documentation.
Published by release-drafter[bot] over 1 year ago
Closes #665. As noted there, the issue arises from pads of (0, 0), which subsequently lead to 0:0 slices in untiling, giving arrays with shape 0 for that dimension.
AFAICT, the only way to hit this corner case is when one of the dimensions is smaller than `model_image_shape` but the other is exactly `model_image_shape`. If both dimensions are >= `model_image_shape`, then `padding` should be `False`, if I'm following the logic correctly.
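A minimal numpy sketch of the failure mode described above (illustrative only, not the actual untiling code): a 0:0 slice yields an array with extent 0 along that axis.

```python
import numpy as np

tile = np.ones((128, 128))

# A (0, 0) pad can turn into a 0:0 slice during untiling,
# which produces an empty extent along that dimension:
empty = tile[0:0, :]
print(empty.shape)  # (0, 128)
```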
Bugfix
This PR updates the three notebooks that are associated with the tracking paper to match our current scripts for training and running the application. I tested each notebook to verify that everything runs.
This PR updates the models used by the `NuclearSegmentation` and `CellTracking` applications to the latest versions associated with the upcoming paper.
Update codebase to reflect the adoption of `ruff` instead of `pylint`.
General cleanup.
This PR applies `pyupgrade` (via `ruff`) to automatically modernize some coding patterns. The way this works: you tell `pyupgrade` what minimum version of Python you support (3.7 in our case), then it automatically applies linting patterns based on language features in the minimum supported version.
The changes here generally fall into the following categories:
- `from __future__` imports
The main improvement is removing cruft related to old Python versions - perhaps the most notable is the removal of the `from __future__` imports and related `del`s in the `__init__.py` files. The automatic switch to f-strings is also (IMO) a nice improvement.
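As an illustration (a hypothetical snippet, not code from this repo), pyupgrade-style rewrites look like the following: the `__future__` import is dropped and `%`-formatting becomes an f-string.

```python
# Before (Python 2 compatible):
#
#   from __future__ import absolute_import
#   name = "deepcell"
#   msg = "hello, %s" % name

# After pyupgrade with a 3.7 minimum version:
name = "deepcell"
msg = f"hello, {name}"
print(msg)  # hello, deepcell
```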
Published by github-actions[bot] over 1 year ago
Main highlights:
Warning: This PR is dependent on a tracking release after merging https://github.com/vanvalenlab/deepcell-tracking/pull/108.
This should help catch incompatibilities between unreleased versions of libraries in the deepcell ecosystem.
`deepcell-tf` depends on `deepcell-tracking` and `deepcell-toolbox`. If there is a change in one of these dependencies, there is no way to tell in the automated test runs whether this will break something in `deepcell-tf` until the underlying libraries are released. Testing against the dev branches will catch potential issues sooner, at the expense of being noisier and reducing test specificity (failures can originate from either deepcell-tf or the dependencies). Overall, however, I think this should improve our ability to keep things consistent across libraries.
find . -type f -exec sed -i "s/2016-2022/2016-2023/g" {} \;
`pydot` is listed as a dependency, but is not actually used in the project, so it should be safe to remove.
Decreasing the dependency footprint is always beneficial - doubly so in the case of `pydot`, which has not been actively maintained in a while; see e.g. networkx/networkx#5723
The built-in GitHub actions `checkout`, `setup-python`, and `cache` have all been updated to a later version of Node. There are now deprecation warnings for the previous versions in the actions logs.
General maintenance to keep the CI in good shape.
Note there may be other actions that need to be updated, but I'm starting with the main ones so I can see what remains in the logs after these updates.
My vote is to pin scikit-image, then do a patch release. For the next minor release the pin should be updated to >=0.19.
A followup to #648. With the deprecation of `export_model_to_tflite`, it is now the case that every function in the `export_utils` module has been deprecated. Therefore I propose to deprecate the entire module. We can do this using the module `__getattr__` to emit warnings if a user ever tries to access the two public names (i.e. `export_model` or `export_utils`).
In practice this means import patterns like:
>>> from deepcell.utils import export_model  # or export_utils
will now raise a deprecation warning as well.
The module `__getattr__` was added in Python 3.7 - see PEP 562 for details.
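A minimal sketch of the PEP 562 technique (hypothetical module body, not the actual deepcell code): a module-level `__getattr__` intercepts access to a deprecated name and emits a `DeprecationWarning`. Here the module is built in memory with `exec` just to keep the example self-contained.

```python
import types
import warnings

# Hypothetical contents of a deprecated module (e.g. an export_utils.py):
_SOURCE = '''
import warnings

def _export_model(model, path):
    """Stand-in for the real, deprecated implementation."""

def __getattr__(name):
    # PEP 562: called when normal module attribute lookup fails.
    if name == "export_model":
        warnings.warn(
            "export_model is deprecated; use tf.keras.models.save_model instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return _export_model
    raise AttributeError(f"module 'export_utils' has no attribute {name!r}")
'''

# Simulate "import export_utils" without touching the filesystem.
export_utils = types.ModuleType("export_utils")
exec(_SOURCE, export_utils.__dict__)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    fn = export_utils.export_model  # triggers the DeprecationWarning
print(len(caught))  # 1
```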
Further cleanup related to the `export_model` functions, all of which are deprecated in favor of using `tf.keras.models.save_model` directly.
Deprecate `export_utils.export_model_to_tflite`.
Notify users who may still be using this function to switch to `tf.keras.models.save_model`. Closes gh-645.
Adopt `ruff` as a linter for the project. See also: vanvalenlab/deepcell-toolbox#137 and vanvalenlab/deepcell-tracking#113.
Primarily to add automated linting for future code submissions, though this PR also contains a few minor fixups to address existing issues.
This one's a bit of a bear in terms of files touched and lines modified - I'm more than happy to split this up into smaller PRs to make review easier, just LMK!
Closes gh-643. If there's a reason not to use the `nsc2` input, then alternatively we can delete that var.
See gh-643 for context.
Add support for Python 3.10
General software updates. See also vanvalenlab/deepcell-tracking#111 and vanvalenlab/deepcell-toolbox#128
Fixes the failures in `deepcell-tf` for numpy v1.24 by updating to use the
Remove the upper bound on numpy.
Marking as draft for now, as this depends on vanvalenlab/deepcell-tracking#112 as well. The tracking tests in `deepcell-tf` will continue to fail until those changes make it into a release.
The major change is removing the pins to sphinx, docutils, etc. AFAICT the motivating factors for the pins are no longer relevant - see e.g. #320 and #526.
Some other minor changes include:
Sphinx 2.3.1 is 2 major releases behind stable - being pinned this far back will make it difficult to reliably change/update the docs.
Removing blurb from README about running with Python2/TensorFlow 1.
The chances of this working outside of a containerized environment are practically nil, and certainly not worth the effort for users.
I'm also using this change as a test for the RTD docs preview feature in CI.
Published by release-drafter[bot] almost 2 years ago
Resolve failures in deepcell due to code that depends on numpy features that were removed in numpy v1.24.
Updates the nuclear segmentation model from model-registry#34 and the tracking model from model-registry#36.
Published by release-drafter[bot] about 2 years ago
Bump version number for new release
Also includes a change from `m2r` to `m2r2` for our documentation pipeline. `m2r` is no longer being maintained, so it has been replaced with a fork with more active maintenance: https://github.com/CrossNox/m2r2
Published by release-drafter[bot] over 2 years ago
Updated the post-processing parameters for the Mesmer model. Also updates the notebook to describe how post-processing can be modified.
The newly retrained model has different parameters that give the best results. In addition, I've gotten questions from a few different people about how to tweak the model output; having it in the notebook will make it easy for people to see the effects.
Published by release-drafter[bot] over 2 years ago
Update `deepcell-tracking` to the new minor release.
The docstring for `format_output_mesmer` now correctly says "ValueError: if model output list is not len(4)", because the `format_output_mesmer` code had diverged from the doc in the expected length of the model output list.
Published by release-drafter[bot] over 2 years ago
Included functions to save datasets as tfrecords and load them into `tf.data.Dataset` objects.
As our training datasets grow, it is becoming difficult to load full datasets into memory. By introducing support for tfrecords, we can load portions of datasets from disk on the fly during training.
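The on-the-fly loading idea can be sketched without TensorFlow (hypothetical JSON-lines layout standing in for the tfrecord format, not the actual deepcell schema): records are written to disk, then yielded lazily in fixed-size batches so only one batch is resident in memory at a time - the role `tf.data.Dataset` plays in the real pipeline.

```python
import json
import tempfile
from pathlib import Path

def write_records(path, samples):
    # One JSON record per line; stands in for tfrecord serialization.
    with open(path, "w") as f:
        for sample in samples:
            f.write(json.dumps(sample) + "\n")

def stream_batches(path, batch_size):
    # Lazily yield fixed-size batches so the full dataset never
    # needs to fit in memory.
    batch = []
    with open(path) as f:
        for line in f:
            batch.append(json.loads(line))
            if len(batch) == batch_size:
                yield batch
                batch = []
    if batch:
        yield batch  # final partial batch

path = Path(tempfile.mkdtemp()) / "train.jsonl"
write_records(path, [{"id": i} for i in range(5)])
batches = list(stream_batches(path, batch_size=2))
print([len(b) for b in batches])  # [2, 2, 1]
```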
`BatchNormalization` or `LayerNormalization` in `GNNTrackingModel`
The TF_VERSION build arg has to be updated manually
This PR updates tensorflow to 2.8 and drops support for python 3.6. The following changes were necessary to make this upgrade possible:
- `tensorflow.python.keras` to `tensorflow.keras`, which was a change introduced with tensorflow 2.6
- `tensorflow.keras` to `keras`: `keras_parameterized`, `conv_utils`, `test_utils`
I retrained the nuclear model in the model-registry using this branch of deepcell and the performance was comparable.
Published by release-drafter[bot] over 2 years ago
Published by release-drafter[bot] over 2 years ago
Published by release-drafter[bot] over 2 years ago
Published by release-drafter[bot] over 2 years ago
Update the mesmer model from model-registry#15. I'll leave this PR as a draft until @ngreenwald has a chance to test post-processing parameters and determine if any changes are needed.
Pin `jinja2<3.1` in docs/requirements-docs.txt, since a newer `jinja2` release deprecates functions that are required for `sphinx`.
All-
Someone in the OME community recently ran into some confusion with the licensing of deepcell-tf. The license badge lists "Apache 2.0" but the LICENSE file states "Modified Apache 2.0". This proposes the change below to increase clarity.
All the best,
~Josh Moore
--- LICENSE-2.0.txt 2022-04-01 14:41:12.806279265 +0100
+++ LICENSE 2022-04-01 14:41:45.921931426 +0100
@@ -1,7 +1,5 @@
-
- Apache License
+ Modified Apache License
Version 2.0, January 2004
- http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
@@ -65,18 +63,19 @@
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
+ this License, each Contributor hereby grants to You a non-commercial,
+ academic perpetual, worldwide, non-exclusive, no-charge, royalty-free,
+ irrevocable copyright license to reproduce, prepare Derivative Works
+ of, publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form. For any other
+ use, including commercial use, please contact: [email protected].
3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
+ this License, each Contributor hereby grants to You a non-commercial,
+ academic perpetual, worldwide, non-exclusive, no-charge, royalty-free,
+ irrevocable (except as stated in this section) patent license to make,
+ have made, use, offer to sell, sell, import, and otherwise transfer the
+ Work, where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
@@ -174,6 +173,10 @@
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
+ 10. Neither the name of Caltech nor the names of its contributors may be
+ used to endorse or promote products derived from this software without
+ specific prior written permission.
+
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
Published by github-actions[bot] over 2 years ago
- `Track` and `concat_tracks` were imported from `deepcell.data.tracking`.
- Updated `deepcell/_version.py` to 2022.
- `GATConv` from `spektral`.
- The same models have been uploaded to the deepcell-models bucket in GCP for deployment through the kiosk.
- Updated `setup.py` and `requirements.txt`.
Published by willgraf almost 3 years ago
- Moved the `Track` and `concat_tracks` functions from `deepcell-tracking` to `deepcell.data.tracking`. They are really just `.trk` preprocessing and are unnecessary outside of `prepare_data`.
- Updated `temporal_slice` to not slice into padded frames.
- Bumped `deepcell-tracking` to 0.5.0.
- The `temporal_slice` fix should improve the precision metrics of the model by not training on padded data.
- Updated `prepare_dataset` so that it can process `.trks` files.
- Added a `filter_and_flatten` function.
- Added a `graph_layer` argument to the model call.
- Pinned `docutils` to 0.16 to resolve readthedocs build failures.
Published by willgraf almost 3 years ago
Updated `numpy` version constraints to match `setup.py`.
Published by github-actions[bot] about 3 years ago
Published by github-actions[bot] about 3 years ago
- `CellTracker` application.
- Renamed `lr` arguments to `learning_rate`.
Published by github-actions[bot] about 3 years ago
Moved tracking models from `featurenet.py` in the `model_zoo` to a new `tracking.py`, where a graph-based architecture for classification was also installed. This new model required 2 new "merge" layers and an additional loss function compatible with masking out portions of the adjacency matrix. The Tracking Application has been updated to take advantage of this new model and approach.
Add new `InferenceTimer` callback to measure inference time for a pre-defined number of samples:
Average inference speed per sample for 100 total samples: 0.00204s ± 0.00158s.
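A stand-alone sketch of such a timing callback (hypothetical API, not the actual deepcell `InferenceTimer`): it times each of a fixed number of samples and reports mean ± stdev per-sample inference speed in the format shown above.

```python
import statistics
import time

class InferenceTimer:
    """Time per-sample inference for a pre-defined number of samples."""

    def __init__(self, num_samples=100):
        self.num_samples = num_samples
        self.times = []

    def run(self, predict_fn, samples):
        # Time each sample individually with a high-resolution clock.
        for sample in samples[: self.num_samples]:
            start = time.perf_counter()
            predict_fn(sample)
            self.times.append(time.perf_counter() - start)

    def report(self):
        mean = statistics.mean(self.times)
        stdev = statistics.stdev(self.times) if len(self.times) > 1 else 0.0
        return (f"Average inference speed per sample for "
                f"{len(self.times)} total samples: {mean:.5f}s ± {stdev:.5f}s")

timer = InferenceTimer(num_samples=10)
timer.run(lambda x: x * 2, list(range(10)))  # stand-in for model inference
print(timer.report())
```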
Adds `preprocessing_fn` and `postprocessing_fn` as arguments to `__init__` to allow them to be overridden when creating applications.
Enables overriding the default `pad_mode` to use "reflect" by default.
Adds `pad_mode` as a new argument to `CytoplasmSegmentation.predict` with default value `"reflect"`.
Previously, the default values for the Mesmer post-processing kwargs were configured such that if the user passed any args to the app, all of the other defaults would be reset. This PR modifies the behavior so that only the kwargs specified by the user are changed; the other defaults remain unchanged.
This removes confusing behavior where users would think they're only overriding a single arg, when in fact they would be resetting all of the args back to the `deep_watershed` defaults, which are not the same as the `Mesmer` defaults.
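The fix amounts to a standard dict-merge pattern, sketched here with hypothetical defaults (not the actual Mesmer values): user-supplied kwargs override individual entries instead of replacing the whole set.

```python
# Hypothetical app-level defaults, for illustration only.
MESMER_DEFAULTS = {"maxima_threshold": 0.1, "interior_threshold": 0.3}

def postprocess_kwargs(user_kwargs=None):
    # Start from the app defaults, then layer user overrides on top,
    # so unspecified kwargs keep their app-level values.
    merged = dict(MESMER_DEFAULTS)
    merged.update(user_kwargs or {})
    return merged

print(postprocess_kwargs({"maxima_threshold": 0.05}))
# {'maxima_threshold': 0.05, 'interior_threshold': 0.3}
```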
- Added a `graph_layer` argument; currently supports GCN and GCS.
- Set `include_top` to `True` by default in `__create_semantic_head`.
- Removed `include_top` from the example notebook to avoid confusion.
- `data_generators.__all__`.
Modifies the applications to use an internal batch prediction function rather than the default `model.predict` batching functionality. Closes #538.
The default `model.predict` batch function creates a `tf.data.Dataset` object with all images, not just the specified batch size. This leads to memory issues on the GPU when batch processing tiles from large images.
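The internal batching described above can be sketched as follows (hypothetical helper and shapes, not the actual deepcell implementation): only `batch_size` images are handed to the model at a time, so the full stack is never materialized at once.

```python
import numpy as np

def predict_in_batches(predict_fn, images, batch_size=4):
    # Feed the model one slice of batch_size images at a time,
    # then stitch the per-batch outputs back together.
    outputs = []
    for start in range(0, len(images), batch_size):
        batch = images[start:start + batch_size]
        outputs.append(predict_fn(batch))
    return np.concatenate(outputs, axis=0)

images = np.zeros((10, 8, 8, 1))
# A stand-in for model.predict: add 1 to every pixel.
preds = predict_in_batches(lambda b: b + 1, images, batch_size=4)
print(preds.shape)  # (10, 8, 8, 1)
```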
Use `sed` to replace `tensorflow~=` with `tensorflow-cpu~=`. Including the `~` prevents other deps that start with the word `tensorflow` from getting replaced.
- `release-drafter` workflow.
- `release-drafter`.
- `release-drafter` workflow YAML file.
- Added `pydot>=1.4.2,<2` to `setup.py` and `requirements.txt`.
- Installed `graphviz` in the `Dockerfile` so that `tf.keras.utils.plot_model` works out of the box.
- Bumped the `deepcell-toolbox` version to 0.10.0.
- Updated `deepcell.metrics` to remove old metrics imports.
- Updated `Mesmer` to use the new combined `deep_watershed`.