torchmetrics

Torchmetrics - Machine learning metrics for distributed, scalable PyTorch applications.

Apache-2.0 License

Downloads: 6.2M · Stars: 2K · Committers: 253


torchmetrics - Minor dependency correction (Latest Release)

Published by Borda 5 months ago

torchmetrics - Metrics for segmentation

Published by Borda 5 months ago

In Torchmetrics v1.4, we are happy to introduce a new domain of metrics to the library: segmentation metrics. Segmentation metrics are used to evaluate how well segmentation algorithms perform, e.g., algorithms that take in an image and decide, pixel by pixel, what kind of object each pixel belongs to. These kinds of algorithms are necessary in applications such as self-driving cars. Segmentation metrics are closely related to classification metrics, but for now Torchmetrics expects the input to be formatted differently; see the documentation for more info. For now, MeanIoU and GeneralizedDiceScore have been added to the subpackage, with many more to follow in upcoming releases of Torchmetrics. We are happy to receive any feedback on metrics to add in the future or on the user interface for the new segmentation metrics.
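
Below is a minimal sketch of how the new subpackage can be used; the exact expected input format (one-hot masks are assumed here) may differ between releases, so please check the documentation:

import torch
from torchmetrics.segmentation import MeanIoU

num_classes = 3
# Integer class masks of shape (batch, H, W), converted to one-hot (batch, C, H, W)
pred_idx = torch.randint(0, num_classes, (4, 16, 16))
target_idx = torch.randint(0, num_classes, (4, 16, 16))
preds = torch.nn.functional.one_hot(pred_idx, num_classes).movedim(-1, 1)
target = torch.nn.functional.one_hot(target_idx, num_classes).movedim(-1, 1)

miou = MeanIoU(num_classes=num_classes)
miou.update(preds, target)
print(miou.compute())  # mean intersection-over-union over classes and batch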

Torchmetrics v1.4 also adds new metrics to the classification and image subpackages and has multiple bug fixes and other quality-of-life improvements. We refer to the changelog for the complete list of changes.

[1.4.0] - 2024-05-03

Added

  • Added SensitivityAtSpecificity metric to classification subpackage (#2217)
  • Added QualityWithNoReference metric to image subpackage (#2288)
  • Added a new segmentation metric:
    • MeanIoU (#1236)
    • GeneralizedDiceScore (#1090)
  • Added support for calculating segmentation quality and recognition quality in PanopticQuality metric (#2381)
  • Added pretty-errors for improving error prints (#2431)
  • Added support for torch.float weighted networks for FID and KID calculations (#2483)
  • Added zero_division argument to selected classification metrics (#2198)

Changed

  • Made __getattr__ and __setattr__ of ClasswiseWrapper more general (#2424)

Fixed

  • Fix getitem for metric collection when prefix/postfix is set (#2430)
  • Fixed axis names with Precision-Recall curve (#2462)
  • Fixed list synchronization with partly empty lists (#2468)
  • Fixed memory leak in metrics using list states (#2492)
  • Fixed bug in computation of ERGAS metric (#2498)
  • Fixed BootStrapper wrapper not working with kwargs provided argument (#2503)
  • Fixed warnings being suppressed in MeanAveragePrecision when requested (#2501)
  • Fixed corner-case in binary_average_precision when only negative samples are provided (#2507)

Key Contributors

@baskrahmer, @Borda, @ChristophReich1996, @daniel-code, @furkan-celik, @i-aki-y, @jlcsilva, @NielsRogge, @oguz-hanoglu, @SkafteNicki, @ywchan2005

New Contributors

If we forgot someone due to not matching commit email with GitHub account, let us know :]


Full Changelog: https://github.com/Lightning-AI/torchmetrics/compare/v1.3.0...v1.4.0

torchmetrics - Minor patch release

Published by Borda 7 months ago

[1.3.2] - 2024-03-18

Fixed

  • Fixed negative variance estimates in certain image metrics (#2378)
  • Fixed dtype being changed by deepspeed for certain regression metrics (#2379)
  • Fixed plotting of metric collection when prefix/postfix is set (#2429)
  • Fixed bug when top_k>1 and average="macro" for classification metrics (#2423)
  • Fixed case where label prediction tensors in classification metrics were not validated correctly (#2427)
  • Fixed how auc scores are calculated in PrecisionRecallCurve.plot methods (#2437)

Full Changelog: https://github.com/Lightning-AI/torchmetrics/compare/v1.3.1...v1.3.2

Key Contributors

@Borda, @SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]

torchmetrics - Minor patch release

Published by Borda 8 months ago

[1.3.1] - 2024-02-12

Fixed

  • Fixed how backprop is handled in LPIPS metric (#2326)
  • Fixed MultitaskWrapper not being able to be logged in lightning when using metric collections (#2349)
  • Fixed high memory consumption in Perplexity metric (#2346)
  • Fixed cached network in FeatureShare not being moved to the correct device (#2348)
  • Fixed naming of statistics in MeanAveragePrecision with custom max detection thresholds (#2367)
  • Fixed custom aggregation in retrieval metrics (#2364)
  • Fixed initialization of aggregation metrics with the default floating-point type (#2366)
  • Fixed plotting of confusion matrices (#2358)

Full Changelog: https://github.com/Lightning-AI/torchmetrics/compare/v1.3.0...v1.3.1

Key Contributors

@Borda, @fschlatt, @JonasVerbickas, @nsmlzl, @SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]

torchmetrics - Minor package patch

Published by Borda 9 months ago

torchmetrics - New Image metrics & wrappers

Published by Borda 9 months ago

[1.3.0] - 2024-01-10

Added

  • Added more tokenizers for SacreBLEU metric (#2068)
  • Added support for logging MultiTaskWrapper directly with Lightning's log_dict method (#2213)
  • Added FeatureShare wrapper to share submodules containing feature extractors between metrics (#2120)
  • Added new metrics to the image domain:
    • SpatialDistortionIndex (#2260)
    • CriticalSuccessIndex (#2257)
    • SpatialCorrelationCoefficient (#2248)
  • Added average argument to multiclass versions of PrecisionRecallCurve and ROC (#2084)
  • Added confidence scores when extended_summary=True in MeanAveragePrecision (#2212)
  • Added RetrievalAUROC metric (#2251)
  • Added aggregate argument to retrieval metrics (#2220)
  • Added utility functions in segmentation.utils for future segmentation metrics (#2105)

Changed

  • Changed minimum supported PyTorch version from 1.8 to 1.10 (#2145)
  • Changed x-/y-axis order for PrecisionRecallCurve to be consistent with scikit-learn (#2183)

Deprecated

  • Deprecated metric._update_called (#2141)
  • Deprecated specicity_at_sensitivity in favour of specificity_at_sensitivity (#2199)

Fixed

  • Fixed support for half precision + CPU in metrics requiring topk operator (#2252)
  • Fixed warning incorrectly being raised in Running metrics (#2256)
  • Fixed integration with custom feature extractor in FID metric (#2277)

Full Changelog: https://github.com/Lightning-AI/torchmetrics/compare/v1.2.0...v1.3.0

Key Contributors

@Borda, @HoseinAkbarzadeh, @matsumotosan, @miskfi, @oguz-hanoglu, @SkafteNicki, @stancld, @ywchan2005

New Contributors

If we forgot someone due to not matching commit email with GitHub account, let us know :]

torchmetrics - Lazy imports

Published by Borda 11 months ago

[1.2.1] - 2023-11-30

Added

  • Added error if NoTrainInceptionV3 is initialized without torch-fidelity being installed (#2143)
  • Added support for PyTorch v2.1 (#2142)

Changed

  • Changed default state of SpectralAngleMapper and UniversalImageQualityIndex to be tensors (#2089)
  • Use arange and repeat for deterministic bincount (#2184)

Removed

  • Removed unused lpips third-party package as dependency of LearnedPerceptualImagePatchSimilarity metric (#2230)

Fixed

  • Fixed numerical stability bug in LearnedPerceptualImagePatchSimilarity metric (#2144)
  • Fixed numerical stability issue in UniversalImageQualityIndex metric (#2222)
  • Fixed incompatibility for MeanAveragePrecision with pycocotools backend when too little max_detection_thresholds are provided (#2219)
  • Fixed support for half precision in Perplexity metric (#2235)
  • Fixed device and dtype for LearnedPerceptualImagePatchSimilarity functional metric (#2234)
  • Fixed bug in Metric._reduce_states(...) when using dist_sync_fn="cat" (#2226)
  • Fixed bug in CosineSimilarity where 2d is expected but 1d input was given (#2241)
  • Fixed bug in MetricCollection when using compute groups and compute is called more than once (#2211)

Full Changelog: https://github.com/Lightning-AI/torchmetrics/compare/v1.2.0...v1.2.1

Key Contributors

@Borda, @jankng, @kyle-dorman, @SkafteNicki, @tanguymagne

If we forgot someone due to not matching commit email with GitHub account, let us know :]

torchmetrics - Clustering metrics

Published by Borda about 1 year ago

Torchmetrics v1.2 is out now! The latest release includes 11 new metrics within a new subdomain: Clustering.
In this blog post, we briefly explain what clustering is, why it is a useful measure, and how the newly added metrics can be used, with code samples.

Clustering - what is it?

Clustering is an unsupervised learning technique. The term unsupervised here refers to the fact that we do not have ground truth targets as we do in classification. The primary goal of clustering is to discover hidden patterns or structures within data without prior knowledge about the meaning or importance of particular features. Thus, clustering is a form of data exploration compared to supervised learning, where the goal is “just” to predict if a data point belongs to one class.

The key goal of clustering algorithms is to split data into clusters/sets where data points from the same cluster are more similar to each other than any other points from the remaining clusters. Some of the most common and widely used clustering algorithms are K-Means, Hierarchical clustering, and Gaussian Mixture Models (GMM).

An objective quality evaluation/measure is required regardless of the clustering algorithm or internal optimization criterion used. In general, we can divide all clustering metrics into two categories: extrinsic metrics and intrinsic metrics.

Extrinsic metrics

Extrinsic metrics are characterized by requirements of some ground truth labeling, even if used for an unsupervised method. This may seem counter-intuitive at first as we, by clustering definition, do not use such ground truth labeling. However, most clustering algorithms are still developed on datasets with labels available, so these metrics use this fact as an advantage.

Intrinsic metrics

In contrast, intrinsic metrics do not need any ground truth information. These metrics estimate inter-cluster consistency (cohesion of all points assigned to a single set) compared to other clusters (separation). This is often done by comparing the distance in the embedding space.
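
As a small, hedged code sample of both categories (the metric names follow the v1.2 changelog; both metrics take plain PyTorch tensors):

import torch
from torchmetrics.clustering import NormalizedMutualInfoScore, CalinskiHarabaszScore

# Toy setup: 100 points in 2D assigned to 3 clusters by some clustering algorithm
embeddings = torch.randn(100, 2)
predicted = torch.randint(0, 3, (100,))
ground_truth = torch.randint(0, 3, (100,))

# Extrinsic: compares predicted cluster labels against ground-truth labels
nmi = NormalizedMutualInfoScore()
print(nmi(predicted, ground_truth))

# Intrinsic: only needs the embeddings and the predicted labels
chs = CalinskiHarabaszScore()
print(chs(embeddings, predicted))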

Update to Mean Average Precision

MeanAveragePrecision, the most widely used metric for object detection in computer vision, now supports two new arguments: average and backend.

  • The average argument controls averaging over multiple classes. Following the core definition, the default is macro averaging, where the metric is calculated for each class separately and then averaged together. This will continue to be the default in Torchmetrics, but we now also support the setting average="micro". Under this setting, every object is essentially considered to belong to the same class, and the returned value is therefore calculated simultaneously over all objects.

  • The second argument, backend, indicates which computational backend is used for the internal computations. Since MeanAveragePrecision is not a simple metric to compute, and we value the correctness of our metrics, we rely on a third-party library for the internal computations. By default, we rely on users having the official pycocotools installed, but with the new argument other backends are also supported (a short usage sketch follows below).
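
A short sketch of both arguments together (this assumes the pycocotools backend is installed; boxes are in the default xyxy format):

import torch
from torchmetrics.detection import MeanAveragePrecision

# Micro averaging pools all objects into a single class; the backend argument
# selects the library used for the underlying COCO-style computation.
metric = MeanAveragePrecision(average="micro", backend="pycocotools")

preds = [dict(
    boxes=torch.tensor([[10.0, 10.0, 50.0, 50.0]]),
    scores=torch.tensor([0.9]),
    labels=torch.tensor([0]),
)]
target = [dict(
    boxes=torch.tensor([[12.0, 11.0, 48.0, 52.0]]),
    labels=torch.tensor([0]),
)]
metric.update(preds, target)
print(metric.compute()["map"])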

[1.2.0] - 2023-09-22

Added

  • Added metric to cluster package:
    • MutualInformationScore (#2008)
    • RandScore (#2025)
    • NormalizedMutualInfoScore (#2029)
    • AdjustedRandScore (#2032)
    • CalinskiHarabaszScore (#2036)
    • DunnIndex (#2049)
    • HomogeneityScore (#2053)
    • CompletenessScore (#2053)
    • VMeasureScore (#2053)
    • FowlkesMallowsIndex (#2066)
    • AdjustedMutualInfoScore (#2058)
    • DaviesBouldinScore (#2071)
  • Added backend argument to MeanAveragePrecision (#2034)

Full Changelog: https://github.com/Lightning-AI/torchmetrics/compare/v1.1.0...v1.2.0

New Contributors since v1.1.0

Key Contributors

@matsumotosan, @SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]

torchmetrics - Weekly patch release

Published by Borda about 1 year ago

[1.1.2] - 2023-09-11

Fixed

  • Fixed tie breaking in ndcg metric (#2031)
  • Fixed bug in BootStrapper when very few samples were evaluated that could lead to crash (#2052)
  • Fixed bug when creating multiple plots that lead to not all plots being shown (#2060)
  • Fixed performance issues in RecallAtFixedPrecision for large batch sizes (#2042)
  • Fixed bug related to MetricCollection used with custom metrics have prefix/postfix attributes (#2070)

Contributors

@GlavitsBalazs, @SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]

torchmetrics - Weekly patch release

Published by Borda about 1 year ago

[1.1.1] - 2023-08-29

Added

  • Added average argument to MeanAveragePrecision (#2018)

Fixed

  • Fixed bug in PearsonCorrCoef when updated on single samples at a time (#2019)
  • Fixed support for pixel-wise MSE (#2017)
  • Fixed bug in MetricCollection when used with multiple metrics that return dicts with same keys (#2027)
  • Fixed bug in detection intersection metrics when class_metrics=True resulting in wrong values (#1924)
  • Fixed missing attributes higher_is_better, is_differentiable for some metrics (#2028)

Contributors

@adamjstewart, @SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]

torchmetrics - Into Generative AI

Published by Borda about 1 year ago

In version v1.1 of Torchmetrics, five new metrics have been added in total, bringing the total number of metrics up to 128! In particular, we have two new exciting metrics for evaluating your favorite generative models for images.

Perceptual Path length

Introduced in the famous StyleGAN paper back in 2018, the perceptual path length metric quantifies how smoothly a generator manages to interpolate between points in its latent space.
Why does the smoothness of the latent space of your generative model matter? Assume you find a point in your latent space that generates an image you like, and you would like to see if you could find an even better one by slightly changing the latent point it was generated from. If your latent space is not smooth, this becomes very hard, because even small changes to the latent point can lead to large changes in the generated image.
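
Conceptually, the metric interpolates between pairs of latent points, generates images at nearby positions, and measures their perceptual distance scaled by the step size. The following is only a conceptual sketch of that idea, not the torchmetrics implementation; generator and perceptual_distance are hypothetical callables:

import torch

def perceptual_path_length(generator, perceptual_distance, num_samples=1000, eps=1e-4, z_dim=512):
    # generator(z) -> images, perceptual_distance(img_a, img_b) -> per-sample distance
    distances = []
    for _ in range(num_samples):
        z0, z1 = torch.randn(1, z_dim), torch.randn(1, z_dim)
        t = torch.rand(1)
        # Linear interpolation for simplicity; StyleGAN uses spherical interpolation in z-space
        z_a = torch.lerp(z0, z1, t)
        z_b = torch.lerp(z0, z1, t + eps)
        d = perceptual_distance(generator(z_a), generator(z_b)) / eps ** 2
        distances.append(d)
    return torch.stack(distances).mean()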

CLIP image quality assessment

CLIP image quality assessment (CLIPIQA) is a very recently proposed metric, introduced in this paper. The metric builds on the OpenAI CLIP model, which is a multi-modal model for connecting text and images. The core idea behind the metric is that different properties of an image can be assessed by measuring how similar the CLIP embedding of the image is to the CLIP embeddings of a positive and a negative prompt for that given property.
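
A minimal usage sketch (the metric downloads a pretrained CLIP model on first use and requires the corresponding extra dependencies to be installed):

import torch
from torchmetrics.multimodal import CLIPImageQualityAssessment

metric = CLIPImageQualityAssessment(prompts=("quality",))  # "quality" is the default built-in prompt pair
images = torch.rand(2, 3, 224, 224)  # batch of RGB images, values assumed in [0, 1]
print(metric(images))  # one score per image; higher means the property is more present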

VIF, Edit, and SA-SDR

  • VisualInformationFidelity has been added to the image package. First proposed in this paper, it can be used to automatically assess the quality of images in a perceptual manner.

  • EditDistance has been added to the text package. It is a very classical text metric that simply measures the number of characters that need to be substituted, inserted, or deleted to transform the predicted text into the reference text (a short usage sketch follows this list).

  • SourceAggregatedSignalDistortionRatio has been added to the audio package. The metric was originally proposed in this paper and is an improvement over the classical Signal-to-Distortion Ratio (SDR) metric (also found in torchmetrics), providing more stable gradients when training models for source separation.
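
Here is the EditDistance sketch promised above; by default the per-sample distances are averaged over the batch:

from torchmetrics.text import EditDistance

metric = EditDistance()
preds = ["rain", "lightning"]
target = ["shine", "lightning"]
# 3 edits turn "rain" into "shine", 0 for the identical pair -> mean of 1.5
print(metric(preds, target))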

[1.1.0] - 2023-08-22

Added

  • Added source aggregated signal-to-distortion ratio (SA-SDR) metric (#1882)
  • Added VisualInformationFidelity to image package (#1830)
  • Added EditDistance to text package (#1906)
  • Added top_k argument to RetrievalMRR in retrieval package (#1961)
  • Added support for evaluating "segm" and "bbox" detection in MeanAveragePrecision at the same time (#1928)
  • Added PerceptualPathLength to image package (#1939)
  • Added support for multioutput evaluation in MeanSquaredError (#1937)
  • Added argument extended_summary to MeanAveragePrecision such that precision, recall, iou can be easily returned (#1983)
  • Added warning to ClipScore if long captions are detected and truncated (#2001)
  • Added CLIPImageQualityAssessment to multimodal package (#1931)
  • Added new property metric_state to all metrics for users to investigate currently stored tensors in memory (#2006)

Full Changelog: https://github.com/Lightning-AI/torchmetrics/compare/v1.0.0...v1.1.0


New Contributors since v1.0.0

Contributors

@bojobo, @lucadiliello, @quancs, @SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]

torchmetrics - Weekly patch release

Published by Borda about 1 year ago

[1.0.3] - 2023-08-08

Added

  • Added warning to MeanAveragePrecision if too many detections are observed (#1978)

Fixed

  • Fixed support for int input when multidim_average="samplewise" in classification metrics (#1977)
  • Fixed x/y labels when plotting confusion matrices (#1976)
  • Fixed IOU compute in cuda (#1982)

Contributors

@borda, @SkafteNicki, @Vivswan

If we forgot someone due to not matching commit email with GitHub account, let us know :]

torchmetrics - Weekly patch release

Published by Borda about 1 year ago

[1.0.2] - 2023-08-03

Added

  • Added warning to PearsonCorrCoeff if input has a very small variance for its given dtype (#1926)

Changed

  • Changed all non-task specific classification metrics to be true subtypes of Metric (#1963)

Fixed

  • Fixed bug in CalibrationError where calculations for double precision input were performed in float precision (#1919)
  • Fixed bug related to the prefix/postfix arguments in MetricCollection and ClasswiseWrapper being duplicated (#1918)
  • Fixed missing AUC score when plotting classification metrics that support the score argument (#1948)

Contributors

@borda, @SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]

torchmetrics - Weekly patch release

Published by Borda over 1 year ago

[1.0.1] - 2023-07-13

Fixed

  • Fixed corner case when using MetricCollection together with aggregation metrics (#1896)
  • Fixed the use of max_fpr in AUROC metric when only one class is present (#1895)
  • Fixed bug related to empty predictions for IntersectionOverUnion metric (#1892)
  • Fixed bug related to MeanMetric and broadcasting of weights when Nans are present (#1898)
  • Fixed bug related to expected input format of pycoco in MeanAveragePrecision (#1913)

Contributors

@fansuregrin, @SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]

torchmetrics - Visualize metrics

Published by Borda over 1 year ago

We are happy to announce that the first major release of Torchmetrics, version v1.0, is publicly available. We have worked hard on a couple of new features for this milestone release, and with v1.0.0 we have also passed the mark of over 100 metrics implemented in torchmetrics.

Plotting

The big new feature of v1.0 is a built-in plotting feature. As the old saying goes: "A picture is worth a thousand words". Within machine learning, this certainly holds true for many things.
Metrics are one area that, in some cases, is better showcased in a figure than as a list of floats. The only requirement for getting started with the plotting feature is installing matplotlib. Either install with pip install matplotlib or pip install torchmetrics[visual] (the latter option also installs Scienceplots and uses that as the default plotting style).

The basic interface is the same for any metric. Just call the new .plot method:

metric = AnyMetricYouLike()
for i in range(num_updates):
    metric.update(preds[i], target[i])
fig, ax = metric.plot()

The plot method by default does not require any arguments and will automatically call metric.compute internally on
whatever metric states have been accumulated.
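
As a concrete, runnable example (assuming matplotlib is installed and using a classification metric for illustration):

import torch
from torchmetrics.classification import MulticlassAccuracy

# Accumulate a few batches of random predictions, then plot per-class accuracy
metric = MulticlassAccuracy(num_classes=5, average=None)
for _ in range(10):
    preds = torch.randn(32, 5).softmax(dim=-1)
    target = torch.randint(0, 5, (32,))
    metric.update(preds, target)
fig, ax = metric.plot()
fig.savefig("accuracy.png")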

[1.0.0] - 2023-07-04

Added

  • Added prefix and postfix arguments to ClasswiseWrapper (#1866)
  • Added speech-to-reverberation modulation energy ratio (SRMR) metric (#1792, #1872)
  • Added new global arg compute_with_cache to control caching behaviour after compute method (#1754)
  • Added ComplexScaleInvariantSignalNoiseRatio for audio package (#1785)
  • Added Running wrapper for calculating running statistics (#1752)
  • Added RelativeAverageSpectralError and RootMeanSquaredErrorUsingSlidingWindow to image package (#816)
  • Added support for SpecificityAtSensitivity Metric (#1432)
  • Added support for plotting of metrics through .plot() method (#1328, #1481, #1480, #1490, #1581, #1585, #1593, #1600, #1605, #1610, #1609, #1621, #1624, #1623, #1638, #1631, #1650, #1639, #1660, #1682, #1786)
  • Added support for plotting of audio metrics through .plot() method (#1434)
  • Added classes to output from MAP metric (#1419)
  • Added Binary group fairness metrics to classification package (#1404)
  • Added MinkowskiDistance to regression package (#1362)
  • Added pairwise_minkowski_distance to pairwise package (#1362)
  • Added new detection metric PanopticQuality (#929, #1527)
  • Added PSNRB metric (#1421)
  • Added ClassificationTask Enum and use in metrics (#1479)
  • Added ignore_index option to exact_match metric (#1540)
  • Added parameter top_k to RetrievalMAP (#1501)
  • Added support for deterministic evaluation on GPU for metrics that use the torch.cumsum operator (#1499)
  • Added support for plotting of aggregation metrics through .plot() method (#1485)
  • Added support for python 3.11 (#1612)
  • Added support for auto clamping of input for metrics that use the data_range (#1606)
  • Added ModifiedPanopticQuality metric to detection package (#1627)
  • Added PrecisionAtFixedRecall metric to classification package (#1683)
  • Added multiple metrics to detection package (#1284)
    • IntersectionOverUnion
    • GeneralizedIntersectionOverUnion
    • CompleteIntersectionOverUnion
    • DistanceIntersectionOverUnion
  • Added MultitaskWrapper to wrapper package (#1762)
  • Added RelativeSquaredError metric to regression package (#1765)
  • Added MemorizationInformedFrechetInceptionDistance metric to image package (#1580)

Changed

  • Changed permutation_invariant_training to allow using a 'permutation-wise' metric function (#1794)
  • Changed update_count and update_called from private to public methods (#1370)
  • Raise exception for invalid kwargs in Metric base class (#1427)
  • Extend EnumStr raising ValueError for invalid value (#1479)
  • Improve speed and memory consumption of binned PrecisionRecallCurve with large number of samples (#1493)
  • Changed __iter__ method from raising NotImplementedError to TypeError by setting to None (#1538)
  • FID metric will now raise an error if too few samples are provided (#1655)
  • Allowed FID with torch.float64 (#1628)
  • Changed LPIPS implementation to no longer rely on a third-party package (#1575)
  • Changed FID matrix square root calculation from scipy to torch (#1708)
  • Changed calculation in PearsonCorrCoeff to be more robust in certain cases (#1729)
  • Changed MeanAveragePrecision to pycocotools backend (#1832)

Deprecated

  • Deprecated domain metrics import from package root (#1685, #1694, #1696, #1699, #1703)

Removed

  • Support for python 3.7 (#1640)

Fixed

  • Fixed support in MetricTracker for MultioutputWrapper and nested structures (#1608)
  • Fixed restrictive check in PearsonCorrCoef (#1649)
  • Fixed integration with jsonargparse and LightningCLI (#1651)
  • Fixed corner case in calibration error for zero confidence input (#1648)
  • Fixed precision-recall curve based computations for float target (#1642)
  • Fixed missing kwarg squeeze in MultiOutputWrapper (#1675)
  • Fixed padding removal for 3d input in MSSSIM (#1674)
  • Fixed max_det_threshold in MAP detection (#1712)
  • Fixed states being saved in metrics that use register_buffer (#1728)
  • Fixed states not being correctly synced and device transferred in MeanAveragePrecision for iou_type="segm" (#1763)
  • Fixed use of prefix and postfix in nested MetricCollection (#1773)
  • Fixed ax plotting logging in MetricCollection (#1783)
  • Fixed lookup for punkt sources being downloaded in RougeScore (#1789)
  • Fixed integration with lightning for CompositionalMetric (#1761)
  • Fixed several bugs in SpectralDistortionIndex metric (#1808)
  • Fixed bug for corner cases in MatthewsCorrCoef (#1812, #1863)
  • Fixed support for half precision in PearsonCorrCoef (#1819)
  • Fixed a number of bugs related to average="macro" in classification metrics (#1821)
  • Fixed off-by-one issue when ignore_index = num_classes + 1 in Multiclass-jaccard (#1860)

New Contributors

Contributors

@alexkrz, @AndresAlgaba, @basveeling, @Bomme, @Borda, @Callidior, @clueless-skywatcher, @Dibz15, @EPronovost, @fkroeber, @ItamarChinn, @marcocaccin, @martinmeinke, @niberger, @Piyush-97, @quancs, @relativityhd, @shenoynikhil, @shhs29, @SkafteNicki, @soma2000-lang, @srishti-git1110, @stancld, @twsl, @ValerianRey, @venomouscyanide, @wbeardall

If we forgot someone due to not matching commit email with GitHub account, let us know :]

torchmetrics - Minor patch release

Published by Borda over 1 year ago

[0.11.4] - 2023-03-10

Fixed

  • Fixed evaluation of R2Score with near-constant targets (#1576)
  • Fixed dtype conversion when the metric is a submodule (#1583)
  • Fixed bug related to top_k>1 and ignore_index!=None in StatScores based metrics (#1589)
  • Fixed corner case for PearsonCorrCoef when running in DDP mode but only on a single device (#1587)
  • Fixed overflow error for specific cases in MAP when big areas are calculated (#1607)

Contributors

@borda, @FarzanT, @SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Full Changelog: https://github.com/Lightning-AI/metrics/compare/v0.11.3...v0.11.4

torchmetrics - Minor patch release

Published by Borda over 1 year ago

[0.11.3] - 2023-02-28

Fixed

  • Fixed classification metrics for byte input (#1521)
  • Fixed the use of ignore_index in MulticlassJaccardIndex (#1386)

Contributors

@SkafteNicki, @vincentvaroquauxads

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Full Changelog: https://github.com/Lightning-AI/metrics/compare/v0.11.2...v0.11.3

torchmetrics - Minor patch release

Published by Borda over 1 year ago

[0.11.2] - 2023-02-21

Fixed

  • Fixed compatibility with XLA in the _bincount function (#1471)
  • Fixed type hints in methods belonging to MetricTracker wrapper (#1472)
  • Fixed multilabel in ExactMatch (#1474)

Contributors

@7shoe, @borda, @SkafteNicki, @ValerianRey

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Full Changelog: https://github.com/Lightning-AI/metrics/compare/v0.11.1...v0.11.2

torchmetrics - Minor patch release

Published by Borda over 1 year ago

[0.11.1] - 2023-01-30

Fixed

  • Fixed type checking on the maximize parameter at the initialization of MetricTracker (#1428)
  • Fixed mixed precision auto-cast for SSIM metric (#1454)
  • Fixed checking for nltk.punkt in RougeScore if a machine is not online (#1456)
  • Fixed wrongly reset method in MultioutputWrapper (#1460)
  • Fixed dtype checking in PrecisionRecallCurve for target tensor (#1457)

Contributors

@borda, @SkafteNicki, @stancld

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Full Changelog: https://github.com/Lightning-AI/metrics/compare/v0.11.0...v0.11.1
