seqeval

A Python framework for sequence labeling evaluation (named-entity recognition, POS tagging, and so on)

MIT License

Downloads: 536.8K · Stars: 1.1K · Committers: 11


seqeval - v1.2.2 Latest Release

Published by Hironsan almost 4 years ago

Update setup.py to relax version pinning (fixes #65)

seqeval - v1.2.1

Published by Hironsan about 4 years ago

Speed up evaluation in strict mode: about 13 times faster.

Fixes #62

seqeval - v1.2.0

Published by Hironsan about 4 years ago

Enable computing macro/weighted/per-class F1, recall, and precision (#61)

F1 score

>>> from seqeval.metrics import f1_score
>>> y_true = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'O', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> f1_score(y_true, y_pred, average=None)
array([0.5, 1. ])
>>> f1_score(y_true, y_pred, average='micro')
0.6666666666666666
>>> f1_score(y_true, y_pred, average='macro')
0.75
>>> f1_score(y_true, y_pred, average='weighted')
0.6666666666666666

Precision

>>> from seqeval.metrics import precision_score
>>> y_true = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'O', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> precision_score(y_true, y_pred, average=None)
array([0.5, 1. ])
>>> precision_score(y_true, y_pred, average='micro')
0.6666666666666666
>>> precision_score(y_true, y_pred, average='macro')
0.75
>>> precision_score(y_true, y_pred, average='weighted')
0.6666666666666666

Recall

>>> from seqeval.metrics import recall_score
>>> y_true = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'O', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> recall_score(y_true, y_pred, average=None)
array([0.5, 1. ])
>>> recall_score(y_true, y_pred, average='micro')
0.6666666666666666
>>> recall_score(y_true, y_pred, average='macro')
0.75
>>> recall_score(y_true, y_pred, average='weighted')
0.6666666666666666

seqeval - v1.1.1

Published by Hironsan about 4 years ago

Add a length check to the v1 classification_report (#59)

seqeval - v1.1.0

Published by Hironsan about 4 years ago

Add BILOU as a scheme #56

seqeval - v1.0.0

Published by Hironsan about 4 years ago

In some cases, the behavior of the current classification_report is not sufficient. The new classification_report lets you specify the evaluation scheme explicitly. This resolves the following issues:

Fix #23
Fix #25
Fix #35
Fix #36
Fix #39

seqeval - v0.0.19

Published by Hironsan about 4 years ago

classification_report can now output a string or a dict, as requested in issue #41 (#51)

seqeval - v0.0.18

Published by Hironsan about 4 years ago

Stop raising an exception when get_entities takes a non-NE input (#50)

seqeval - v0.0.17

Published by Hironsan about 4 years ago

Update validation to fix #46 #47

seqeval - v0.0.16

Published by Hironsan about 4 years ago

Fix classification_report when tags contain dashes in their names or when there is no tag (#38)

seqeval - v0.0.15

Published by Hironsan about 4 years ago

Add weighted average #32

seqeval - v0.0.14

Published by Hironsan about 4 years ago

Add input validation (#30)

seqeval - v0.0.13

Published by Hironsan about 4 years ago

seqeval - v0.0.12

Published by Hironsan over 5 years ago

Support both pre- and post-padding. See https://github.com/chakki-works/seqeval/pull/13.

seqeval - v0.0.11

Published by Hironsan over 5 years ago

Package Rankings
Top 1.44% on Pypi.org
Top 6.21% on Proxy.golang.org
Top 18.66% on Spack.io
Top 23.97% on Conda-forge.org