deepmd-kit

A deep learning package for many-body potential energy representation and molecular dynamics

LGPL-3.0 License

Downloads: 3.3K · Stars: 1.4K · Committers: 63


deepmd-kit - v2.2.10 Latest Release

Published by njzjz 7 months ago

What's Changed

New features

Enhancement

Documentation

Bugfix

CI/CD

Dependency update

New Contributors

Full Changelog: https://github.com/deepmodeling/deepmd-kit/compare/v2.2.9...v2.2.10

deepmd-kit - v3.0.0a0

Published by njzjz 8 months ago

DeePMD-kit v3: A multiple-backend framework for deep potentials

We are excited to announce the first alpha version of DeePMD-kit v3, which lets you train and run deep potential models on top of TensorFlow or PyTorch. It also supports the DPA-2 model, a novel architecture for large atomic models.

Highlights

Multiple-backend framework

[Figure: overview of the multiple-backend framework]

DeePMD-kit v3 adds a pluggable multiple-backend framework that provides a consistent training and inference experience across backends. You can:

  • Use the same training data and input script to train a deep potential model with different backends, switching backends based on efficiency, functionality, or convenience:
# Training a model using the TensorFlow backend
dp --tf train input.json
dp --tf freeze

# Training a model using the PyTorch backend
dp --pt train input.json
dp --pt freeze
  • Use any model to perform inference via any of the existing interfaces, including dp test, the Python/C++/C interfaces, and third-party packages (dpdata, ASE, LAMMPS, AMBER, GROMACS, i-PI, CP2K, OpenMM, ABACUS, etc.). For example, with LAMMPS (a Python inference sketch follows this list):
# run LAMMPS with a TensorFlow backend model
pair_style deepmd frozen_model.pb
# run LAMMPS with a PyTorch backend model
pair_style deepmd frozen_model.pth
# Calculate model deviation using both models
pair_style deepmd frozen_model.pb frozen_model.pth out_file md.out out_freq 100
  • Convert models between backends using dp convert-backend, provided both backends support the model:
dp convert-backend frozen_model.pb frozen_model.pth
dp convert-backend frozen_model.pth frozen_model.pb
  • Add a new backend to DeePMD-kit much more easily if you want to contribute to DeePMD-kit.
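
As a sketch of the Python inference interface mentioned above, a minimal example is given below. The model filename and data arrays are illustrative; the deepmd.infer.DeepPot class and its eval method are assumed from the documented Python API.

# Python inference sketch: evaluate energy, forces, and virial with a trained model.
import numpy as np
from deepmd.infer import DeepPot

dp = DeepPot("frozen_model.pth")        # or "frozen_model.pb" for a TensorFlow-backend model
coords = np.random.rand(1, 6 * 3)       # 1 frame, 6 atoms, flattened xyz coordinates (illustrative)
cells = 10.0 * np.eye(3).reshape(1, 9)  # periodic box; pass None for non-periodic systems
atom_types = [0, 0, 0, 1, 1, 1]         # type indices following the model's type_map
energy, forces, virial = dp.eval(coords, cells, atom_types)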

PyTorch backend: a backend designed for large atomic models and new research

We added the PyTorch backend in DeePMD-kit v3 to support the development of new models, especially for large atomic models.

DPA-2 model: Towards a universal large atomic model for molecular and material simulation

The DPA-2 model is a novel architecture for a Large Atomic Model (LAM). It can accurately represent a diverse range of chemical systems and materials, enabling high-quality simulations and predictions with significantly less effort than traditional methods. The DPA-2 model is implemented only in the PyTorch backend. An example configuration can be found in the examples/water/dpa2 directory.

The DPA-2 descriptor includes two primary components: repinit and repformer. The detailed architecture is shown in the following figure.

[Figure: DPA-2 descriptor architecture]

Training strategies for large atomic models

The PyTorch backend supports multiple training strategies for developing large atomic models.

Parallel training: Large atomic models have many hyperparameters and a complex architecture, so training on multiple GPUs is often necessary. Benefiting from the PyTorch community ecosystem, parallel training with the PyTorch backend can be driven by torchrun, a launcher for distributed data parallelism.

torchrun --nproc_per_node=4 --no-python dp --pt train input.json

Multi-task training: Large atomic models are trained against data covering a wide scope and computed at different DFT levels, which requires multi-task training. The PyTorch backend supports multi-task training, sharing the descriptor between different models. An example is given in examples/water_multi_task/pytorch_example/input_torch.json.

Fine-tuning: Fine-tuning is useful for training a pre-trained large model on a smaller, task-specific dataset. The PyTorch backend supports the --finetune argument in the dp --pt train command line.
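
A minimal sketch of such a fine-tuning command; the pretrained model filename is illustrative:

# Fine-tune a pre-trained PyTorch-backend model (pretrained_model.pt is an illustrative name)
dp --pt train input.json --finetune pretrained_model.pt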

Developing new models using Python and dynamic graphs

Researchers may find the static graph and the custom C++ OPs of the TensorFlow backend painful, as they sacrifice research convenience for computational performance. The PyTorch backend has a well-designed code structure built on the dynamic graph and is currently written entirely in Python, making it easier to extend and debug new deep potential models than with a static graph.

Supporting traditional deep potential models

Users may still want to run, in the PyTorch backend, the traditional models already supported by the TensorFlow backend, and to compare the same model across backends. We have rewritten almost all of the traditional models in the PyTorch backend; their support status is listed below:

  • Features supported:
    • Descriptor: se_e2_a, se_e2_r, se_atten, hybrid;
    • Fitting: energy, dipole, polar, fparam/aparam support
    • Model: standard, DPRc
    • Python inference interface
    • C++ inference interface for energy only
    • TensorBoard
  • Features not supported yet:
    • Descriptor: se_e3, se_atten_v2, se_e2_a_mask
    • Fitting: dos
    • Model: linear_ener, DPLR, pairtab, frozen, pairwise_dprc, ZBL, Spin
    • Model compression
    • Python inference interface for DPLR
    • C++ inference interface for tensors and DPLR
    • Parallel training using Horovod
  • Features not planned:
    • Descriptor: loc_frame, se_e2_a + type embedding, se_a_ebd_v2
    • NVNMD

[!WARNING]
As part of an alpha release, the PyTorch backend's API or user input arguments may change before the first stable version.

DP backend and format: reference backend for other backends

DP is a reference backend for development that uses pure NumPy to implement models without any heavy deep-learning framework. It cannot be used for training, only for Python inference. As a reference backend, it does not aim for the best performance, only for correct results. The DP backend uses HDF5 to store serialized model data, which is backend-independent.
The DP backend and the serialized data are used in the unit tests to ensure that different backends produce consistent results and can be converted to each other.
In the current version, the DP backend has a support status similar to that of the PyTorch backend, although DPA-1 and DPA-2 are not supported yet.
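
Since the serialized model data is plain HDF5, it can be inspected with generic tools. A minimal sketch follows; the model.dp filename and extension are illustrative assumptions, not confirmed by this release note.

import h5py  # generic HDF5 reader; no deep-learning framework required

# "model.dp" is an illustrative name for a model serialized by the DP backend
with h5py.File("model.dp", "r") as f:
    f.visit(print)  # print the path of every stored group and dataset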

Authors

The above highlights were mainly contributed by

  • Hangrui Bi (@20171130), in #3180
  • Chun Cai (@caic99), in #3180
  • Junhan Chang (@TablewareBox), in #3180
  • Yiming Du (@nahso), in #3180
  • Guolin Ke (@guolinke), in #3180
  • Xinzijian Liu (@zjgemi), in #3180
  • Anyang Peng (@anyangml), in #3362, #3192, #3212, #3210, #3248, #3266, #3281, #3296, #3309, #3314, #3321, #3327, #3338, #3351, #3376, #3385
  • Xuejian Qin (@qin2xue3jian4), in #3180
  • Han Wang (@wanghan-iapcm), in #3188, #3190, #3208, #3184, #3199, #3202, #3219, #3225, #3232, #3235, #3234, #3241, #3240, #3246, #3260, #3274, #3268, #3279, #3280, #3282, #3295, #3289, #3340, #3352, #3357, #3389, #3391, #3400
  • Jinzhe Zeng (@njzjz), in #3171, #3173, #3174, #3179, #3193, #3200, #3204, #3205, #3333, #3360, #3364, #3365, #3169, #3164, #3175, #3176, #3187, #3186, #3191, #3195, #3194, #3196, #3198, #3201, #3207, #3226, #3222, #3220, #3229, #3226, #3239, #3228, #3244, #3243, #3213, #3249, #3250, #3254, #3247, #3253, #3271, #3263, #3258, #3276, #3285, #3286, #3292, #3294, #3293, #3303, #3304, #3308, #3307, #3306, #3316, #3315, #3318, #3323, #3325, #3332, #3331, #3330, #3339, #3335, #3346, #3349, #3350, #3310, #3356, #3361, #3342, #3348, #3358, #3366, #3374, #3370, #3373, #3377, #3382, #3383, #3384, #3386, #3390, #3395, #3394, #3396, #3397
  • Chengqian Zhang (@Chengqian-Zhang), in #3180
  • Duo Zhang (@iProzd), in #3180, #3203, #3245, #3261, #3262, #3355, #3367, #3359, #3371, #3387, #3388, #3380, #3378
  • Xiangyu Zhang (@CaRoLZhangxy), in #3162, #3287, #3337, #3375, #3379

Breaking changes

  • Python 3.7 support is dropped. by @njzjz in #3185
  • We require all model files to have the correct filename extension for all interfaces so that the corresponding backend can load them. TensorFlow model files must end with the .pb extension.
  • The Python class DeepTensor (including DeepDipole and DeepPolar) now returns the atomic tensor with dimension natoms instead of nsel_atoms (see the sketch after this list). by @njzjz in #3390
  • For developers: the Python module structure is fully refactored. The old deepmd module was moved to deepmd.tf without other API changes, and deepmd_utils was moved to deepmd without other API changes. by @njzjz in #3177, #3178
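
A minimal sketch of the new DeepTensor behavior described above, assuming a dipole model file named dipole.pb; the filename and data arrays are illustrative.

import numpy as np
from deepmd.infer import DeepDipole

dd = DeepDipole("dipole.pb")            # a trained dipole model; filename illustrative
coords = np.random.rand(1, 6 * 3)       # 1 frame, 6 atoms
cells = 10.0 * np.eye(3).reshape(1, 9)
atom_types = [0, 0, 0, 1, 1, 1]
dipole = dd.eval(coords, cells, atom_types)
# Per the change above, dipole now has shape (nframes, natoms, 3) instead of (nframes, nsel_atoms, 3).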

Other changes

Enhancement

  • Neighbor stat for the TensorFlow backend is accelerated by 80x. by @njzjz in #3275
  • i-PI: remove normalize_coord by @njzjz in #3257
  • LAMMPS: fix_dplr.cpp delete redundant setup and set atom->image when pre_force by @shiruosong in #3344, #3345
  • Bump scikit-build-core to 0.8 by @njzjz in #3369
  • Bump LAMMPS to stable_2Aug2023_update3 by @njzjz in #3399
  • Add fparam/aparam support for fine-tune by @njzjz in #3313
  • TF: remove freeze warning for optional nodes by @njzjz in #3381

CI/CD

  • Build macos-arm64 wheel on M1 runners by @njzjz in #3206
  • Other improvements and fixes to GitHub Actions by @njzjz in #3238, #3283, #3284, #3288, #3290, #3326
  • Enable docstring code format by @njzjz in #3267

Bugfix

  • Fix TF 2.16 compatibility by @njzjz in #3343
  • Detect version in advance before building deepmd-kit-cu11 by @njzjz in #3172
  • C API: change the required shape of electric field to nloc * 3 by @njzjz in #3237

New Contributors

  • @anyangml made their first contribution in #3192
  • @shiruosong made their first contribution in #3344

Full Changelog: https://github.com/deepmodeling/deepmd-kit/compare/v2.2.8...v3.0.0a0

deepmd-kit - v2.2.9

Published by njzjz 9 months ago

What's Changed

Bugfixes

  • cc: fix returning type of sel_types by @njzjz in #3181
  • fix compile gromacs with precompiled C library by @njzjz in #3217
  • gmx: fix include directive by @njzjz in #3221
  • c: fix all memory leaks; add sanitizer checks in #3223

CI/CD

  • build macos-arm64 wheel on M1 runners by @njzjz in #3206

Full Changelog: https://github.com/deepmodeling/deepmd-kit/compare/v2.2.8...v2.2.9

deepmd-kit - v2.2.8

Published by njzjz 9 months ago

What's Changed

Breaking Changes

New Features

Enhancement

Documentation

Build and release

Bug fixings

CI/CD

Code refactor and enhancement to prepare for upcoming v3

New Contributors

Full Changelog: https://github.com/deepmodeling/deepmd-kit/compare/v2.2.7...v2.2.8

deepmd-kit - v2.2.7

Published by wanghan-iapcm 12 months ago

New features

Enhancement

Build and release

Bug fix

New Contributors

Full Changelog: https://github.com/deepmodeling/deepmd-kit/compare/v2.2.6...v2.2.7

deepmd-kit - v2.2.6

Published by njzjz about 1 year ago

We list critical bugs in previous versions in https://github.com/deepmodeling/deepmd-kit/issues/2866.

New features

Enhancement

Bugfixes

CI/CD

Documentation

Full Changelog: https://github.com/deepmodeling/deepmd-kit/compare/v2.2.5...v2.2.6

deepmd-kit - v2.2.5

Published by wanghan-iapcm about 1 year ago

New features

Merge cuda and rocm code

Enhancement

Documentation

Build and release

Bug fixing

Full Changelog: https://github.com/deepmodeling/deepmd-kit/compare/v2.2.4...v2.2.5

deepmd-kit - v2.2.4

Published by wanghan-iapcm about 1 year ago

Breaking changes

New features

Enhancement

Bug fixings

New Contributors

Full Changelog: https://github.com/deepmodeling/deepmd-kit/compare/v2.2.3...v2.2.4

deepmd-kit - v2.2.3

Published by wanghan-iapcm about 1 year ago

Breaking changes

New features

Enhancement

Documentation

Build and release

Bug fixings

New Contributors

Full Changelog: https://github.com/deepmodeling/deepmd-kit/compare/v2.2.2...v2.2.3

deepmd-kit - v2.2.2

Published by amcadmus over 1 year ago

New features

C and header-only C++

Build and release

Enhancements

Bug fixings

New Contributors

Full Changelog: https://github.com/deepmodeling/deepmd-kit/compare/v2.2.1...v2.2.2

deepmd-kit - v2.2.1

Published by wanghan-iapcm over 1 year ago

New features

Enhancement

CICD

Bug fixings

New Contributors

Full Changelog: https://github.com/deepmodeling/deepmd-kit/compare/v2.2.0...v2.2.1

deepmd-kit - v2.2.0

Published by amcadmus over 1 year ago

New features

Enhancements

Python

Core

C++

OP

LAMMPS

Build and release

Test

Code cleanup

Documents

Important bug fixings:

Bug fixings

New Contributors

Full Changelog: https://github.com/deepmodeling/deepmd-kit/compare/v2.1.5...v2.2.0

deepmd-kit - v2.2.0-beta.0

Published by amcadmus almost 2 years ago

New features

Enhancements

Python

Core

C++

OP

LAMMPS

Build and release

Test

Code cleanup

Documents

Bug fixings

New Contributors

Full Changelog: https://github.com/deepmodeling/deepmd-kit/compare/v2.1.5...v2.2.0.b0

deepmd-kit - v2.1.5

Published by wanghan-iapcm about 2 years ago

New features:

  • Attention-based model DPA-1 (#1866 #1886 #1923 #1943)

Enhancements:

  • remove white space from train_attr/training_script (#1870)
  • replace FastGFile with GFile. FastGFile throws a deprecation warning. (#1874)
  • docs: make warnings more prominent (#1879)
  • add examples to cli help (#1887)
  • remove get_platform in setup.py (#1897)
  • bump to C++ 17 for TF 2.10 (#1898)
  • compare converted lr with almost equal (#1901)
  • add tutorial and publication links to docs (#1904)
  • LAMMPS: Use of “override” instead of “virtual” (#1915)
  • highlight LAMMPS codes in doc (#1921)

Bug fixes:

  • support initializing parameters from a fitting with suffix (#1885)
  • fix DeprecationWarning of imp (#1896)
  • find protobuf headers from extra paths (#1910)
  • Fix bugs when init_frz_model using tebd. (#1891)
  • Use GeluCustom as operator name (#1918)
  • handle floating-point error in sys_probs (#1919)
  • fix model conversion of 0.12 (#1941)
deepmd-kit - v2.1.4

Published by wanghan-iapcm about 2 years ago

Enhancements:

  • add core api docs (#1800)
  • add op docs (#1804)
  • There's no need for building libtensorflow_cc.so anymore (#1744)
  • improve conda installation in the documentation (#1808)
  • merge CMake test codes via add_subdirectory (#1814)
  • avoid multiple sessions in DeepEval (#1829)
  • bump default lammps version to stable_23Jun2022_update1 (#1847)
  • error if LAMMPS_VERSION_NUMBER is not defined (#1849)
  • add variant info to output message (#1851)
  • generate author list from git (#1854)
  • lammps plugin: replace v2.0 with actual version (#1863)

Bug fixes:

  • fix OMP bugs in prod_force and prod_virial triggered when the cell is smaller than 2rc. (revert prod_force OMP in #1360) (#1862)
  • fix typo in hip assert error message (#1802)
  • fix memory leaking of GraphDef (#1811)
  • add the missing sstream header (#1817)
  • fix grappler compilation error with TF 1.15 (#1821)
  • docs: fix shape of virial (#1824)
  • fix build and running issues on Windows (#1830)
  • comment unused session in DPTabulate (#1834)
  • fix deprecated bare pair_coeff (#1838)
deepmd-kit - v2.1.3

Published by amcadmus over 2 years ago

New features:

  • Non-von-Neumann training of DP models. (#1707)

Enhancements

  • remove dependency of TF headers from C++ public headers (#1789)
  • use lru_cache for DeepEval (#1790)
  • support custom gelu implementation (#1795)
  • support optional gitee gtest download (#1793)

Bug fixes:

  • bump manylinux image to 2_24; add error message when TF_CXX11_ABI_FLAG is 1 (#1796)
deepmd-kit - v2.1.2

Published by amcadmus over 2 years ago

New features:

  • supports dp convert-from 0.12 (#1685)
  • add enable_atom_ener_coeff option for energy loss (#1743)

Enhancements:

  • change default NN precision from float64 to default (#1644)
  • update TF installation doc (#1652)
  • migrate test_cc from conda to docker (#1650)
  • use float constants and functions in float functions (#1647)
  • convert tabulate data from np.ndarray to tf.Tensor (#1657)
  • reset the graph before freezing the compressed model (#1658)
  • add free_energy to ase calculator (#1667)
  • rewrite data doc (#1668)
  • migrate sphinx mathjax from jsdelivr to cdnjs (#1669)
  • Documentation improvements (#1673)
  • doc: add information about supported versions of dependencies (#1683)
  • doc: add Interfaces out of DeePMD-kit (#1691)
  • optimize format_nlist_i_cpu (#1717)
  • use net-wise tabulate range (#1665)
  • implement parallelism for neighbor stat (#1624)
  • render equations in markdown files (#1721)
  • update the latest state of easy installation (#1726)
  • throw warning in C++ if env is not set (#1728)
  • in model_devi, assumes nopbc if box is set to None (#1704)
  • add Loss abstract class (#1733)
  • prevent from linking TF lib when determining TF version (#1734)
  • Automatically label new pull requests based on the paths of files being changed (#1738)
  • replace GPU 1./sqrt with rsqrt (#1741)
  • add DPRc docs (#1750)
  • docs: switch to dargs directive (#1753)
  • docs: fix emoji in PDF (#1754)
  • add a script to build TF C++ library from source (#1755)
  • add auto cli docs (#1751)
  • search TF from user site-packages (#1764)
  • build_tf.py: expose CC and CXX env to bazel (#1766)
  • docs: add links to parameter keys (#1767)
  • add argument tests to check examples (#1770)
  • reduce training steps in tests (#1771)
  • deprecated docstring_parameter; use sphinx rst_epilog instead (#1783)
  • remove run_doxygen from sphinx conf.py (#1785)
  • bump LAMMPS version to stable_23Jun2022 (#1779)

Bug fixes:

  • fix variable declaration error (#1651)
  • fix bug of aparam size, should be nlocal_real (#1664)
  • fix rcut in hybrid model compression (#1663)
  • provide valid_data the same type_map as train_data (#1677)
  • deepmodeling.org -> deepmodeling.com (#1678)
  • fix compress training (#1680)
  • fix bug of model compression training with se_e2_r type descriptor (#1686)
  • fix typos in doc (#1687)
  • fix grappler compilation error with TF 1.15 ~ 2.6 (#1697)
  • set default fparam and aparam stat and recover from graph (#1695)
  • fix git permission issue (#1716)
  • fix tf_cxx_abi in TF 2.9 (#1723)
  • correct type behavior when atomic energy is requested (#1727)
  • prevent explicit slash in the path (#1713)
  • avoid static CUDA linking (#1731)
  • fix finding TF 2.9 ABI (#1736)
  • using int64 within the memory allocation operations (#1737)
  • fix typos in docs and docstrings (#1752)
  • set a proper std when there is no atoms in the data (#1765)
  • bump manylinux image to 2014 (#1780)
  • add __init__.py to deepmd/train/ (#1784)
  • docs: fix arg reference (#1786)
deepmd-kit - v2.1.1

Published by amcadmus over 2 years ago

New features:

  • support type_one_side along with exclude_types (#1423)
  • support adjust sel of frozen models (#1573 #1574 )
  • support dp convert-from 1.1 (#1587)
  • support dp convert-from 1.0 (#1597)
  • add atom energy bias to type embedding energy (#1592 #1606 )
  • add another way to load LAMMPS plugins (#1604)

Enhancement:

  • add the deepmodeling banner to doc (#1529)
  • bump default LAMMPS version to stable_29Sep2021_update3 (#1596)
  • compile CUDA code for all archs (#1595 #1598 )
  • follow API changes from latest LAMMPS (#1601)
  • add kspace pppm/dplr to lmp plugin library (#1603)
  • add a graph optimizer to parallelize prod_force_a_cpu (#1429 #1638 )
  • refactor init_variable and support type embedding (#1610)
  • optimize dplr data modifier (#1615)
  • add system names to model devi header (#1618)
  • add tips for easy installation (#1634)
  • add the order of box.raw in data-conv.md (#1635)

Bug fixes:

  • fix the name of deeptensor/atom and dplr plugin (#1548)
  • fix macos library name (#1566)
  • fix model compression bug of nan output (#1575)
  • fix lammps plugin creator pointer (#1602)
  • fix the bug introduced by lammps PR #2481 (#1605)
  • update compress cli input file (#1633)
  • correct the forward communication at ik differentiation mode in pppm_dplr (#1637)
  • Fix compilation error and bug in UT in the ROCm environment (#1628)
deepmd-kit - v2.1.0

Published by amcadmus over 2 years ago

New feature:

  • Model compression for se_3, se_r descriptors. Energy and tensor models (#1225 #1228 #1361 )
  • Add init-frz-model support for se-t type descriptor (#1245)
  • Added all activation functions for model compression. (#1283)
  • Update guidelines for the number of threads (#1291)
  • Enable mixed precision support for deepmd-kit (#1285 #1471 )
  • Unify C++ errors and pass message to LAMMPS (#1326)
  • Optimize DPTabulate._build_lower method (#1323)
  • Calculate neighbor statistics from CLI (#1476)
  • Add an interface to eval descriptors (#1483)

Enhancement:

  • deprecate numb_test in the training script (#1249)
  • Accelerate model compression (#1274)
  • Use c++14 for TF 2.7 (#1275)
  • Add a citation badge (#1280)
  • Add embedding network dimension check of model compression (#1303)
  • Provide an option to skip neighbor stat (#1313)
  • Add an error message to compress/freeze (#1319)
  • Redirect print_summary to LAMMPS log (#1324)
  • Enable OpenMP for prod_force and prod_virial (#1360)
  • Update issue templates (#1368)
  • Bump LAMMPS version to stable_29Sep2021_update2 (#1279)
  • Remove api_cc/include/custom_op.h (#1405)
  • Introduce TensorFlow Profiler (#1414)
  • Only test/eval fitting properties during training (#1416 #1419 )
  • Remove the dependency on inputs from inputs_zero (#1417)
  • Support recursive detection for the systems of model_devi (#1424)
  • Enable TF remapper optimizer (#1418)
  • Dynamically load op library in C++ interface (#1384)
  • Dplr doc and examples (#1458)
  • Bump the Python version to 3.10 (#1465)
  • Do some small optimization to ops (avoid concat or add in loops. Instead, append tensors to a list, and concat or accumulate_n after loops) (#943)
  • Optimizations related to data statistics
    • Skip data_stat in init_from_model and restart mode (#1463)
    • Assign energy shift stats if atomic energies are assigned (#1477)
    • Recover input stats from frozen models (#1482)
  • Test: move loading graphs to setUpClass to accelerate tests (#1484)
  • Run test_python in the pre-built container (#1487)

Bug fixes:

  • Update and fix typos in doc (#1238 #1239 #1328 #1300 #1445 #1490 #1497 #1504 #1503 #1514 )
  • Fix compress training bug within the dp train --init-frz-model interface (#1233)
  • Fix Python bugs of loc_frame descriptor (#1253)
  • Fix bug of loc_frame descriptor when using lammps (#1255)
  • Fix single precision error (#1212)
  • Fix the np.frombuffer in dp transfer (#1246)
  • Fix SyntaxWarning in graph.py (#1278)
  • Change googletest from master to main (#1292)
  • update_deepmd_input when compress (#1297 #1301 )
  • Add importlib_metadata as dependency (#1308)
  • Fix bugs about parameters of memset (#1302)
  • Fix model compression bug when fparam or aparam is not zero (#1306)
  • Add space between words in messages (#1312)
  • Do not print virial error with nopbc data (#1314)
  • Fix test errors with TensorFlow 2.7 (#1315)
  • Fix bug of hip model compression (#1325)
  • Prevents rcut_smth larger than rcut (#1354)
  • Fix cell and virial transpose bug in dp_ipi (#1353)
  • Fix bug in DipoleFittingSeA: (#1363)
  • Fix cxx standard for LAMMPS (#1379)
  • Explicitly set neighbor request to full in compute deeptensor/atom to fix bug #1381 (#1382)
  • Fix NameError (#1385)
  • Fix network precision under specific situation (#1391 #1394 )
  • Initialize input virial vector to zero (#1397)
  • Make OpenMP an optional dependency (#1498)
  • Fix nvcc warning when using cuda-11.x toolkit (#1401)
  • Add UT for se_3 type descriptor (#1404)
  • Fix github git url (#1409)
  • Fix gelu grad multi definitions error (#1406)
  • Fix cast_precision breaking docstring (#1437)
  • Add image link of ROCm version. (#1432)
  • Pass integer zero to memset (#1499)

Manual (PDF·Epub)

deepmd-kit - v2.0.3

Published by amcadmus about 3 years ago

Enhancements

  • bump default LAMMPS version to stable_29Sep2021 (#1176)
  • improved documentation (#1184 #1191)

Bug fixes

  • add start_pref_pf and limit_pref_pf to loss Argument (#1200)
  • Fix the bug when type_map has only one element (#1196)
  • failure of hybrid descriptor (#1214)
  • fix single precision error in the model compression (#1212)

Download manual
