graphlearn-for-pytorch

A GPU-accelerated graph learning library for PyTorch, facilitating the scaling of GNN training and inference.

APACHE-2.0 License



graphlearn-for-pytorch - Release v0.2.3

Published by Zhanghyi 2 months ago

We are thrilled to announce the release of GraphLearn for PyTorch v0.2.3. This update includes enhancements focused on:

  • Distributed support for vineyard, enabling integration with GraphScope.
  • Optimizations such as graph caching, plus experimental features including bf16 precision and all-to-all communication.
  • Assorted bug fixes.
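To illustrate the graph-caching idea from the bullets above, here is a minimal pure-Python sketch (the `CachedSampler` class and its names are hypothetical, not GLT's actual API): a sampler memoizes sampled neighborhoods keyed by the seed set, so repeated requests for the same seeds skip resampling.

```python
import random

class CachedSampler:
    """Illustrative sketch: memoize sampled neighborhoods per seed set.

    `graph` maps each node id to a list of neighbor ids; `fanout` is the
    number of neighbors drawn per seed. A real system would cache on the
    GPU and bound the cache size; this only shows the memoization pattern.
    """

    def __init__(self, graph, fanout, seed=0):
        self.graph = graph
        self.fanout = fanout
        self.rng = random.Random(seed)
        self.cache = {}

    def sample(self, seeds):
        key = tuple(sorted(seeds))            # order-insensitive cache key
        if key not in self.cache:             # sample once, reuse on hits
            self.cache[key] = {
                s: self.rng.sample(self.graph[s],
                                   min(self.fanout, len(self.graph[s])))
                for s in seeds
            }
        return self.cache[key]

graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
sampler = CachedSampler(graph, fanout=2)
first = sampler.sample([0, 2])
second = sampler.sample([2, 0])   # same seed set -> served from the cache
print(first is second)            # True: the cached dict object is reused
```

The trade-off is staleness versus speed: cached neighborhoods are not resampled, so a real implementation would invalidate or refresh entries periodically.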

What's Changed

Full Changelog: https://github.com/alibaba/graphlearn-for-pytorch/compare/v0.2.2...v0.2.3

graphlearn-for-pytorch - Release v0.2.2

Published by LiSu 9 months ago

We're excited to announce the release of GraphLearn for PyTorch v0.2.2. This update brings numerous fixes and features enhancing the framework's functionality, performance, and user experience. We extend our gratitude to all contributors who have made this release possible.

What's Changed

New Contributors

Full Changelog: https://github.com/alibaba/graphlearn-for-pytorch/compare/v0.2.1...v0.2.2

graphlearn-for-pytorch - v0.2.1

Published by LiSu about 1 year ago

We are delighted to bring a number of improvements to GLT, building on the 0.2.0 release. This release contains many new features, improvements, bug fixes, and examples, summarized as follows:

  1. Added support for single-node and distributed inbound sampling, giving users the choice of both inbound and outbound sampling.
  2. Added chunk-based partitioning for graphs with large feature files, reducing the memory consumption of feature partitioning.
  3. Added examples for the IGBH dataset.
  4. Fixed bugs and improved system stability.
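To make the inbound/outbound distinction in item 1 concrete, here is a small pure-Python sketch (toy data and function names are assumptions, not GLT's API): on a directed graph, outbound sampling draws from a node's out-neighbors, while inbound sampling draws from its in-neighbors, which generally yields different neighborhoods.

```python
import random

# Directed edge list as (src, dst) pairs; toy data for illustration.
edges = [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2)]

out_nbrs, in_nbrs = {}, {}
for src, dst in edges:
    out_nbrs.setdefault(src, []).append(dst)   # edges leaving src
    in_nbrs.setdefault(dst, []).append(src)    # edges entering dst

def sample_neighbors(node, fanout, direction="out", rng=random):
    """Draw up to `fanout` neighbors of `node` in the given direction."""
    table = out_nbrs if direction == "out" else in_nbrs
    nbrs = table.get(node, [])
    return rng.sample(nbrs, min(fanout, len(nbrs)))

print(sorted(sample_neighbors(2, 5, "out")))  # [0]
print(sorted(sample_neighbors(2, 5, "in")))   # [0, 1, 3]
```

Node 2 has one out-neighbor but three in-neighbors, so the choice of direction changes both the sampled subgraph and the message-passing semantics downstream.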

What's Changed

New Contributors

Full Changelog: https://github.com/alibaba/graphlearn-for-pytorch/compare/v0.2.0...v0.2.1

graphlearn-for-pytorch - v0.2.0

Published by baoleai over 1 year ago

We are pleased to announce the first open release of GLT v0.2.0!

  • GLT provides both CPU-based and GPU-based graph operators, including neighbor sampling, negative sampling, and feature lookup. The GPU-based graph operations significantly accelerate computation and reduce data movement, making it suitable for GPU training.
  • For distributed training, GLT implements multi-processing asynchronous sampling, pin memory buffer, hot feature cache, and utilizes fast networking technologies (PyTorch RPC with RDMA support) to speed up distributed sampling and reduce communication.
  • GLT is also easy to use: most of its APIs are compatible with PyG/PyTorch, and complete documentation and usage examples are available. GLT focuses on real-world scenarios and provides distributed GNN training examples on large-scale graphs.
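The hot-feature-cache idea in the second bullet can be sketched in plain Python (class and field names are hypothetical; GLT's real cache lives in GPU memory): features of the "hottest" vertices, e.g. high-degree ones, are kept in a small fast store, and lookups fall back to the full table on a miss.

```python
class HotFeatureCache:
    """Illustrative sketch: keep features of frequently accessed nodes in
    a small fast store; fall back to the full (slow) table on a miss."""

    def __init__(self, full_table, hot_ids):
        self.full_table = full_table                  # id -> feature vector
        self.fast = {i: full_table[i] for i in hot_ids}
        self.hits = self.misses = 0

    def lookup(self, ids):
        out = []
        for i in ids:
            if i in self.fast:                        # fast path (cache hit)
                self.hits += 1
                out.append(self.fast[i])
            else:                                     # slow path (cache miss)
                self.misses += 1
                out.append(self.full_table[i])
        return out

features = {i: [float(i)] * 4 for i in range(100)}
cache = HotFeatureCache(features, hot_ids=[0, 1, 2])  # e.g. high-degree nodes
batch = cache.lookup([0, 1, 50])
print(cache.hits, cache.misses)  # 2 1
```

In a real setting the fast store would be GPU memory and the full table host (or remote) memory, so each hit avoids a host-to-device or network transfer.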