Cross-architecture parallel algorithms for Julia's GPU backends, from a unified KernelAbstractions.jl codebase. Targets Intel oneAPI, AMD ROCm, Apple Metal, and Nvidia CUDA.
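To illustrate the "unified codebase" idea, here is a minimal sketch of the KernelAbstractions.jl style the package builds on: one kernel definition that runs on whichever backend owns the arrays. The kernel name `axpy!` and the workgroup size are illustrative choices, not part of this release.

```julia
using KernelAbstractions

# One backend-agnostic kernel: the same code runs on the CPU here,
# or on CUDA/ROCm/Metal/oneAPI when given the matching GPU array type.
@kernel function axpy!(y, a, @Const(x))
    i = @index(Global)          # global thread index over the ndrange
    y[i] = a * x[i] + y[i]
end

x = rand(Float32, 1024)
y = zeros(Float32, 1024)

backend = get_backend(y)        # CPU() here; CUDABackend() for a CuArray, etc.
axpy!(backend, 64)(y, 2f0, x, ndrange = length(y))
KernelAbstractions.synchronize(backend)
```

Swapping `x` and `y` for `CuArray`, `ROCArray`, `MtlArray`, or `oneArray` values dispatches the same kernel to the corresponding GPU backend without code changes.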
MIT License
Published by anicusan about 1 month ago
First release of AcceleratedKernels.jl, archived to accompany the paper "AcceleratedKernels.jl: Cross-Architecture Parallel Algorithms from a Unified, Transpiled Codebase".