DeepLearning.scala is a simple library for creating complex neural networks from object-oriented and functional programming constructs.
Various types of neural network layers can be created by composing map, reduce or other higher order functions.
Like other deep learning toolkits, DeepLearning.scala allows you to build neural networks from mathematical formulas. It supports floats, doubles, GPU-accelerated N-dimensional arrays, and calculates derivatives of the weights in the formulas.
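As a toy illustration of what "calculates derivatives of the weights" means, consider the following plain Scala sketch (not DeepLearning.scala's API; x, y, loss and gradient are made-up names for this example):

// For loss(w) = (w * x - y)^2 the analytic derivative is 2 * x * (w * x - y).
val (x, y) = (3.0, 7.0)
def loss(w: Double): Double = math.pow(w * x - y, 2)
def gradient(w: Double): Double = 2 * x * (w * x - y)

// One step of gradient descent nudges w toward y / x:
val w0 = 0.0
val w1 = w0 - 0.1 * gradient(w0)
println(s"loss before: ${loss(w0)}, after: ${loss(w1)}") // the loss decreases

A deep learning framework performs this differentiation automatically for every weight in the formula, instead of requiring a hand-written gradient function.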
Unlike some other deep learning toolkits, the structure of neural networks in DeepLearning.scala is dynamically determined at runtime. Our neural networks are programs. All Scala features, including functions, expressions and control flows, are available in neural networks.
For example:
def ordinaryScalaFunction(a: INDArray): Boolean = {
  // A plain Scala predicate on the array's data; nothing here is differentiable.
  a.signnum.sumT > math.random
}
def myDynamicNeuralNetwork(input: INDArray) = INDArrayLayer(monadic[Do] {
  val outputOfLayer1 = layer1(input).forward.each
  // Branch on an ordinary Scala value computed during the forward pass
  if (ordinaryScalaFunction(outputOfLayer1.data)) {
    dynamicallySelectedLayer2(outputOfLayer1).forward.each
  } else {
    dynamicallySelectedLayer3(outputOfLayer1).forward.each
  }
})
The above neural network will route into different subnetworks according to an ordinary Scala function.
With the ability to create dynamic neural networks, regular programmers can build complex neural networks from simple code. You write code almost as usual; the only difference is that code based on DeepLearning.scala is differentiable, which enables such code to evolve by continuously modifying its parameters.
DeepLearning.scala 2.0 is based on Monads, which are composable, so a complex layer can be built from primitive operators or higher order functions like map/reduce. Along with the Monad, we provide an Applicative type class to perform multiple calculations in parallel.
For example, the previous example can be rewritten in higher-order function style as follows:
def myDynamicNeuralNetwork(input: INDArray) = INDArrayLayer {
  layer1(input).forward.flatMap { outputOfLayer1 =>
    // The same dynamic branching, expressed with flatMap instead of monadic[Do]
    if (ordinaryScalaFunction(outputOfLayer1.data)) {
      dynamicallySelectedLayer2(outputOfLayer1).forward
    } else {
      dynamicallySelectedLayer3(outputOfLayer1).forward
    }
  }
}
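The flatMap above sequences the two steps, because the choice of subnetwork depends on the output of layer1. When computations are independent of each other, Applicative composition lets them run in parallel. The following sketch illustrates the general distinction using standard Scala Futures rather than DeepLearning.scala's own types:

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Monadic composition: the second step cannot start until the first
// has finished, because it may depend on the first result.
val sequential: Future[Int] =
  Future(1).flatMap { a =>
    Future(a + 1).map { b => a + b }
  }

// Applicative-style composition: both Futures start immediately and
// run in parallel, then their results are combined.
val fa = Future(1)
val fb = Future(2)
val parallel: Future[Int] = fa.zipWith(fb)(_ + _)

println(Await.result(parallel, 1.second)) // 3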
The key construct in DeepLearning.scala 2.0 is the dependent type class DeepLearning, which witnesses a differentiable expression. In other words, given a DeepLearning type class instance, you can activate the deep learning ability of any type.
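A simplified sketch of what such a dependent type class can look like (an illustration of the pattern, not the library's exact definition): the result types are type members, so they depend on whichever instance is found.

// Illustration only: a dependent type class whose output types are
// type members, determined by the instance in scope.
trait DeepLearning[Differentiable] {
  type Data  // value produced by the forward pass
  type Delta // derivative consumed by the backward pass
  def train(expression: Differentiable): Data
}

object DeepLearning {
  // The Aux pattern exposes the dependent types as type parameters,
  // so that callers can constrain them when needed.
  type Aux[Expression, Data0, Delta0] = DeepLearning[Expression] {
    type Data = Data0
    type Delta = Delta0
  }
}

// Any type with an instance in scope gains the "deep learning ability":
def train[Expression](expression: Expression)(
    implicit deepLearning: DeepLearning[Expression]
): deepLearning.Data = deepLearning.train(expression)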
The code base of DeepLearning.scala 2.0 is organized according to Dependent Object Type calculus (DOT). All features are provided as mixin-able plugins. A plugin is able to change the APIs and behaviors of all DeepLearning.scala types. This approach not only resolves the expression problem, but also gives plugins the additional ability to virtually depend on other plugins.
For example, when a plugin author is creating the Adagrad optimizer plugin, they do not have to explicitly call functions related to the learning rate. However, once a plugin user enables both the Adagrad plugin and the FixedLearningRate plugin, the computation in FixedLearningRate will eventually be invoked when the Adagrad optimization is executed.
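A minimal sketch of this virtual-dependency idea using plain stackable Scala traits (the real library wires plugins together differently, and the trait and method names below are made up for illustration, but the composition principle is the same):

// Each plugin overrides the weight-update rule and calls `super`, so
// enabling both traits composes their behavior without either trait
// referring to the other.
trait Optimizer {
  def delta(gradient: Double): Double = gradient
}

trait FixedLearningRate extends Optimizer {
  def learningRate: Double
  override def delta(gradient: Double): Double =
    super.delta(gradient) * learningRate
}

trait Adagrad extends Optimizer {
  private var sumOfSquares = 0.0
  def epsilon: Double = 1e-8
  override def delta(gradient: Double): Double = {
    sumOfSquares += gradient * gradient
    super.delta(gradient) / (math.sqrt(sumOfSquares) + epsilon)
  }
}

// Mixing in both: Adagrad's delta reaches FixedLearningRate's via `super`,
// even though Adagrad never mentions the learning rate itself.
val optimizer = new FixedLearningRate with Adagrad {
  val learningRate = 0.01
}

// delta = (2.0 * 0.01) / (math.sqrt(2.0 * 2.0) + 1e-8) ≈ 0.01
println(optimizer.delta(2.0))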
Version 2.0 is the current version with all of the above features. A future version is planned to support map/reduce and other higher-order functions on GPU.
DeepLearning.scala is sponsored by ThoughtWorks.
DeepLearning.scala is heavily inspired by my colleague @MarisaKirisame. Originally, we worked together on a prototype of a deep learning framework, and eventually split our work into this project and DeepDarkFantasy. Other contributors can be found here.
ThoughtWorks Each provides the async/await-like monadic[Do] syntax used above. You may want to use it to control your training process in an imperative style. The DeepLearning type class is defined with Simulacrum's @typeclass annotation.