DEEP LEARNING / A.I.
Caffe is a deep learning framework made with expression, speed, and modularity in mind. Its expressive architecture encourages application and innovation: models and optimization are defined by configuration, without hard-coding. Switch between CPU and GPU by setting a single flag, so you can train on a GPU machine and then deploy to commodity clusters or mobile devices.
The NVIDIA Deep Learning GPU Training System (DIGITS) puts the power of deep learning into the hands of engineers and data scientists. DIGITS can be used to rapidly train highly accurate deep neural networks (DNNs) for image classification, segmentation, and object detection tasks.
TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence research organization to conduct machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.
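The data-flow-graph idea described above can be sketched in a few lines of plain Python. This is a toy illustration of the concept only, not TensorFlow's actual API: nodes hold operations, edges carry values between them, and running the graph means evaluating nodes in dependency order.

```python
# Toy data-flow graph: nodes are operations, edges carry values.
# Illustration of the concept only, not TensorFlow's API.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # callable combining the input values
        self.inputs = inputs  # upstream nodes (the graph edges)

    def eval(self):
        # Recursively evaluate upstream nodes, then apply this op.
        return self.op(*(n.eval() for n in self.inputs))

def const(v):
    return Node(lambda: v)

# Build the graph for (a + b) * c, then run it.
a, b, c = const(2.0), const(3.0), const(4.0)
graph = Node(lambda x, y: x * y, Node(lambda x, y: x + y, a, b), c)
print(graph.eval())  # 20.0
```

Separating graph construction from execution is what lets a system like TensorFlow optimize the graph and place its nodes on CPUs or GPUs before any data flows through.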
Torch is a scientific computing framework with wide support for machine learning algorithms. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation.
Theano is a numerical computation library for Python. In Theano, computations are expressed using a NumPy-esque syntax and compiled to run efficiently on either CPU or GPU architectures. Theano is an open source project primarily developed by a machine learning group at the Université de Montréal.
Chainer is a Python-based, standalone open source framework for deep learning. Chainer provides a flexible, intuitive, and high-performance means of implementing a full range of deep learning models, including state-of-the-art models such as recurrent neural networks and variational autoencoders.
Clusterone makes it simple and fast to run deep learning workloads of any scale and complexity on any infrastructure, putting data scientists first. It removes the time sink of infrastructure management and setup for data science teams by providing a ready-to-use platform and essential tools. For organizations, it reduces project costs by maximizing the efficiency of their data science teams and hardware resources.
Keras is a high-level neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation.
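The "high-level" part is the key idea: you compose a model from layers rather than wiring up operations by hand. A minimal pure-Python sketch of that layer-stacking style (illustrative only, not Keras itself; the `Dense`/`Sequential` names here are toy stand-ins):

```python
# Toy "Sequential" model in the spirit of Keras's high-level API:
# stack layers, then call the model on an input.
# Illustration only, not Keras itself.

class Dense:
    def __init__(self, weights, bias):
        self.weights = weights      # one row of weights per output unit
        self.bias = bias

    def __call__(self, x):
        return [sum(w * v for w, v in zip(row, x)) + b
                for row, b in zip(self.weights, self.bias)]

def relu(x):
    return [max(0.0, v) for v in x]

class Sequential:
    def __init__(self, layers):
        self.layers = layers

    def __call__(self, x):
        for layer in self.layers:   # each layer feeds the next
            x = layer(x)
        return x

model = Sequential([
    Dense([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),
    relu,
    Dense([[1.0, 1.0]], [0.0]),
])
print(model([2.0, 1.0]))  # [2.5]
```

In real Keras the same composition would also give you training loops, loss functions, and a choice of TensorFlow or Theano underneath; the point here is only how little code the layer abstraction demands for fast experimentation.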
oSense is a facial-recognition-based system that counts people and classifies them by age group and gender. It also monitors the time they spend in a particular spot and can detect new and repeat visitors.
Pylearn2 is a machine learning library. Most of its functionality is built on top of Theano. This means you can write Pylearn2 plugins (new models, algorithms, etc.) using mathematical expressions, and Theano will optimize and stabilize those expressions for you, and compile them to a backend of your choice (CPU or GPU).
PyTorch provides tensors and dynamic neural networks in Python with strong GPU acceleration.
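"Dynamic" here means define-by-run: the computation graph is recorded as operations execute, then gradients flow back through it. A micrograd-style sketch in plain Python illustrates the idea (a toy, not PyTorch's implementation):

```python
# Minimal define-by-run autograd, illustrating the dynamic-graph idea
# behind PyTorch. Toy code, not PyTorch's actual implementation.

class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward():                       # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward():                       # product rule
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = backward
        return out

    def backprop(self):
        # Topologically order the recorded graph, then apply the chain rule.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x = Value(3.0)
y = x * x + x          # the graph is built as the expression runs
y.backprop()
print(y.data, x.grad)  # 12.0 7.0  (d(x^2 + x)/dx = 2x + 1 = 7 at x = 3)
```

Because the graph is rebuilt on every run, control flow like Python `if` and `for` can change the network's shape from one input to the next, which is the main draw of the dynamic approach.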
Horovod is a distributed training framework for TensorFlow, Keras, and PyTorch. The goal of Horovod is to make distributed Deep Learning fast and easy to use.
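The core primitive behind Horovod's data-parallel training is an allreduce that leaves every worker holding the average of all workers' gradients, commonly implemented as a bandwidth-efficient ring. A single-process toy simulation of that ring (illustrative only; Horovod runs this across processes via MPI/NCCL, and the function name here is a stand-in):

```python
# Toy ring-allreduce: every worker ends up with the element-wise mean of
# all workers' gradient vectors. Single-process simulation for
# illustration; Horovod does this across processes with MPI/NCCL.

def ring_allreduce_mean(grads):
    n = len(grads)                      # number of workers
    size = len(grads[0])
    assert size % n == 0, "toy version: vector length divisible by workers"
    c = size // n                       # chunk length
    buf = [list(g) for g in grads]      # each worker's local buffer

    def chunk(j):                       # indices belonging to chunk j
        return range(j * c, (j + 1) * c)

    # Phase 1, reduce-scatter: in step s, worker i sends chunk (i - s) mod n
    # to worker (i + 1) mod n, which adds it into its own buffer. After
    # n - 1 steps, worker i holds the complete sum for chunk (i + 1) mod n.
    for s in range(n - 1):
        for i in range(n):
            j, dst = (i - s) % n, (i + 1) % n
            for k in chunk(j):
                buf[dst][k] += buf[i][k]

    # Phase 2, allgather: in step s, worker i forwards chunk (i + 1 - s)
    # mod n to worker (i + 1) mod n, which overwrites its stale copy.
    for s in range(n - 1):
        for i in range(n):
            j, dst = (i + 1 - s) % n, (i + 1) % n
            for k in chunk(j):
                buf[dst][k] = buf[i][k]

    return [[v / n for v in w] for w in buf]   # one averaged copy per worker

print(ring_allreduce_mean([[1.0, 2.0], [3.0, 4.0]]))  # [[2.0, 3.0], [2.0, 3.0]]
```

Each worker only ever talks to its ring neighbor and sends one chunk per step, so the traffic per worker stays roughly constant as the number of workers grows, which is why this pattern scales better than having every worker ship full gradients to a parameter server.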
A platform layer that runs on top of Kubernetes and Horovod.
A free, easy-to-use, open-source, commercial-grade toolkit that trains deep learning algorithms to learn like the human brain. Formerly known as CNTK.
Deeplearning4j aims to be cutting-edge and plug-and-play, with more convention than configuration, which allows fast prototyping for data scientists, machine learning practitioners, and software engineers. DL4J is customizable at scale.
ND4J is an open-source project targeting professional Java developers familiar with production deployments, an IDE like IntelliJ, and an automated build tool such as Apache Maven. Our tool will serve you best if you have those tools under your belt already.
Apache SINGA is an Apache Incubating project for developing an open source machine learning library. It provides a flexible architecture for scalable distributed training, is extensible to run over a wide range of hardware, and has a focus on health-care applications.
OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision. Originally developed by Intel, it was later supported by Willow Garage then Itseez (which was later acquired by Intel). The library is cross-platform and free for use under the open-source BSD license.
DeepChem is a Python library that provides a high-quality open-source toolchain for deep learning in drug discovery, materials science, quantum chemistry, and biology.
BIDMach is a very fast machine learning library. The GitHub distribution contains source code only; you will also need JDK 8, an installation of NVIDIA CUDA 8.0 (if you want to use a GPU), and cuDNN 5 if you plan to use deep networks. Building requires Maven 3.x.
H2O.ai is an open source software company and the maker of H2O, an open source data science and machine learning platform used by many Fortune 500 companies, over 14,000 organizations, and hundreds of thousands of data scientists around the world.
Fast.ai is a library that simplifies training fast and accurate neural nets using modern best practices. It is based on research in deep learning best practices undertaken at fast.ai. The library includes "out of the box" support for vision, text, tabular, and collaborative filtering models.
Darknet is an open source neural network framework written in C and CUDA. It is fast, easy to install, and supports CPU and GPU computation. You only look once (YOLO) is a state-of-the-art, real-time object detection system coded in Darknet.
The RAPIDS suite of open source software libraries allows execution of end-to-end data science and analytics pipelines on GPUs. It relies on NVIDIA CUDA primitives for low-level compute optimization, but exposes that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.