At Impressive Machines we are eagerly awaiting the release of Nvidia’s GTX 1080 graphics card. This new GPU is benchmarked at around 8 TFLOPS. Inspired by Nvidia’s developer box, we are in the process of putting together our own box for deep learning research. This is a four-GPU PC with a six-core Intel Core i7, an X99 motherboard, and 32GB of RAM. We have been able to reduce the cost compared with purchasing such a box from Nvidia. We will be running Linux, cuDNN, and TensorFlow.
At the moment I’m writing an integer-based library to bring neural networks to microcontrollers, with support planned for ARM and AVR devices. The idea is that even though we might think of neural networks as the domain of supercomputers, for small-scale robots we can do a lot of interesting things with smaller networks. For example, a four-layer convolutional neural network with about 18,000 parameters can process a 32×32 video frame at 8 frames per second on the ATmega328, according to code that I implemented last year.
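To make the integer-only approach concrete, here is a minimal sketch of one dense layer in Q7 fixed-point arithmetic (int8 values scaled by 128), one plausible format for ARM and AVR targets. The function name and the Q7 convention are illustrative assumptions, not the library’s actual API.

```c
#include <stdint.h>

/* Sketch: one dense layer with ReLU, integer-only.
 * Assumes Q7 fixed point: an int8 value v represents v / 128. */
static void dense_q7(const int8_t *w, const int8_t *b,
                     const int8_t *x, int8_t *out,
                     int n_in, int n_out)
{
    for (int j = 0; j < n_out; ++j) {
        int32_t acc = (int32_t)b[j] << 7;            /* widen bias to Q14 */
        for (int i = 0; i < n_in; ++i)
            acc += (int32_t)w[j * n_in + i] * x[i];  /* Q7 * Q7 -> Q14 */
        acc >>= 7;                                   /* back to Q7 */
        if (acc < 0) acc = 0;                        /* ReLU */
        if (acc > 127) acc = 127;                    /* saturate */
        out[j] = (int8_t)acc;
    }
}
```

Keeping the accumulator in a 32-bit register and rescaling once per output avoids overflow without any floating-point hardware, which is the main reason this style of kernel runs acceptably on an 8-bit AVR.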
For small networks, some on-line learning is possible. This might be useful for learning control systems with a few inputs and outputs, connecting, for example, IMU axes or simple sensors to servos or motors, trained with deep reinforcement learning. This is the scenario I’m experimenting with and trying to enable for small, low-power, and cheap interactive robots and toys.
For more complex processing, where insufficient RAM is available to store the weights, a fixed network can be stored in ROM, built from weights that have been trained off-line using Python code.
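As a sketch of the ROM idea: on ARM Cortex-M, `const` data is typically linked into flash, so a weight array exported by an off-line training script can be read directly without ever occupying RAM (on AVR the array would additionally need PROGMEM and pgm_read_byte(), omitted here). The array name and values below are made up for illustration.

```c
#include <stdint.h>

/* Weights as they might be emitted by an (assumed) Python export script.
 * `const` places them in flash/.rodata on typical ARM toolchains. */
static const int8_t layer0_weights[4] = { 12, -35, 70, -9 };

/* Dot product reading weights straight from ROM. */
static int32_t dot_rom(const int8_t *w, const int8_t *x, int n)
{
    int32_t acc = 0;
    for (int i = 0; i < n; ++i)
        acc += (int32_t)w[i] * x[i];
    return acc;
}
```

Only the activations need RAM in this scheme, so the parameter count is limited by flash size rather than by the ATmega328’s 2KB of SRAM.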
Anyway, watch this space, because I’m currently working on this library and intend to make it open source through my company, Impressive Machines.
In recent years the concept of deep learning has been gaining widespread attention. The media frequently reports on talent acquisitions in this field, such as those by Google and Facebook, and startups which claim to employ deep learning are met with enthusiasm. Gratuitous comparisons with the human brain are frequent. But is this just a trendy buzzword? What exactly is deep learning, and how is it relevant to developments in machine intelligence?
For many researchers, deep learning is simply a continuation of the multi-decade advancement in our ability to make use of large scale neural networks. Let’s first take a quick tour of the problems that neural networks and related technologies are trying to solve, and later we will examine the deep learning architectures in greater detail.
Machine learning generally breaks down into two closely related application areas: classification and regression.
In the classification task, you are trying to do automatic recognition. You create a training data set for which you have known labels, for example images of different types of vegetables, where you have manually assigned the correct class label (yam, carrot, potato, etc.) to each one. The images are the input to the algorithm, and the class labels are the required output.
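As a toy illustration of this input-to-label mapping (a deliberately simple classifier, not deep learning itself), the sketch below trains a nearest-class-mean rule on a single made-up scalar feature per example; the class ids (0 = yam, 1 = carrot, 2 = potato) and the feature are assumptions for the example.

```c
#define N_CLASSES 3  /* assumed: 0 = yam, 1 = carrot, 2 = potato */

/* "Training": compute the mean feature value for each labeled class. */
static void train_means(const float *feat, const int *label, int n,
                        float means[N_CLASSES])
{
    float sum[N_CLASSES] = {0};
    int count[N_CLASSES] = {0};
    for (int i = 0; i < n; ++i) {
        sum[label[i]] += feat[i];
        count[label[i]]++;
    }
    for (int c = 0; c < N_CLASSES; ++c)
        means[c] = count[c] ? sum[c] / count[c] : 0.0f;
}

/* Prediction: return the class whose mean is closest to the feature. */
static int classify(float feat, const float means[N_CLASSES])
{
    int best = 0;
    float best_d = -1.0f;
    for (int c = 0; c < N_CLASSES; ++c) {
        float d = (feat - means[c]) * (feat - means[c]);
        if (best_d < 0.0f || d < best_d) { best = c; best_d = d; }
    }
    return best;
}
```

A neural network replaces the hand-picked feature and the nearest-mean rule with learned features and a learned decision boundary, but the contract is the same: labeled examples in during training, predicted labels out at run time.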
Impressive Machines was recently in the local Seattle news. The Seattle Police Department held a hackathon to discuss the problem of redaction of personally identifying material from police videos that are released to the public.
There has been an uptick in requests for police car and body camera videos in various jurisdictions. Often there is a reason to hide the identities of bystanders, informants, or victims of a crime, and this has traditionally been done by hand, using a tool to blur regions of the video frame by frame.
There is a great need for more automation in this area. We are investigating the possibility of creating an application that can assist a person in the redaction process using tools like face recognition and motion tracking. At the hackathon, Impressive Machines gave a presentation about some of the necessary technology.