Deep Learning
Starting with Deep Learning Toolbox, there are three new features to get excited about in 20a.
- Experiment Manager (new) - A new app that keeps track of all conditions when training neural networks. This can be extremely helpful for tracking the training parameters, data, and accuracy of each iteration of the network. More to come on this feature in future posts!
- Deep Network Designer (updates) - Generate MATLAB code from the app, and train networks directly in the app.
- Post-Training Quantization (new) - This new video describes the quantization workflow in MATLAB.
- Train Conditional GANs
- Train Image Captioning Networks using Attention
- Multilabel Text Classification Using Deep Learning: This example shows how to classify text data that has multiple independent labels.
- Compare Layer Weight Initializers: This example shows how to train deep learning networks with different weight initializers.
- Train Network with Multiple Outputs: This example shows how to train a deep learning network with multiple outputs that predict both labels and angles of rotations of handwritten digits.
- Support for quite a few new networks, including SSD, bidirectional & stateful LSTMs, DarkNet, Inception-ResNet-v2, NASNet-Large, and NASNet-Mobile
- New examples include:
- Code Generation for Object Detection by Using Single Shot Multibox Detector - This example shows the code generation workflow for an SSD network targeting cuDNN
- Code Generation for a Sequence-to-Sequence LSTM Network - This is an updated example that shows code generation for a stateful LSTM
- Support for new networks including:
- LSTM for ARM CPUs
- DarkNet-19, DarkNet-53, DenseNet-201, Inception-ResNet-v2, NASNet-Large, NASNet-Mobile, ResNet-18, and Xception for Intel & ARM CPUs
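The weight initializer comparison mentioned above comes down to a one-line option on each learnable layer. A minimal sketch; the filter size and count are arbitrary placeholders, not values from the shipped example:

```matlab
% Sketch: the same convolutional layer with three different weight
% initializers. Filter size (3) and count (16) are placeholders.
layerGlorot = convolution2dLayer(3,16,'WeightsInitializer','glorot');
layerHe     = convolution2dLayer(3,16,'WeightsInitializer','he');
% A custom initializer can be supplied as a function handle:
layerCustom = convolution2dLayer(3,16, ...
    'WeightsInitializer',@(sz) 0.01*randn(sz));
```

Swapping the initializer while keeping the rest of the architecture fixed is how you isolate the initializer's effect on training, which is the point of the Compare Layer Weight Initializers example.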
Signal and Audio
- New standalone Signal Labeler app to label signal datasets. You can see it in action in the example Label Signal Attributes, Regions of Interest, and Points
- New signalDatastore to train networks with large collections of signals or signal features across memory and disk. See how it can be used with .MAT files in the example Waveform Segmentation Using Deep Learning
- Many additional signal processing and wavelet functions for feature extraction and time-frequency transformation like spectrogram, stft, or cwt now support gpuArrays for GPU acceleration, tall arrays for operating out of memory, and automatic CUDA code generation for running on embedded GPUs. See for example GPU Acceleration of Scalograms for Deep Learning
- Additional new deep learning examples including Iterative Approach for Creating Labeled Signal Sets with Reduced Human Effort, which uses a train-as-you-label iterative method for deep learning classifier training
- New example showing how to train and evaluate GANs for generating synthetic audio. This highlights the recently released API in Deep Learning Toolbox, which includes custom training loops
- New example discussing the use of I-vectors for Speaker Verification. I-vectors are a very popular modern feature often used on audio signals. They are used with deep networks as well as with more traditional machine learning algorithms in lightweight embedded systems
- New detectSpeech function to automatically detect and annotate regions of speech in audio recordings
- New text2speech function to generate pre-labeled synthetic speech data using web services, including Google's very popular Wavenet
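The gpuArray support mentioned above usually needs no code changes beyond moving the data: pass a gpuArray into a supported function such as stft and the computation runs on the GPU. A minimal sketch, assuming Parallel Computing Toolbox and a supported GPU; the chirp test signal is a placeholder for your own data:

```matlab
fs = 1e3;
x  = chirp((0:1/fs:2)',50,2,250);   % synthetic chirp as a stand-in signal
xg = gpuArray(x);                   % move the signal to the GPU
[s,f,t] = stft(xg,fs);              % stft executes on the GPU
S = gather(s);                      % bring the result back to host memory
```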
Image Processing
There’s a new style transfer demo available in Image Processing Toolbox. This demo walks through the entire process of creating a network that takes an image and transforms it into the style of a reference image. Now you can create images in the style of Picasso, van Gogh, or your favorite artist. The incorporation of custom training loops (Advanced Deep Learning: Key Terms) makes techniques like style transfer relatively intuitive to implement. For Computer Vision, there is a new example describing how to create a single shot detector (SSD).
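The custom training loop machinery behind demos like style transfer is the dlnetwork/dlfeval/adamupdate pattern. Here is a minimal, self-contained sketch of that loop; the tiny network, random data, and mean-squared-error loss are placeholders, not the style transfer model, whose loss combines content and style terms:

```matlab
layers = [
    imageInputLayer([2 1 1],'Normalization','none')
    fullyConnectedLayer(1)];
net = dlnetwork(layerGraph(layers));
X = dlarray(rand(2,1,1,8),'SSCB');    % 8 random observations (placeholder)
T = rand(1,8);                        % placeholder targets
avgG = []; avgSqG = [];
for iter = 1:50
    % Evaluate the loss and its gradients inside dlfeval so that
    % automatic differentiation (dlgradient) is enabled.
    [loss,grad] = dlfeval(@modelLoss,net,X,T);
    [net,avgG,avgSqG] = adamupdate(net,grad,avgG,avgSqG,iter,0.01);
end

function [loss,grad] = modelLoss(net,X,T)
    Y = forward(net,X);
    loss = mean((Y - T).^2,'all');          % style transfer would combine
    grad = dlgradient(loss,net.Learnables); % content + style losses here
end
```

Because you write the loss and update step yourself, any differentiable objective, including a weighted sum of content and style losses, drops into the same loop.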
Reinforcement Learning
The 20a release of Reinforcement Learning Toolbox comes with a new agent, Twin Delayed Deep Deterministic Policy Gradient (TD3); additional support for continuous action spaces in existing agents (Policy Gradient, Actor-Critic, and Proximal Policy Optimization); and new examples that showcase how to build custom training algorithms and use imitation learning.
- Train DDPG Agent with Pretrained Actor Network: Reinforcement learning is a data-hungry technique that requires many simulations for training. This example shows how to reduce training time by initializing the neural network policy using existing data and supervised learning.
- Train Reinforcement Learning Policy Using Custom Training Loop: While Reinforcement Learning Toolbox includes a variety of popular algorithms to train your system, you may want to customize these algorithms or create your own. This example shows the steps you need to follow to create a custom training algorithm with Reinforcement Learning Toolbox.
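The pretraining idea in the DDPG example above can be approximated as ordinary supervised regression from stored observations to actions. A hedged sketch, assuming you already have logged observation/action pairs; all sizes, data, and the network architecture here are arbitrary placeholders, not the shipped example's model:

```matlab
numObs = 4; numAct = 1;
obsData = rand(numObs,1,1,500);      % 500 logged observations (placeholder)
actData = rand(500,numAct);          % matching logged actions (placeholder)
layers = [
    imageInputLayer([numObs 1 1],'Normalization','none')
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(numAct)
    regressionLayer];
opts = trainingOptions('adam','MaxEpochs',20,'Verbose',false);
pretrained = trainNetwork(obsData,actData,layers,opts);
% The trained weights can then seed the actor representation before
% reinforcement learning training begins.
```

Starting the actor from a policy that already imitates recorded behavior means the agent explores from a sensible baseline rather than from random weights, which is where the training-time savings come from.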
Radar & Comms
The 20a release is exciting for the Radar/Comms area primarily because we have four new shipping examples. Here are the latest examples and features available in 20a:
RF Fingerprinting: