Now, let's try face recognition with Chainer (learning phase)

Overview

Development of Chainer has officially ended, but I still have plenty of feelings for the face recognition program I built with it, so I am recording it here as a memorandum.

This series comes in two parts. This time, I will explain how to implement the learning phase (training on face images). Next time, I will explain the implementation of the prediction phase (face recognition using a camera).

Also, since I was an ML beginner and a high school student at the time, there may be errors in the information or bugs in the program. If you find any, I would be grateful if you could point them out in the comments. (Article creation date: 2020/2/9)

Environment

-Software-
Windows 10 Home
Anaconda3 64-bit (Python 3.7)
Spyder

-Library-
Chainer 7.0.0

-Hardware-
CPU: Intel Core i9 9900K
GPU: NVIDIA GeForce GTX 1080 Ti
RAM: 16GB 3200MHz

Reference

** Books **
CQ Publishing, "Deep Learning Starting with Arithmetic & Raspberry Pi" ([Amazon page](https://www.amazon.co.jp/%E7%AE%97%E6%95%B0-%E3%83%A9%E3%82%BA%E3%83%91%E3%82%A4%E3%81%8B%E3%82%89%E5%A7%8B%E3%82%81%E3%82%8B-%E3%83%87%E3%82%A3%E3%83%BC%E3%83%97%E3%83%BB%E3%83%A9%E3%83%BC%E3%83%8B%E3%83%B3%E3%82%B0-%E3%83%9C%E3%83%BC%E3%83%89%E3%83%BB%E3%82%B3%E3%83%B3%E3%83%94%E3%83%A5%E3%83%BC%E3%82%BF%E3%83%BB%E3%82%B7%E3%83%AA%E3%83%BC%E3%82%BA-%E7%89%A7%E9%87%8E/dp/4789847063))

** Site **
Chainer API Reference

Program

For the time being, the full program is posted on GitHub: https://github.com/himazin331/Face-Recognition-Chainer- The repository contains the learning phase, the prediction phase, a data-processing program, and a Haar-Cascade file.

Premise

** Anaconda3 must be installed ** for the program to run. Please refer to the following for how to download and install Anaconda3:
Anaconda3 download site
Anaconda3 installation method (Windows)

Also, my friend has posted a guide here; please refer to it if you like.

After installing Anaconda3, open an Anaconda prompt and install Chainer by entering: pip install chainer

About the training data

This program assumes that ** the training data are grayscale, 32x32 px JPEG images **. Please use the tool here for data processing.
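The linked tool handles this, but as a rough illustration of the idea, here is a minimal preprocessing sketch using Pillow (my own example; the folder names are placeholders):

```python
from PIL import Image
import os

src_dir = "raw_faces"          # hypothetical folder of source images
dst_dir = "train_data/true"    # hypothetical class folder
os.makedirs(dst_dir, exist_ok=True)

for name in os.listdir(src_dir):
    img = Image.open(os.path.join(src_dir, name)).convert("L")  # grayscale
    img = img.resize((32, 32))                                  # 32x32 px
    out = os.path.splitext(name)[0] + ".jpg"
    img.save(os.path.join(dst_dir, out), "JPEG")                # save as JPEG
```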

Source code

** Please note that the code is dirty ... **

face_recog_train_CH.py



import argparse as arg
import os
import sys

import chainer
import chainer.functions as F
import chainer.links as L 
from chainer import training
from chainer.training import extensions

#Definition of CNN
class CNN(chainer.Chain):
    
    #Definition of each layer
    def __init__(self, n_out):
        super(CNN, self).__init__(
            #Definition of convolution layer
            conv1 = L.Convolution2D(1, 16, 5, 1, 0),  # 1st
            conv2 = L.Convolution2D(16, 32, 5, 1, 0), # 2nd
            conv3 = L.Convolution2D(32, 64, 5, 1, 0), # 3rd

            #Linear combination of all neurons
            link = L.Linear(None, 1024), #Fully connected layer
            link_class = L.Linear(None, n_out), #Fully connected layer for classification(n_out:Number of classes)
        )
        
    #Forward propagation
    def __call__(self, x):
        
        #Convolution layer->ReLU function->Maximum pooling layer
        h1 = F.max_pooling_2d(F.relu(self.conv1(x)), ksize=2)   # 1st
        h2 = F.max_pooling_2d(F.relu(self.conv2(h1)), ksize=2)  # 2nd
        h3 = F.relu(self.conv3(h2))  # 3rd
        
        #Fully connected layer->ReLU function
        h4 = F.relu(self.link(h3))
        
        #Predicted value return
        return self.link_class(h4) #Fully connected layer for classification
 
# Trainer
class trainer(object):
    
    #Model building,Optimization method setup
    def __init__(self):
        
        #Model building
        self.model = L.Classifier(CNN(2))
        
        #Optimized method setup
        self.optimizer = chainer.optimizers.Adam() #Adam algorithm
        self.optimizer.setup(self.model) #Set the model in optimizer
        
    #Learning
    def train(self, train_set, batch_size, epoch, gpu, out_path):

        #Corresponding to GPU processing
        if gpu >= 0:
            chainer.cuda.get_device(gpu).use() #Get device object
            self.model.to_gpu()  #Copy the contents of the instance to the GPU
        
        #Creating a dataset iterator(Definition of iterative processing of training data,Shuffle every loop)
        train_iter = chainer.iterators.SerialIterator(train_set, batch_size)

        #Create updater
        updater = training.StandardUpdater(train_iter, self.optimizer, device=gpu)
        #Create trainer
        trainer = training.Trainer(updater, (epoch, 'epoch'), out=out_path)

        #extension settings
        #Schematicize the process flow
        trainer.extend(extensions.dump_graph('main/loss'))
        #Write snapshot for each learning
        trainer.extend(extensions.snapshot(), trigger=(epoch, 'epoch'))
        # log(JSON format)writing
        trainer.extend(extensions.LogReport())
        #Plot the loss value on the graph
        trainer.extend(
                extensions.PlotReport('main/loss', 'epoch', file_name='loss.png'))
        #Plot prediction accuracy on a graph
        trainer.extend(
                extensions.PlotReport('main/accuracy', 'epoch', file_name='accuracy.png'))
        #"Number of learnings" for each learning,Loss value,Prediction accuracy,Output "elapsed time"
        trainer.extend(extensions.PrintReport(
                ['epoch', 'main/loss', 'main/accuracy', 'elapsed_time']))
        #Progress bar display
        trainer.extend(extensions.ProgressBar())

        #Start learning
        trainer.run()

        print("___Training finished\n\n")
        
        #Make the model CPU compatible
        self.model.to_cpu()
    
        #Save parameters
        print("___Saving parameter...")
        param_name = os.path.join(out_path, "face_recog.model") #Learned parameter save destination
        chainer.serializers.save_npz(param_name, self.model) #Write trained parameters in NPZ format
        print("___Successfully completed\n\n")
    
#Data set creation
def create_dataset(data_dir):
    
    print("\n___Creating a dataset...")
    
    cnt = 0
    prc = ['/', '-', '\\', '|']
    
    #Number of image sets
    print("Number of Rough-Dataset: {}".format(len(os.listdir(data_dir))))
    #Number of image data
    for c in os.listdir(data_dir):
        d = os.path.join(data_dir, c)
        print("Number of image in a directory \"{}\": {}".format(c, len(os.listdir(d))))
    
    train = []  #Temporary dataset
    label = 0
    
    #Temporary dataset creation
    for c in os.listdir(data_dir):
        
        print('\nclass: {}, class id: {}'.format(c, label))   #Output class name and class ID
       
        d = os.path.join(data_dir, c)    #Combine folder name and class folder name
        imgs = os.listdir(d)    #Get all image files
        
        #Read only JPEG format image files
        for i in [f for f in imgs if f.endswith(('.jpg', '.JPG'))]:

            #Skip the Windows thumbnail cache file
            if i == 'Thumbs.db':
                continue
            
            train.append([os.path.join(d, i), label])   #After combining the class folder path and image file name, store it in the list

            cnt += 1
            
            print("\r   Loading a images and labels...{}    ({} / {})".format(prc[cnt%4], cnt, len(os.listdir(d))), end='')
            
        print("\r   Loading a images and labels...Done    ({} / {})".format(cnt, len(os.listdir(d))), end='')
        
        label += 1
        cnt = 0

    train_set = chainer.datasets.LabeledImageDataset(train, '.')    #Data set
    
    print("\n___Successfully completed\n")
    
    return train_set
    
def main():

    #Command line options
    parser = arg.ArgumentParser(description='Face Recognition train Program(Chainer)')
    parser.add_argument('--data_dir', '-d', type=str, default=None,
                        help='Specifying the folder path(Error when not specified)')
    parser.add_argument('--out', '-o', type=str, 
                        default=os.path.dirname(os.path.abspath(__file__))+'/result'.replace('/', os.sep),
                        help='Specify the save destination of parameters(Default value./result)')
    parser.add_argument('--batch_size', '-b', type=int, default=32,
                        help='Specifying mini-batch size(Default value 32)')
    parser.add_argument('--epoch', '-e', type=int, default=15,
                        help='Specifying the number of learning(Default value 15)')
    parser.add_argument('--gpu', '-g', type=int, default=-1,
                        help='Specify GPU ID(Negative values indicate CPU processing,Default value-1)')
    args = parser.parse_args()

    #Folder not specified->exception
    if args.data_dir is None:
        print("\nException: Folder not specified.\n")
        sys.exit()
    #When specifying a folder that does not exist->exception
    if not os.path.exists(args.data_dir):
        print("\nException: Folder {} is not found.\n".format(args.data_dir))
        sys.exit()

    #Setting information output
    print("=== Setting information ===")
    print("# Images folder: {}".format(os.path.abspath(args.data_dir)))
    print("# Output folder: {}".format(args.out))
    print("# Minibatch-size: {}".format(args.batch_size))
    print("# Epoch: {}".format(args.epoch))
    print("===========================")

    #Data set creation
    train_set = create_dataset(args.data_dir)

    #Start learning
    print("___Start training...")
    Trainer = trainer()
    Trainer.train(train_set, args.batch_size, args.epoch, args.gpu, args.out)
   
if __name__ == '__main__':
    main()

Execution result

(Screenshot: console output during training)

(Screenshot: generated files) After execution, the files shown above are generated in the save destination.

Command:
python face_recog_train_CH.py -d <folder> -e <number of epochs> -b <batch size> (-o <save destination> -g <GPU ID>)
The files are saved to ./result by default.

Description

Let me explain the code. Please bear with my limited ability to explain.

Network model

The network model this time is a convolutional neural network (CNN), defined in the CNN class.

CNN class


#Definition of CNN
class CNN(chainer.Chain):
    
    #Definition of each layer
    def __init__(self, n_out):
        super(CNN, self).__init__(
            #Definition of convolution layer
            conv1 = L.Convolution2D(1, 16, 5, 1, 0),  # 1st
            conv2 = L.Convolution2D(16, 32, 5, 1, 0), # 2nd
            conv3 = L.Convolution2D(32, 64, 5, 1, 0), # 3rd

            #Linear combination of all neurons
            link = L.Linear(None, 1024), #Fully connected layer
            link_class = L.Linear(None, n_out), #Fully connected layer for classification(n_out:Number of classes)
        )
        
    #Forward propagation
    def __call__(self, x):
        
        #Convolution layer->ReLU function->Maximum pooling layer
        h1 = F.max_pooling_2d(F.relu(self.conv1(x)), ksize=2)   # 1st
        h2 = F.max_pooling_2d(F.relu(self.conv2(h1)), ksize=2)  # 2nd
        h3 = F.relu(self.conv3(h2))  # 3rd
        
        #Fully connected layer->ReLU function
        h4 = F.relu(self.link(h3))
        
        #Predicted value return
        return self.link_class(h4) #Fully connected layer for classification

The CNN class inherits from chainer.Chain. chainer.Chain is a core Chainer class for defining networks. When an instance is created, the instance method __init__ runs and calls the __init__ of the superclass chainer.Chain to define the convolution layers and the fully connected layers.

The hyperparameters of the convolution layer in this program are shown in the table below.

| | Input channels | Output channels | Filter size | Stride | Padding |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 1st | 1 | 16 | 5 | 1 | 0 |
| 2nd | 16 | 32 | 5 | 1 | 0 |
| 3rd | 32 | 64 | 5 | 1 | 0 |

Since the training data is assumed to be grayscale, the number of input channels of the first convolution layer is 1; for an RGB image it would be 3. A padding width of 0 means that no padding is performed.

The hyperparameters of the fully connected layer are shown in the table below.

| | Input dimensions | Output dimensions |
|:---:|:---:|:---:|
| Fully connected layer | None | 1024 |
| For classification | None | 2 |

If you specify None for the number of input dimensions, it is determined automatically from the dimensions of the input data.

This time we perform ** two-class classification **, so the output dimension of the ** classification layer is set to 2 **. When creating an instance of the CNN class, passing a number as the argument sets the number of classes to classify into (in the code, n_out is that number of classes).

The other method, __call__, performs forward propagation. The overall structure is shown in the figure below.

(Figure: network structure diagram)

The pooling layers use ** max pooling with a 2x2 pooling window **.
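As a sanity check, here is a minimal sketch (assuming the CNN class above and a dummy 32x32 grayscale input) that traces the shapes through one forward pass:

```python
import numpy as np

# One dummy grayscale image, 32x32 px, in NCHW order
x = np.zeros((1, 1, 32, 32), dtype=np.float32)

model = CNN(n_out=2)
y = model(x)
print(y.shape)  # (1, 2): one score per class
# Shape flow: 32x32 -(conv1)-> 28x28 -(pool)-> 14x14
#             -(conv2)-> 10x10 -(pool)-> 5x5 -(conv3)-> 1x1 (64 channels)
```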


Data set creation

First of all, there are some points to note about datasets, so I will explain the dataset-creation function first. The dataset is created by the create_dataset function.

create_dataset function


#Data set creation
def create_dataset(data_dir):
    
    print("\n___Creating a dataset...")
    
    cnt = 0
    prc = ['/', '-', '\\', '|']
    
    #Number of image sets
    print("Number of Rough-Dataset: {}".format(len(os.listdir(data_dir))))
    #Number of image data
    for c in os.listdir(data_dir):
        d = os.path.join(data_dir, c)
        print("Number of image in a directory \"{}\": {}".format(c, len(os.listdir(d))))
    
    train = []  #Temporary dataset
    label = 0
    
    #Temporary dataset creation
    for c in os.listdir(data_dir):
        
        print('\nclass: {}, class id: {}'.format(c, label))   #Output class name and class ID
       
        d = os.path.join(data_dir, c)    #Combine folder name and class folder name
        imgs = os.listdir(d)    #Get all image files
        
        #Read only JPEG format image files
        for i in [f for f in imgs if f.endswith(('.jpg', '.JPG'))]:

            #Skip the Windows thumbnail cache file
            if i == 'Thumbs.db':
                continue
            
            train.append([os.path.join(d, i), label])   #After combining the class folder path and image file name, store it in the list

            cnt += 1
            
            print("\r   Loading a images and labels...{}    ({} / {})".format(prc[cnt%4], cnt, len(os.listdir(d))), end='')
            
        print("\r   Loading a images and labels...Done    ({} / {})".format(cnt, len(os.listdir(d))), end='')
        
        label += 1
        cnt = 0

    train_set = chainer.datasets.LabeledImageDataset(train, '.')    #Data set
    
    print("\n___Successfully completed\n")
    
    return train_set

A dataset for a classification problem requires training data and the corresponding correct labels.

In this case, ** the training data are face images, and the correct label is the number corresponding to each face **. For example, with an incorrect class and a correct class, all training data in the incorrect class are labeled "0" and all training data in the correct class are labeled "1". Because of how labels are assigned, you have to pay attention to the folder structure.

(Figure: dataset folder structure)

As shown above, inside one folder (train_data), create a folder for each class (false, true) and put the image data in them. This way, the correct label of the training data in false becomes 0, and that of the data in true becomes 1. In this example, the command-line option -d points at train_data; a sketch of the layout follows.
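Concretely, the layout looks like this (the image file names are placeholders):

```
train_data/
├─ false/        # class id 0 (incorrect)
│   ├─ 0001.jpg
│   └─ ...
└─ true/         # class id 1 (correct)
    ├─ 0001.jpg
    └─ ...
```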

After the processing described by the comments in the code, the list of training data and labels is finally turned into a formal dataset by the code below.

    train_set = chainer.datasets.LabeledImageDataset(train, '.')    #Data set
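LabeledImageDataset reads each image lazily from disk when indexed and returns an (image, label) pair. A small sketch of what a sample looks like, assuming the dataset built above:

```python
img, lbl = train_set[0]
print(img.shape, img.dtype)  # (1, 32, 32) float32 for a grayscale image
print(lbl)                   # 0 or 1 (the class id)
```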

Learning

The trainer class performs the setup before training and the training itself.

trainer class (instance method)


# Trainer
class trainer(object):
    
    #Model building,Optimization method setup
    def __init__(self):
        
        #Model building
        self.model = L.Classifier(CNN(2))
        
        #Optimized method setup
        self.optimizer = chainer.optimizers.Adam() #Adam algorithm
        self.optimizer.setup(self.model) #Set the model in optimizer

When an instance is created, the instance method __init__ builds the network model and chooses the optimization algorithm. In self.model = L.Classifier(CNN(2)), putting an arbitrary number in the parentheses of ** CNN(2) ** sets how many classes to classify into.

After construction, L.Classifier(), a link provided by chainer.links, attaches an activation function and a loss function to the model. The activation function here is the one used at the output, such as the softmax function. By default ** the activation function is softmax and the loss function is the cross-entropy error **, so for a classification problem there is no problem with simply wrapping the network model.
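For reference, here is a sketch that spells out those defaults explicitly; it should be equivalent to the one-liner in __init__ above:

```python
self.model = L.Classifier(
    CNN(2),
    lossfun=F.softmax_cross_entropy,  # loss: softmax cross entropy (the default)
    accfun=F.accuracy)                # metric: classification accuracy (the default)
```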

Next, after creating an instance of the ** Adam ** optimization algorithm with self.optimizer = chainer.optimizers.Adam(), apply it to the network model with self.optimizer.setup(self.model).
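Swapping in a different optimizer is done the same way; for example, a sketch using MomentumSGD (the learning rate and momentum here are illustrative, untuned values):

```python
self.optimizer = chainer.optimizers.MomentumSGD(lr=0.01, momentum=0.9)
self.optimizer.setup(self.model)  # apply the model, just as with Adam
```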


The train method of the trainer class creates the dataset iterator, the updater, and the trainer used for training.

trainer class (train method)


    #Learning
    def train(self, train_set, batch_size, epoch, gpu, out_path):

        #Corresponding to GPU processing
        if gpu >= 0:
            chainer.cuda.get_device(gpu).use() #Get device object
            self.model.to_gpu()   #Copy the contents of the instance to the GPU

        #Creating a dataset iterator(Definition of iterative processing of training data,Shuffle every loop)
        train_iter = chainer.iterators.SerialIterator(train_set, batch_size)

        #Create updater
        updater = training.StandardUpdater(train_iter, self.optimizer, device=gpu)
        #Create trainer
        trainer = training.Trainer(updater, (epoch, 'epoch'), out=out_path)

        #extension settings
        #Schematicize the process flow
        trainer.extend(extensions.dump_graph('main/loss'))
        #Write snapshot for each learning
        trainer.extend(extensions.snapshot(), trigger=(epoch, 'epoch'))
        # log(JSON format)writing
        trainer.extend(extensions.LogReport())
        #Plot the loss value on the graph
        trainer.extend(
                extensions.PlotReport('main/loss', 'epoch', file_name='loss.png'))
        #Plot prediction accuracy on a graph
        trainer.extend(
                extensions.PlotReport('main/accuracy', 'epoch', file_name='accuracy.png'))
        #"Number of learnings" for each learning,Loss value,Prediction accuracy,Output "elapsed time"
        trainer.extend(extensions.PrintReport(
                ['epoch', 'main/loss', 'main/accuracy', 'elapsed_time']))
        #Progress bar display
        trainer.extend(extensions.ProgressBar())

        #Start learning
        trainer.run()

        print("___Training finished\n\n")

        #Make the model CPU compatible
        self.model.to_cpu()

        #Save parameters
        print("___Saving parameter...")
        param_name = os.path.join(out_path, "face_recog.model") #Learned parameter save destination
        chainer.serializers.save_npz(param_name, self.model) #Write trained parameters in NPZ format
        print("___Successfully completed\n\n")

The code below is ** creating a dataset iterator **.

        #Creating a dataset iterator(Definition of iterative processing of training data,Shuffle every loop)
        train_iter = chainer.iterators.SerialIterator(train_set, batch_size)

It handles ** shuffling the data order and creating mini-batches **. As arguments, specify the target dataset (train_set) and the mini-batch size (batch_size).
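SerialIterator repeats and shuffles by default (repeat=True, shuffle=True). As a sketch, one mini-batch can be pulled by hand like this, although normally the updater does it for you:

```python
batch = train_iter.next()  # a list of (image, label) pairs, batch_size long
x, t = chainer.dataset.concat_examples(batch)  # stack into ndarrays
print(x.shape, t.shape)    # e.g. (32, 1, 32, 32) and (32,)
```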


Next, ** create an updater **.

        #Create updater
        updater = training.StandardUpdater(train_iter, self.optimizer, device=gpu)

The updater ** updates the parameters **. As arguments, specify the dataset iterator (train_iter), the optimization algorithm (self.optimizer), and the GPU ID if necessary. The optimization algorithm must be the self.optimizer to which the network model was applied by self.optimizer.setup(); specifying chainer.optimizers.Adam() directly will not work.
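Roughly speaking, each iteration of StandardUpdater boils down to the following steps inside the train method (a simplified sketch, not the actual implementation):

```python
batch = train_iter.next()
x, t = chainer.dataset.concat_examples(batch, device=gpu)
loss = self.model(x, t)   # L.Classifier computes the loss (and accuracy)
self.model.cleargrads()   # clear accumulated gradients
loss.backward()           # backpropagate
self.optimizer.update()   # apply the Adam update rule
```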


Next, create a ** trainer **.

        #Create trainer
        trainer = training.Trainer(updater, (epoch, 'epoch'), out=out_path)

The trainer ** implements the learning loop **. We define the trigger (condition) on which training ends; usually it is the number of epochs or the number of iterations.

In this case, ** the number of epochs is used as the stop trigger **. As arguments, specify the updater (updater), the stop trigger ((epoch, 'epoch')), and the save destination of the files created by the extensions.
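If you wanted to stop by iteration count instead of epochs, only the trigger changes; a sketch with an arbitrary count:

```python
trainer = training.Trainer(updater, (10000, 'iteration'), out=out_path)
```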


Next is finally learning! But let's add some useful extensions first. Chainer has a mechanism called ** Trainer Extensions **.

        #extension settings
        #Schematicize the process flow
        trainer.extend(extensions.dump_graph('main/loss'))
        #Write snapshot for each learning
        trainer.extend(extensions.snapshot(), trigger=(epoch, 'epoch'))
        # log(JSON format)writing
        trainer.extend(extensions.LogReport())
        #Plot the loss value on the graph
        trainer.extend(
                extensions.PlotReport('main/loss', 'epoch', file_name='loss.png'))
        #Plot prediction accuracy on a graph
        trainer.extend(
                extensions.PlotReport('main/accuracy', 'epoch', file_name='accuracy.png'))
        #"Number of learnings" for each learning,Loss value,Prediction accuracy,Output "elapsed time"
        trainer.extend(extensions.PrintReport(
                ['epoch', 'main/loss', 'main/accuracy', 'elapsed_time']))
        #Progress bar display
        trainer.extend(extensions.ProgressBar())

Here, the following functions are added:

-A function that writes out the flow of input data and parameters as a DOT file
-A function that takes a snapshot of information such as parameters at the end of training (using a snapshot, training can be resumed from that point)
-A function that writes out the history of loss values and prediction accuracy during training in JSON format
-A function that plots the loss value and prediction accuracy on graphs and exports them in PNG format
-A function that outputs the epoch number, loss value, prediction accuracy, and elapsed time for each epoch
-A function that displays a progress bar

There are other extensions as well: see the Trainer Extension Reference. The generated DOT and PNG files are placed in the save destination specified in training.Trainer().
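One use of the snapshot: training can be resumed by loading it into the trainer before calling run() again. A sketch, where the file name follows Chainer's default snapshot naming and the iteration number is a placeholder:

```python
chainer.serializers.load_npz('result/snapshot_iter_XXXX', trainer)
trainer.run()  # resumes from the restored state
```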


After adding the extension, learning is finally started.

        #Start learning
        trainer.run()

This one line starts everything. Let's wait until training is finished.

After learning, save the parameters.

        #Save parameters
        print("___Saving parameter...")
        param_name = os.path.join(out_path, "face_recog.model") #Learned parameter save destination
        chainer.serializers.save_npz(param_name, self.model) #Write trained parameters in NPZ format
        print("___Successfully completed\n\n")

In chainer.serializers.save_npz(), specify the parameter save destination (param_name) and the network model (self.model), and the parameters are saved in NPZ format. These parameters are used when actually recognizing faces.
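In the prediction phase, the saved parameters can be loaded back into a model of the same structure, roughly like this (a sketch):

```python
model = L.Classifier(CNN(2))  # must match the architecture used for training
chainer.serializers.load_npz('result/face_recog.model', model)
```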

main function

The main function is not explained in detail because there is nothing special to note.

main function


def main():

    #Command line options
    parser = arg.ArgumentParser(description='Face Recognition train Program(Chainer)')
    parser.add_argument('--data_dir', '-d', type=str, default=None,
                        help='Specifying the folder path(Error when not specified)')
    parser.add_argument('--out', '-o', type=str, 
                        default=os.path.dirname(os.path.abspath(__file__))+'/result'.replace('/', os.sep),
                        help='Specify the save destination of parameters(Default value./result)')
    parser.add_argument('--batch_size', '-b', type=int, default=32,
                        help='Specifying mini-batch size(Default value 32)')
    parser.add_argument('--epoch', '-e', type=int, default=15,
                        help='Specifying the number of learning(Default value 15)')
    parser.add_argument('--gpu', '-g', type=int, default=-1,
                        help='Specify GPU ID(Negative values indicate CPU processing,Default value-1)')
    args = parser.parse_args()

    #Folder not specified->exception
    if args.data_dir is None:
        print("\nException: Folder not specified.\n")
        sys.exit()
    #When specifying a folder that does not exist->exception
    if not os.path.exists(args.data_dir):
        print("\nException: Folder {} is not found.\n".format(args.data_dir))
        sys.exit()

    #Setting information output
    print("=== Setting information ===")
    print("# Images folder: {}".format(os.path.abspath(args.data_dir)))
    print("# Output folder: {}".format(args.out))
    print("# Minibatch-size: {}".format(args.batch_size))
    print("# Epoch: {}".format(args.epoch))
    print("===========================")

    #Data set creation
    train_set = create_dataset(args.data_dir)

    #Start learning
    print("___Start training...")
    Trainer = trainer()
    Trainer.train(train_set, args.batch_size, args.epoch, args.gpu, args.out)
   
if __name__ == '__main__':
    main()

** About GPU processing ** Since I built my environment to support GPU processing, the code below is included. It works whether or not a GPU is present, and GPU processing is not required. However, training with Chainer takes longer than with TensorFlow, so I recommend GPU processing if possible (it depends on the environment and the problem being solved). I will omit how to set up a GPU environment.

        if gpu >= 0:
            chainer.cuda.get_device(gpu).use() #Get device object
            self.model.to_gpu()  #Copy the model to the GPU
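If you are not sure whether your environment can actually use the GPU, Chainer exposes a flag you can check; a quick sketch:

```python
import chainer
print(chainer.backends.cuda.available)  # True if CUDA (CuPy) is usable
```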

Caution

The process below counts the training data, but ** the count may come out one higher ** than the actual number of images. This is because it includes the thumbnail cache file Thumbs.db. ~~It's a pain, so~~ I did not take it into account when counting. There is no problem, though, because the file is skipped when the dataset is created.

    for c in os.listdir(data_dir):
        d = os.path.join(data_dir, c)
        print("Number of image in a directory \"{}\": {}".format(c, len(os.listdir(d))))

In conclusion

This was my first post on Qiita, and I am a little worried because there are many points I am unsure about... As mentioned in the overview, please comment if anything is wrong, and I will correct it.

Next time, in the prediction phase, I will use a camera to recognize faces... Since I cannot show my own face, I plan to use face images of public figures instead.

Besides Chainer, I have tried various other things, such as implementing a face recognition program with TensorFlow (tf.keras), visualizing filters and feature maps, and using the hyperparameter optimization framework Optuna, so I hope to post about them in the future (though again in the form of a memorandum).
