Convolutional neural network

In machine learning, a convolutional neural network (CNN, or ConvNet) is a type of feed-forward artificial neural network where the individual neurons are tiled in such a way that they respond to overlapping regions in the visual field. Convolutional networks were inspired by biological processes and are variations of multilayer perceptrons which are designed to use minimal amounts of preprocessing. They are widely used models for image and video recognition.

A convolutional neural network is the same as a conventional neural network; the difference is that it uses convolution to extract meaningful features.

Introduction

Suppose you have a set of images and a label for each image indicating whether a person is present. Your goal is to predict, for a given image, whether it contains a person. A basic approach is to extract SIFT features and apply a classifier, but the accuracy is poor: the features change depending on whether the person is far away, doing a handstand, or close up. If you want to solve this classification problem with deep learning, a convolutional neural network is the tool to use.

Category

Terms

  • Filter: the matrix (or set of values) used to apply a convolution to the input image.
  • Feature map: the data produced after the image passes through a convolution filter.

Convolution calculation example

Theory_-_convolutional_networks.gif

You can think of it as matrices, as shown below.

Ratsgos_blog_-_CNN_Forward_Pass.png
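
To make the filter and feature-map terms above concrete, here is a minimal NumPy sketch of the calculation (strictly cross-correlation, which is what CNN libraries actually compute); the 5x5 input and 3x3 filter are toy values chosen only for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid (no padding, stride 1) 2D convolution producing a feature map."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Element-wise multiply the filter with the local patch and sum.
            feature_map[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return feature_map

image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]], dtype=float)

kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]], dtype=float)

print(conv2d(image, kernel))  # 5x5 image * 3x3 filter -> 3x3 feature map
```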

CNN Backpropagation

Average Pooling

By the chain rule, the gradient at the current point is the incoming gradient multiplied by the local gradient. An average is computed by summing all values and dividing by the number of elements, so if a pooling window contains m elements, the local gradient at the average-pooling point is 1/m.

Ratsgos_blog_-_Average_Pooling_Backpropagation.png
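
The 1/m rule above translates directly into code; this is a minimal NumPy sketch assuming non-overlapping 2x2 (m = 4) pooling windows.

```python
import numpy as np

def average_pool_backward(grad_output, pool=2):
    """Each input in an m-element pooling window gets 1/m of the incoming gradient."""
    m = pool * pool
    # np.kron tiles each incoming gradient value over its pool x pool window.
    return np.kron(grad_output, np.ones((pool, pool))) / m

grad_output = np.array([[1.0, 2.0],
                        [3.0, 4.0]])       # gradient arriving at a 2x2 pooled map
print(average_pool_backward(grad_output))  # every cell in a window receives grad / 4
```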

Max Pooling

If the pooling took the maximum value, the backward pass works as follows: the local gradient is 1 for the element that held the maximum and 0 for the others, so the incoming gradient is multiplied by this mask.

Ratsgos_blog_-_Max_Pooling_Backpropagation.png
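
The same idea in code: a minimal NumPy sketch, again assuming non-overlapping 2x2 windows (on a tie, this version passes the gradient to every tied maximum).

```python
import numpy as np

def max_pool_backward(x, grad_output, pool=2):
    """Route each incoming gradient to the max position of its window; the rest get 0."""
    grad_input = np.zeros_like(x)
    for i in range(grad_output.shape[0]):
        for j in range(grad_output.shape[1]):
            window = x[i * pool:(i + 1) * pool, j * pool:(j + 1) * pool]
            mask = (window == window.max())  # local gradient: 1 at the max, 0 elsewhere
            grad_input[i * pool:(i + 1) * pool,
                       j * pool:(j + 1) * pool] = mask * grad_output[i, j]
    return grad_input

x = np.array([[1.0, 3.0, 2.0, 1.0],
              [4.0, 2.0, 0.0, 5.0],
              [6.0, 1.0, 2.0, 2.0],
              [1.0, 2.0, 3.0, 4.0]])
grad_output = np.array([[0.1, 0.2],
                        [0.3, 0.4]])
print(max_pool_backward(x, grad_output))
```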

CNN

Ratsgos_blog_-_CNN_Backpropagation.png

Layer

A CNN generally consists of several layers and is built from three basic kinds of layer.

  • Convolution layer: the layer that extracts convolution features, i.e. the meaningful features.
  • Pooling layer: CNNs are usually applied to images, and images contain far too many pixels; the sub-sampling step used to reduce the number of features is called pooling.
  • Feedforward layer: applied last; it uses the features produced by the convolution and pooling layers to perform the classification and behaves like an ordinary neural network.

A typical CNN is structured as follows.

Convolution layer -> Pooling layer -> Convolution layer -> Pooling layer -> ... -> Feedforward layer

In other words, convolution and pooling layers are applied alternately to extract features, and a feedforward layer at the end performs the classification.

Intuitively, a pooling layer can be thought of as shrinking the image. Extracting convolution features after shrinking the image makes it possible to detect both the large (nearby) and the small (distant) person from the example above. This is why pooling and convolution layers are used alternately; stacking more of them lets the network capture richer features, although adding layers does not by itself guarantee better classification accuracy. A minimal sketch of such a stack follows the figure below.

Cnn_layer.png
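
As an illustration of the alternating structure, the following PyTorch snippet stacks convolution, pooling, and feedforward layers; the 28x28 grayscale input, channel counts, and 10 output classes are assumptions made for the example, not a reference architecture.

```python
import torch
from torch import nn

# Convolution -> Pooling -> Convolution -> Pooling -> Feedforward (classifier)
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution layer: extract features
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling layer: 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling layer: 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                   # feedforward layer: classify into 10 classes
)

x = torch.randn(1, 1, 28, 28)  # one dummy 28x28 grayscale image
print(model(x).shape)          # torch.Size([1, 10])
```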

Fully Connected Network

Computing the parameters needed to connect every input unit to every hidden-layer unit is feasible for small inputs, but for realistically sized images such as 96*96, a fully connected network becomes impractical, as the parameter count below shows.
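
A rough parameter count makes the problem visible; the 100-unit hidden layer below is an assumed figure for illustration only.

```python
inputs = 96 * 96        # 9,216 pixels in a single-channel 96x96 image
hidden = 100            # assumed number of hidden units
fully_connected = inputs * hidden
print(fully_connected)  # 921,600 weights for a single fully connected layer
```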

Locally Connected Network

Only the pixels in a nearby region of the input are connected to each hidden-layer node.
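
Continuing the count above, if each hidden unit only looks at a small local patch, and (as in a convolution layer) the same filter weights are shared across positions, the parameter count drops sharply; the 8x8 receptive field and 100 feature maps are again assumed values.

```python
patch = 8 * 8                    # 8x8 local receptive field per unit
feature_maps = 100               # assumed number of shared filters / feature maps
locally_connected_shared = patch * feature_maps
print(locally_connected_shared)  # 6,400 weights instead of 921,600
```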

Vision Layers

  • Convolution
  • Pooling
  • Local Response Normalization (LRN)
  • im2col

Models

Well-known models such as the following exist.

Table of Machine learning

Learning Course

Theory

Probability theory, Bayesian probability

Machine learning

Hypothesis, Loss function, Gradient descent, Overfitting

Neural network

Artificial neuron, Perceptron, Multilayer perceptron (Example), Feedforward neural network, Activation function (Sigmoid function & Euler's number), Softmax function (Loss function), Backpropagation (Gradient descent) (Example)

Deep learning

Convolutional neural network (CNN) & Region-based Convolutional Network (RCNN, R-CNN)

ETC

Tutorials

Theory

Basic

Terms

Probability theory

Bayesian probability, Decision theory, Probability density function (PDF), Radial basis function (RBF), Hyperparameter

Neuroscience

Neuron

Statistics

Covariance, Statistical classification, Confusion matrix, Cross-validation, Root-mean-square deviation (RMSD), Mean squared error

Graph theory

Matrix

General matrix-matrix multiplication (gemm), Toeplitz matrix, im2col

Others

Computational learning theory, Empirical risk minimization, Occam learning, PAC learning, Statistical learning, VC theory, Bayesian network, Markov random field, Hidden Markov model (HMM), Conditional random field, Normal Bayes classifier, Energy-based model, Occam's razor, Ground truth

Algorithm types

Supervised learning

Support vector machine (SVM), Hidden Markov model, Regression analysis, Neural network, Naive Bayes classifier, K-nearest neighbors (K-NN), Decision trees, Ensembles (Bagging, Boosting, Random forest), Relevance vector machine (RVM)

Unsupervised learning

Clustering, Independent component analysis

Semi-supervised learning

Generative models, Low-density separation, Graph-based methods, Heuristic approaches

Other learning

Reinforcement learning, Deep learning

By topic

Structured prediction

Graphical models (Bayes net, CRF, HMM)

Parameter estimation algorithms

Dynamic programming, Expectation-maximization (EM) algorithm

Approximate inference methods

Monte Carlo methods, AdaBoost

Approaches

Decision tree learning, Association rule learning, Genetic programming, Inductive logic programming, Clustering, Bayesian networks, Reinforcement learning, Representation learning, Similarity and metric learning

Modeling

Neural network, SVM, K-NN, Decision trees, Genetic algorithm, Genetic programming, Gaussian process regression, Linear discriminant analysis, Perceptron, Radial basis function network

Recommender system

Collaborative filtering, Content-based filtering, Hybrid recommender systems

Data mining

Cross-validation (k-fold), Data set (Training set, Validation set, Test set)

Regression analysis

Linear regression, Logistic regression, Logit function, Multinomial logistic regression (Softmax regression)

Clustering

k-means clustering, BIRCH, Hierarchical, Expectation-maximization (EM), DBSCAN, OPTICS, Mean-shift

By type

3D Machine Learning

Artificial neural networks (ANN)

Artificial neuron

Perceptron, Sigmoid neuron

Combination function & Activation function

Sigmoid function, Rectified linear units (ReLU), 1x1 Convolution

Loss function or Cost function

Softmax, Sum of squares (Euclidean), Hinge / Margin, Cross entropy, Infogain, Accuracy and Top-k

Algorithms

Multilayer perceptron (MLP), Feed-forward Neural Network (FNN), Long short-term memory (LSTM), Autoencoder, Self-organizing map (SOM), Network In Network (NIN), Adaptive neuro fuzzy inference system (ANFIS)

Deep learning

Deep neural network (DNN), Convolutional neural network (CNN) & Regions with Convolutional Neural Network (RCNN, R-CNN, SSD), Recurrent neural network (RNN), Gated Recurrent Unit (GRU), Restricted Boltzmann Machine (RBM), Deep Belief Network (DBN), Deep Q-Networks, Deep hyper network (DHN), Deconvolutional Network, Fully Convolutional Networks for Semantic Segmentation (FCN), Generative Adversarial Networks (GAN), Learning Deconvolution Network for Semantic Segmentation, Multi-Scale Context Aggregation by Dilated Convolutions (Dilated Convolutions), Dynamic Routing Between Capsules (Capsule Networks), YOLO, Path Aggregation Network for Instance Segmentation (PANet), Image-to-Image Translation with Conditional Adversarial Networks (pix2pix), CycleGAN, BicycleGAN, SlowFast, Kinetics, AVA, MelGAN, EfficientDet, SinGAN, Panoptic Segmentation, U-Net, MONAI, CenterNet, DeepLab, HRNet, OCRNet, ResNeSt

Models

GoogLeNet, LeNet, AlexNet, ReNet, RCNN, SPPnet, Fast R-CNN, Faster R-CNN, VGG, SqueezeNet, PointNet, Mask R-CNN, MaskLab, OSMN (Video Segmentation), ColorNet, EfficientNet

Problems

Dimensionality reduction

Curse of dimensionality, Factor analysis, CCA, Independent component analysis (ICA), LDA, Non-negative matrix factorization (NMF), Principal component analysis (PCA), t-SNE

Underfitting & Overfitting

Early stopping, Model selection, Normalization, Regularization (L1, L2, Dropout), Generalization

Initialization

Xavier Initialization

ETC

Bias-variance dilemma, Vanishing gradient problem, Local minimum problem, Batch Normalization, Standardization, AutoML

Optimization

Hyperparameter, Simulated annealing, Early stopping, Feature scaling, Normal Equation, Gradient descent, Stochastic gradient descent (SGD), Backpropagation, Convex optimization, Performance Tuning, im2col, Dense-Sparse-Dense Training for Deep Neural Networks (DSD), Deep Compression, Pruning Neural Networks, Shake-Shake regularization, Accurate Large Minibatch SGD

ETC

Libraries

OpenCV, Deeplearning4j (DL4J), Torch, PyTorch, Theano, Caffe, TensorFlow, MMDetection, ConvNetJS, cuDNN, Netscope, Microsoft Azure Machine Learning Studio, OpenPose, DensePose, Keras, tiny-dnn, Detectron, Stanford CoreNLP, Aifiddle, Kubeflow, OpenNMT, alibi-detect, Flashlight (facebook), MediaPipe, Weights and Biases

Dataset

MNIST (handwritten digit dataset), ImageNet, CIFAR-10, TinyImages, PASCAL VOC, COCO, AVSS, YouTube-VOS

Annotation tools

labelme, cvat, f-BRS, COCO Annotator

See also

Artificial intelligence, Autonomous robots, Bioinformatics, Computational intelligence, Computer vision, Data mining, Pattern recognition, Big data

Unknown keywords

Classification, Anomaly detection, Association rules, Reinforcement learning, Structured prediction, Feature engineering, Feature learning, Online learning, Semi-supervised learning, Unsupervised learning, Learning to rank, Grammar induction, Local outlier factor, Minimum Description Length (MDL), Bayesian MAP, Structural Risk Minimization (SRM), Long Narrow Valley, Word2vec, Autoregressive integrated moving average (ARIMA)

Documentation

Bayesian probability

Bayesian Reasoning and Machine Learning
http://web4.cs.ucl.ac.uk/staff/D.Barber/pmwiki/pmwiki.php?n=Brml.Online
http://web4.cs.ucl.ac.uk/staff/D.Barber/textbook/020217.pdf
020217_-_Bayesian_Reasoning_and_Machine_Learning.pdf

Deep learning

ImageNet Classification with Deep Convolutional Neural Networks (AlexNet)
http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf
http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
ImageNet_Classification_with_Deep_Convolutional_Neural_Networks.pdf
[Korean translation] ImageNet Classification with Deep Convolutional Neural Networks
ImageNet_Classification_with_Deep_Convolutional_Neural_Networks_-_ko.pdf
Going deeper with convolutions (GoogLeNet)
http://arxiv.org/pdf/1409.4842v1.pdf
Going_deeper_with_convolutions.pdf
Translation: GoogLeNet
Special feature article: Deep Hypernetwork models (Seoul National University / Byoung-Tak Zhang)
Deep_Hypernetwork_models_201508.pdf
[Microsoft] Deep Residual Learning for Image Recognition (Winner ILSVRC2015)
Deep_Residual_Learning_for_Image_Recognition(Winner_ILSVRC2015)_Microsoft.pdf
[Microsoft] Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks (Winner ILSVRC2015)
Fast_R-CNN,Towards_Real-Time_Object_Detection_with_Region_Proposal_Networks(Winner_ILSVR2015)_Microsoft.pdf
Learning Deconvolution Network for Semantic Segmentation
http://cvlab.postech.ac.kr/research/deconvnet/
Deep EXpectation of apparent age from a single image
https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/
Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks.
Inside_outside_net_detecting_objects_in_context_with_skip_pooling_and_recurrent_neural_networks_2015.pdf
A paper on small-object detection methods.
조대협's blog - Understanding deep learning and TensorFlow for those who have given up on math
http://bcho.tistory.com/1208
Machine_learning_ebooks_-_Machine_learning_for_those_who_abandon_math.pdf
Densely Connected Convolutional Networks (DenseNets)
Winner of the CVPR 2017 Best Paper Award (deep learning)
Deep Learning Interviews book: Hundreds of fully solved job interview questions from a wide range of key topics in AI.
A deep learning interview book
A collection of practical questions and solutions for machine learning graduate students and job seekers
A printed edition can be purchased, but the full PDF is freely available

Tutorials

Starting Deep Learning Properly (written by Takayuki Okatani / translated by 심효섭)
Deep_Learning_-Takayuki_Okatani-2015-_sample.pdf
Machine learning beginner's guide - IDG Deep Dive
http://www.itworld.co.kr/techlibrary/97428
IDG_DeepDive_Machine_learning-20160113.pdf
Understanding Deep Learning (unpublished; 2016-08-22 version)
Understanding_deep_learning_0822.pdf
Fundamental of Reinforcement Learning
https://www.gitbook.com/book/dnddnjs/rl/details
Fundamental_of_Reinforcement_Learning.pdf
모두의연구소 (Modulabs) - Fundamentals of Reinforcement Learning
Deep Learning Papers Reading Roadmap
https://github.com/songrotek/Deep-Learning-Papers-Reading-Roadmap/blob/master/README.md
Deep_Learning_Papers_Reading_Roadmap.md.zip
[Recommended] Machine Learning-based Web Exception Detection (reference site related to the Financial Security Institute project!)
https://cloudfocus.aliyun.com/Machine-Learning-based-Web-Exception-Detection-89782?spm=a2c1b.a2c1b4.a2c1b4.16.ZSQoEd
Machine_Learning-based_Web_Exception_Detection_-Insights_and_Trends-_Alibaba_Cloud_Focus.pdf
Machine learning basics, parts 1-57 (잡동사니 탐구 - 참스터디 ePaiai: Naver blog)
http://sams.epaiai.com/220498694383
Microsoft releases the ML for Beginners course
A 12-week, 24-lesson curriculum created by the MS Azure Cloud Advocates team
Classic machine learning lectures using Scikit-learn (deep learning will be covered in separate AI lectures)
https://github.com/microsoft/ML-For-Beginners

Compress model

Deep Compression and EIE - Deep Neural Network Model Compression and Efficient Inference Engine
Deep_compression_and_EIE_PPT.pdf
Learning both Weights and Connections for Efficient Neural Networks
Learning_both_weights_and_connections_for_efficient_neural_networks_2015.pdf

Convolutional neural network

A CNN presentation built with Reveal.js.
Author - me
Reveal-ml.tar.gz
Gradient-Based Learning Applied to Document Recognition
http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf
Gradient-Based_Learning_Applied_to_Document_Recognition.pdf
Visualizing and Understanding Convolutional Neural Networks
http://arxiv.org/abs/1311.2901
1311.2901v3.pdf
Compressing CNN for Mobile Device (Samsung) - the need for CNN model compression, etc.
http://mlcenter.postech.ac.kr/files/attach/workshop_fall_2015/삼성전자_김용덕_박사.pdf
Samsung_-_Compressing_CNN_for_Mobile_Device.pdf
Using Filter Banks in Convolutional Neural Networks for Texture Classification
https://arxiv.org/abs/1601.02919

Deep belief network

The Applications of Deep Learning on Traffic Identification
Us-15-Wang-The-Applications-Of-Deep-Learning-On-Traffic-Identification-wp.pdf
https://www.blackhat.com/docs/us-15/materials/us-15-Wang-The-Applications-Of-Deep-Learning-On-Traffic-Identification-wp.pdf

Deconvolution neural network

Learning Deconvolution Network for Semantic Segmentation
http://cvlab.postech.ac.kr/research/deconvnet/
https://arxiv.org/abs/1505.04366
1505.04366.pdf

Segmentation

Learning to Segment (Facebook Research)
https://research.fb.com/learning-to-segment/
DeepMask+SharpMask as well as MultiPathNet.
Recurrent Instance Segmentation
https://arxiv.org/abs/1511.08250
1511.08250.pdf
Slideshare - Single Shot MultiBox Detector and Recurrent Instance Segmentation
vid2vid
https://github.com/NVIDIA/vid2vid
Pytorch implementation for high-resolution (e.g., 2048x1024) photorealistic video-to-video translation.
It can be used for turning semantic label maps into photo-realistic videos, synthesizing people talking from edge maps, or generating human motions from poses.

Fire Detection

Documented under Fire Detection#Deep learning based.

Background subtraction

Documented under Background subtraction#Deep learning based.

LSTM

Documented under Long short-term memory.

Learning

Siamese Neural Networks for One-Shot Image Recognition
https://jayhey.github.io/deep%20learning/2018/02/06/saimese_network/
Training a deep network normally requires a very large amount of training data. One-shot learning overcomes this limitation by training the network so that it can classify even when only a single image per label is available.
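
A minimal PyTorch sketch of the Siamese idea: embed the two images with shared weights and score their similarity from the L1 distance between embeddings. The small embedding network and the 28x28 input size are assumptions for illustration, not the paper's exact architecture.

```python
import torch
from torch import nn

class SiameseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared embedding network (an assumed small CNN, not the paper's exact layers).
        self.embed = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(16 * 14 * 14, 64),
        )
        self.out = nn.Linear(64, 1)  # maps |e1 - e2| to a similarity score

    def forward(self, x1, x2):
        e1, e2 = self.embed(x1), self.embed(x2)
        # Probability that the two inputs share a label, from the L1 distance.
        return torch.sigmoid(self.out(torch.abs(e1 - e2)))

net = SiameseNet()
a, b = torch.randn(1, 1, 28, 28), torch.randn(1, 1, 28, 28)
print(net(a, b))  # same-label probability for the pair
```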

NVIDIA AI Developer Newsletter

[Recommended] AI Can Transform Anyone Into a Professional Dancer
https://news.developer.nvidia.com/ai-can-transform-anyone-into-a-professional-dancer/
https://arxiv.org/abs/1808.07371
Transforming Standard Video Into Slow Motion with AI
https://news.developer.nvidia.com/transforming-standard-video-into-slow-motion-with-ai/
NVIDIA SPLATNet Research Paper Wins a Major CVPR 2018 Award
https://news.developer.nvidia.com/nvidia-splatnet-research-paper-wins-a-major-cvpr-2018-award/
AI Learns to Play Dota 2 with Human Precision
https://news.developer.nvidia.com/ai-learns-to-play-dota-2-with-human-precision/
[Recommended] This AI Can Automatically Remove the Background from a Photo
https://news.developer.nvidia.com/this-ai-can-automatically-remove-the-background-from-a-photo/
NVDLA Deep Learning Inference Compiler is Now Open Source
https://devblogs.nvidia.com/nvdla/

Nature

Deep learning of aftershock patterns following large earthquakes
https://www.reddit.com/r/MachineLearning/comments/9bo9i9/r_deep_learning_of_aftershock_patterns_following/
https://www.nature.com/articles/s41586-018-0438-y
https://drive.google.com/file/d/1DSqLgFZLuNJXNi2dyyP_ToIGHj94raWX/view

Cancer

Deep Neural Networks Improve Radiologists’ Performance in Breast Cancer Screening
https://medium.com/@jasonphang/deep-neural-networks-improve-radiologists-performance-in-breast-cancer-screening-565eb2bd3c9f

Text Generation

See the Text Generation article.

VizSeq - A Visual Analysis Toolkit for Text Generation Tasks
https://arxiv.org/abs/1909.05424

