Building Models with Keras

  • MLP (multilayer perceptron)
  • CNN (convolutional neural network)
    The goal is to learn how to build these two kinds of models in Keras.
    (Hyperparameter tuning is omitted here.)

Iris Classification with Keras

In [ ]:
# Below is the modeling task above, coded in Keras.

import tensorflow as tf
from tensorflow import keras  # tensorflow.contrib was removed in TF 2.x
import pandas as pd
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

In [ ]:
# Load the iris dataset
iris = load_iris()

# One-hot encode the target variable
target_one_hot = pd.DataFrame({'iris': iris.target})
target_one_hot = pd.get_dummies(target_one_hot['iris'])

# Convert the one-hot encoded target to a NumPy array
y_nums = target_one_hot.to_numpy()
y_nums[:5]
Out[ ]:
array([[1, 0, 0],
       [1, 0, 0],
       [1, 0, 0],
       [1, 0, 0],
       [1, 0, 0]], dtype=uint8)
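
As an aside, the same one-hot encoding can be done in a single call with Keras (a minimal sketch; the same utility is used in the MNIST sections below):

# Equivalent one-hot encoding without pandas; returns float32 instead of uint8
y_nums = keras.utils.to_categorical(iris.target, 3)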
In [ ]:
# Prepare and split the data

# Feature data
x_data = iris.data

# Split into training and test sets
x_train, x_test, y_train, y_test = train_test_split(x_data, y_nums, test_size=0.2, random_state=8)
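
With only 150 samples the random split can end up class-imbalanced; train_test_split's stratify option keeps the class proportions equal in both splits (a variant sketch, not used above):

x_train, x_test, y_train, y_test = train_test_split(
    x_data, y_nums, test_size=0.2, random_state=8,
    stratify=iris.target)  # stratify on the original integer labels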
In [ ]:
# Define the model
Dense = keras.layers.Dense
model = keras.models.Sequential()
model.add(Dense(10, activation='relu', input_shape=(4,)))
model.add(Dense(3, activation='softmax'))

# Compile the model
model.compile(
    loss = 'categorical_crossentropy',
    optimizer = 'adam',
    metrics = ['accuracy']
)
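
To double-check the architecture, the layer shapes and parameter counts can be printed after compiling; a quick sketch (the expected counts follow from the layer sizes):

model.summary()
# Dense(10): 4 inputs * 10 units + 10 biases = 50 params
# Dense(3):  10 inputs * 3 units + 3 biases  = 33 params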
In [ ]:
# Train the model
model.fit(x_train, y_train, batch_size=20, epochs=300)
Train on 120 samples
Epoch 1/300
120/120 [==============================] - 1s 11ms/sample - loss: 1.1628 - acc: 0.3250
Epoch 2/300
120/120 [==============================] - 0s 126us/sample - loss: 1.0415 - acc: 0.3250
Epoch 3/300
120/120 [==============================] - 0s 136us/sample - loss: 0.9501 - acc: 0.3333
... (epochs 4-299 abridged: loss falls steadily and accuracy climbs to about 0.975) ...
Epoch 300/300
120/120 [==============================] - 0s 121us/sample - loss: 0.1419 - acc: 0.9750
Out[ ]:
<tensorflow.python.keras.callbacks.History at 0x7f870337bef0>
In [ ]:
# Evaluate the model
score = model.evaluate(x_test, y_test, verbose=1)
print("正解率=", str(score[1]), "loss=",score[0])
30/30 [==============================] - 0s 84us/sample - loss: 0.1787 - acc: 0.9333
accuracy = 0.93333334 loss = 0.17866167426109314
In [ ]:
score
Out[ ]:
[0.17866167426109314, 0.93333334]
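
To turn the softmax outputs into class labels, take the argmax over the three class probabilities; a minimal sketch:

# Predicted vs. true class indices for the test set
pred = model.predict(x_test)                  # shape (30, 3), softmax probabilities
pred_classes = np.argmax(pred, axis=1)
true_classes = np.argmax(y_test, axis=1)
print((pred_classes == true_classes).mean())  # should match the accuracy above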

Handwritten Digit Recognition

  • The standalone keras package requires TensorFlow 2.2 or later, so upgrade TensorFlow to the latest version
In [ ]:
# !pip install --upgrade tensorflow
In [ ]:
import tensorflow as tf
import keras
from keras.datasets import mnist

import pandas as pd
import numpy as np

from matplotlib import pyplot
In [ ]:
print(tf.__version__)
2.3.0
In [ ]:
# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Display the first 32 images in a 4x8 grid
for i in range(32):
    pyplot.subplot(4, 8, i+1)
    pyplot.imshow(x_train[i], cmap='gray')
pyplot.show()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
In [ ]:
x_train[0].shape
Out[ ]:
(28, 28)
In [ ]:
np.max(x_train[0])
Out[ ]:
255
In [ ]:
# Each image is a 2-D 28x28 array, so flatten it to a 1-D array of 28*28 = 784 values.
# The values also need to be normalized to the range 0.0-1.0, so divide by 255, the maximum pixel value.

# Flatten the data to 28*28 = 784 and normalize
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255
In [ ]:
print(x_train[0].shape)
print(np.max(x_train[0]))
(784,)
1.0
In [ ]:
x_test
Out[ ]:
array([[0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       ...,
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.]], dtype=float32)
In [ ]:
# One-hot encode the target variable
print("目的変数:", y_train[:10], "・・・")

# One-hot encode with Keras
y_train = keras.utils.to_categorical(y_train.astype('int32'), 10)
y_test = keras.utils.to_categorical(y_test.astype('int32'), 10)
target variable: [5 0 4 1 9 2 1 3 1 4] ...
In [ ]:
y_train[:2]
Out[ ]:
array([[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
       [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
In [ ]:
y_test[:2]
Out[ ]:
array([[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
       [0., 0., 1., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
In [ ]:
# Build the model with Keras

# Specify input and output sizes
in_size = 28 * 28
out_size = 10

# Define the model
Dense = keras.layers.Dense
model = keras.models.Sequential()
model.add(Dense(512, activation='relu', input_shape=(in_size,)))
model.add(Dense(out_size, activation='softmax'))
In [ ]:
# Compile the model
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)
In [ ]:
# Train
model.fit(x_train, y_train, batch_size=20, epochs=20)
Epoch 1/20
3000/3000 [==============================] - 7s 2ms/step - loss: 0.1897 - accuracy: 0.9445
Epoch 2/20
3000/3000 [==============================] - 7s 2ms/step - loss: 0.0795 - accuracy: 0.9759
Epoch 3/20
3000/3000 [==============================] - 7s 2ms/step - loss: 0.0514 - accuracy: 0.9837
... (epochs 4-19 abridged: training accuracy rises steadily past 0.99) ...
Epoch 20/20
3000/3000 [==============================] - 7s 2ms/step - loss: 0.0063 - accuracy: 0.9981
Out[ ]:
<tensorflow.python.keras.callbacks.History at 0x7f5fdc66e160>
In [ ]:
# Evaluate
score = model.evaluate(x_test, y_test, verbose=1)
print("正解率=", score[1], 'loss=', score[0])
313/313 [==============================] - 1s 2ms/step - loss: 0.1575 - accuracy: 0.9812
accuracy = 0.9811999797821045 loss = 0.15745872259140015
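
As a quick sanity check, an individual test image can be classified and compared with its label; a minimal sketch (index 0 is arbitrary):

idx = 0
probs = model.predict(x_test[idx:idx+1])   # shape (1, 10), softmax probabilities
print("predicted =", np.argmax(probs), " true =", np.argmax(y_test[idx]))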

Classifying MNIST with an MLP

  • Using a multilayer perceptron (MLP)
In [ ]:
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop
from keras.datasets import mnist
import matplotlib.pyplot as plt
In [ ]:
# Specify input and output sizes
in_size = 28 * 28
out_size = 10

# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Flatten each image to a 28*28 = 784 1-D array and normalize (min 0, max 1)
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255

# One-hot encode the target variable
y_train = keras.utils.to_categorical(y_train.astype('int32'), 10)
y_test = keras.utils.to_categorical(y_test.astype('int32'), 10)
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
In [ ]:
# Define the MLP model
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(in_size,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(out_size, activation='softmax'))
In [ ]:
# Compile the model
model.compile(
    loss='categorical_crossentropy',
    optimizer='RMSprop',
    metrics=['accuracy']
)
In [ ]:
# Train
hist = model.fit(
        x_train, y_train,
        batch_size=128,
        epochs=50,
        verbose=1,
        validation_data=(x_test, y_test)
        )
Epoch 1/50
469/469 [==============================] - 2s 3ms/step - loss: 0.2449 - accuracy: 0.9247 - val_loss: 0.1257 - val_accuracy: 0.9621
Epoch 2/50
469/469 [==============================] - 1s 3ms/step - loss: 0.1010 - accuracy: 0.9691 - val_loss: 0.0989 - val_accuracy: 0.9704
Epoch 3/50
469/469 [==============================] - 1s 3ms/step - loss: 0.0761 - accuracy: 0.9766 - val_loss: 0.0715 - val_accuracy: 0.9787
... (epochs 4-49 abridged: training accuracy keeps rising while val_loss bottoms out early and then creeps upward) ...
Epoch 50/50
469/469 [==============================] - 1s 3ms/step - loss: 0.0104 - accuracy: 0.9977 - val_loss: 0.2148 - val_accuracy: 0.9850
In [ ]:
# Evaluate the model
score = model.evaluate(x_test, y_test, verbose=1)
print("正解率=", score[1], "loss=", score[0])
313/313 [==============================] - 1s 2ms/step - loss: 0.2148 - accuracy: 0.9850
accuracy = 0.9850000143051147 loss = 0.21476562321186066
In [ ]:
# Plot the training history

# Accuracy over epochs
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title('Accuracy')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
In [ ]:
# Loss over epochs
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('Loss')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
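
The loss plot makes the overfitting visible: val_loss bottoms out within the first few epochs and then climbs while the training loss keeps falling. A common countermeasure (a sketch, not used above) is Keras's EarlyStopping callback, which halts training once val_loss stops improving and restores the best weights:

from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
hist = model.fit(x_train, y_train, batch_size=128, epochs=50, verbose=1,
                 validation_data=(x_test, y_test), callbacks=[early_stop])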

Applying a CNN (Convolutional Neural Network)

  • Build the same classifier with a CNN
In [ ]:
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.optimizers import RMSprop
from keras.datasets import mnist
import matplotlib.pyplot as plt
In [ ]:
# Specify input and output sizes
im_rows = 28 # image height in pixels
im_cols = 28 # image width in pixels
im_color = 1 # number of color channels (grayscale)
in_shape = (im_rows, im_cols, im_color) # input shape
out_size = 10 # output size (10 classes)
In [ ]:
# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train[0].shape
Out[ ]:
(28, 28)
In [ ]:
x_train.shape
Out[ ]:
(60000, 28, 28)
In [ ]:
y_train.shape
Out[ ]:
(60000,)
In [ ]:
# Reshape each image from a 2-D to a 3-D array, then normalize.
# (So far each image was flattened to 1-D; a CNN needs the 3-D (rows, cols, channels) form.)
x_train = x_train.reshape(-1, im_rows, im_cols, im_color)
x_train = x_train.astype('float32') / 255
x_test = x_test.reshape(-1, im_rows, im_cols, im_color)
x_test = x_test.astype('float32') / 255
In [ ]:
x_train.shape
Out[ ]:
(60000, 28, 28, 1)
In [ ]:
x_train[0][:2]
Out[ ]:
array([[[0.],
        [0.],
        ...,
        [0.]],

       [[0.],
        [0.],
        ...,
        [0.]]], dtype=float32)
In [ ]:
# One-hot encode the target variable
y_train = keras.utils.to_categorical(y_train.astype('int32'), 10)
y_test = keras.utils.to_categorical(y_test.astype('int32'), 10)
In [ ]:
# Define the CNN model
model = Sequential()
model.add(Conv2D(
            32,
            kernel_size=(3, 3),
            activation='relu',
            input_shape=in_shape
            ))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(out_size, activation='softmax'))
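
To see how the shapes flow through this network, model.summary() prints each layer's output shape; the expected values below are a sketch computed from the layer settings above:

model.summary()
# input          (28, 28, 1)
# Conv2D(32)     (26, 26, 32)  # 3x3 kernel, no padding: 28 - 3 + 1 = 26
# Conv2D(64)     (24, 24, 64)
# MaxPooling2D   (12, 12, 64)  # 2x2 pooling halves each spatial dimension
# Flatten        (9216,)       # 12 * 12 * 64
# Dense(128)     (128,)
# Dense(10)      (10,)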
In [ ]:
# Compile the model
model.compile(
    loss='categorical_crossentropy',
    optimizer=RMSprop(),
    metrics=['accuracy']
)
In [ ]:
# Train
hist = model.fit(
        x_train, y_train,
        batch_size=128,
        epochs=12,
        verbose=1,
        validation_data=(x_test, y_test)
)
Epoch 1/12
469/469 [==============================] - 3s 6ms/step - loss: 0.2260 - accuracy: 0.9322 - val_loss: 0.0493 - val_accuracy: 0.9836
Epoch 2/12
469/469 [==============================] - 3s 6ms/step - loss: 0.0797 - accuracy: 0.9769 - val_loss: 0.0500 - val_accuracy: 0.9819
Epoch 3/12
469/469 [==============================] - 3s 6ms/step - loss: 0.0614 - accuracy: 0.9820 - val_loss: 0.0417 - val_accuracy: 0.9863
Epoch 4/12
469/469 [==============================] - 3s 6ms/step - loss: 0.0523 - accuracy: 0.9847 - val_loss: 0.0319 - val_accuracy: 0.9891
Epoch 5/12
469/469 [==============================] - 3s 6ms/step - loss: 0.0476 - accuracy: 0.9865 - val_loss: 0.0330 - val_accuracy: 0.9887
Epoch 6/12
469/469 [==============================] - 3s 6ms/step - loss: 0.0457 - accuracy: 0.9869 - val_loss: 0.0325 - val_accuracy: 0.9909
Epoch 7/12
469/469 [==============================] - 3s 6ms/step - loss: 0.0460 - accuracy: 0.9865 - val_loss: 0.0310 - val_accuracy: 0.9895
Epoch 8/12
469/469 [==============================] - 3s 6ms/step - loss: 0.0466 - accuracy: 0.9870 - val_loss: 0.0384 - val_accuracy: 0.9892
Epoch 9/12
469/469 [==============================] - 3s 6ms/step - loss: 0.0475 - accuracy: 0.9868 - val_loss: 0.0400 - val_accuracy: 0.9894
Epoch 10/12
469/469 [==============================] - 3s 6ms/step - loss: 0.0492 - accuracy: 0.9865 - val_loss: 0.0370 - val_accuracy: 0.9888
Epoch 11/12
469/469 [==============================] - 3s 6ms/step - loss: 0.0498 - accuracy: 0.9862 - val_loss: 0.0443 - val_accuracy: 0.9886
Epoch 12/12
469/469 [==============================] - 3s 6ms/step - loss: 0.0475 - accuracy: 0.9870 - val_loss: 0.0340 - val_accuracy: 0.9908
In [ ]:
# Evaluate the model
score = model.evaluate(x_test, y_test, verbose=1)
print("正解率=", score[1], "loss=", score[0])
313/313 [==============================] - 1s 2ms/step - loss: 0.0340 - accuracy: 0.9908
accuracy = 0.9908000230789185 loss = 0.03403092548251152
In [ ]:
# Plot the training history

# Accuracy over epochs
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title('Accuracy')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
In [ ]:
# Loss over epochs
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('Loss')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
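
Since the CNN takes a few minutes to train, the fitted model can be saved for later reuse; a minimal sketch using Keras's save/load API (the filename is illustrative):

model.save('mnist_cnn.h5')  # architecture + weights in a single file
# later / elsewhere:
# model = keras.models.load_model('mnist_cnn.h5')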

Object Recognition in Photos

In [ ]:
# Inspect the photo data
from keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 6s 0us/step
In [ ]:
from PIL import Image
In [ ]:
plt.figure(figsize=(10, 10))
labels = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(40):
    im = Image.fromarray(x_train[i])
    plt.subplot(5, 8, i+1)
    plt.title(labels[y_train[i][0]])
    plt.tick_params(labelbottom=False, bottom=False) # hide x-axis ticks and labels
    plt.tick_params(labelleft=False, left=False) # hide y-axis ticks and labels
    plt.imshow(im)
plt.show()
In [ ]:
x_train.shape
Out[ ]:
(50000, 32, 32, 3)
In [ ]:
x_train[0]
Out[ ]:
array([[[ 59,  62,  63],
        [ 43,  46,  45],
        [ 50,  48,  43],
        ...,
        [158, 132, 108],
        [152, 125, 102],
        [148, 124, 103]],

       [[ 16,  20,  20],
        [  0,   0,   0],
        [ 18,   8,   0],
        ...,
        [123,  88,  55],
        [119,  83,  50],
        [122,  87,  57]],

       [[ 25,  24,  21],
        [ 16,   7,   0],
        [ 49,  27,   8],
        ...,
        [118,  84,  50],
        [120,  84,  50],
        [109,  73,  42]],

       ...,

       [[208, 170,  96],
        [201, 153,  34],
        [198, 161,  26],
        ...,
        [160, 133,  70],
        [ 56,  31,   7],
        [ 53,  34,  20]],

       [[180, 139,  96],
        [173, 123,  42],
        [186, 144,  30],
        ...,
        [184, 148,  94],
        [ 97,  62,  34],
        [ 83,  53,  34]],

       [[177, 144, 116],
        [168, 129,  94],
        [179, 142,  87],
        ...,
        [216, 184, 140],
        [151, 118,  84],
        [123,  92,  72]]], dtype=uint8)

Implementation

In [ ]:
import matplotlib.pyplot as plt
import keras
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense, Dropout
In [ ]:
num_classes = 10
im_rows = 32
im_cols = 32
im_size = im_rows * im_cols * 3
In [ ]:
# Load the data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
In [ ]:
# Flatten each image to a 1-D array, then normalize
x_train = x_train.reshape(-1, im_size).astype('float32') / 255
x_test = x_test.reshape(-1, im_size).astype('float32') / 255

# One-hot encode the target variable
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
In [ ]:
x_train[0]
Out[ ]:
array([0.23137255, 0.24313726, 0.24705882, ..., 0.48235294, 0.36078432,
       0.28235295], dtype=float32)
In [ ]:
y_train
Out[ ]:
array([[0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 1.],
       [0., 0., 0., ..., 0., 0., 1.],
       ...,
       [0., 0., 0., ..., 0., 0., 1.],
       [0., 1., 0., ..., 0., 0., 0.],
       [0., 1., 0., ..., 0., 0., 0.]], dtype=float32)
In [ ]:
y_train.shape
Out[ ]:
(50000, 10)
In [ ]:
# Define the model
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(im_size,)))
model.add(Dense(num_classes, activation='softmax'))
In [ ]:
# Compile the model
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)
In [ ]:
# Train
hist = model.fit(
    x_train, y_train,
    batch_size=32,
    epochs=50,
    verbose=1,
    validation_data=(x_test, y_test)
)
Epoch 1/50
1563/1563 [==============================] - 4s 3ms/step - loss: 1.9000 - accuracy: 0.3221 - val_loss: 1.7999 - val_accuracy: 0.3418
Epoch 2/50
1563/1563 [==============================] - 4s 3ms/step - loss: 1.7169 - accuracy: 0.3846 - val_loss: 1.6430 - val_accuracy: 0.4158
Epoch 3/50
1563/1563 [==============================] - 4s 3ms/step - loss: 1.6544 - accuracy: 0.4089 - val_loss: 1.6997 - val_accuracy: 0.3872
... (epochs 4-37 abridged: accuracy improves only slowly, with val_accuracy plateauing around 0.46-0.48) ...
Epoch 38/50
1563/1563 [==============================] - 4s 3ms/step - loss: 1.3469 - accuracy: 0.5205 - val_loss: 1.4897 - val_accuracy: 0.4817
Epoch 39/50
1563/1563 [==============================] - 4s 3ms/step - loss: 1.3417 - accuracy: 0.5230 - val_loss: 1.5386 - val_accuracy: 0.4578
Epoch 40/50
1563/1563 [==============================] - 4s 3ms/step - loss: 1.3411 - accuracy: 0.5233 - val_loss: 1.4951 - val_accuracy: 0.4804
Epoch 41/50
1563/1563 [==============================] - 4s 3ms/step - loss: 1.3394 - accuracy: 0.5244 - val_loss: 1.5264 - val_accuracy: 0.4691
Epoch 42/50
1563/1563 [==============================] - 4s 3ms/step - loss: 1.3373 - accuracy: 0.5230 - val_loss: 1.5412 - val_accuracy: 0.4635
Epoch 43/50
1563/1563 [==============================] - 5s 3ms/step - loss: 1.3327 - accuracy: 0.5262 - val_loss: 1.5496 - val_accuracy: 0.4640
Epoch 44/50
1563/1563 [==============================] - 5s 3ms/step - loss: 1.3283 - accuracy: 0.5283 - val_loss: 1.5206 - val_accuracy: 0.4679
Epoch 45/50
1563/1563 [==============================] - 4s 3ms/step - loss: 1.3253 - accuracy: 0.5260 - val_loss: 1.4878 - val_accuracy: 0.4827
Epoch 46/50
1563/1563 [==============================] - 4s 3ms/step - loss: 1.3209 - accuracy: 0.5283 - val_loss: 1.5069 - val_accuracy: 0.4694
Epoch 47/50
1563/1563 [==============================] - 4s 3ms/step - loss: 1.3238 - accuracy: 0.5295 - val_loss: 1.5342 - val_accuracy: 0.4630
Epoch 48/50
1563/1563 [==============================] - 4s 3ms/step - loss: 1.3175 - accuracy: 0.5305 - val_loss: 1.5146 - val_accuracy: 0.4678
Epoch 49/50
1563/1563 [==============================] - 4s 3ms/step - loss: 1.3164 - accuracy: 0.5301 - val_loss: 1.5245 - val_accuracy: 0.4687
Epoch 50/50
1563/1563 [==============================] - 4s 3ms/step - loss: 1.3160 - accuracy: 0.5305 - val_loss: 1.5360 - val_accuracy: 0.4708
In [ ]:
# Evaluate on the test set
score = model.evaluate(x_test, y_test, verbose=1)
print("accuracy=", score[1], "loss=", score[0])
313/313 [==============================] - 1s 2ms/step - loss: 1.5360 - accuracy: 0.4708
accuracy= 0.4708000123500824 loss= 1.5360416173934937
In [ ]:
# Plot the training history

# Accuracy
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title("Accuracy")
plt.legend(['train', 'test'], loc='upper left')
plt.show()
In [ ]:
# Loss
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title("Loss")
plt.legend(['train', 'test'], loc='upper left')
plt.show()
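With the plain MLP, validation accuracy stalls in the mid-0.40s while training accuracy keeps climbing past 0.53: flattening the images discards their spatial structure, which is precisely what the CNN below is designed to exploit.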

Running the above prediction with a CNN

In [ ]:
import matplotlib.pyplot as plt
import keras
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
In [ ]:
# Set the input and output dimensions
num_classes = 10
im_rows = 32
im_cols = 32
in_shape = (im_rows, im_cols, 3)
In [ ]:
# Load the data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 11s 0us/step
In [ ]:
print(x_train.shape)
print(x_test.shape)
(50000, 32, 32, 3)
(10000, 32, 32, 3)
In [ ]:
# Normalize the image data
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

# One-hot encode the target variable
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
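Note that, unlike in the MLP above, the images are not flattened here: Conv2D layers consume the 32x32x3 arrays directly, so only the pixel scaling and one-hot encoding are applied.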
In [ ]:
# Define the model

model = Sequential()
model.add(Conv2D(32, (3,3), padding='same', input_shape=in_shape))
model.add(Activation('relu'))
model.add(Conv2D(32, (3,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))

model.add(Conv2D(64, (3,3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3,3)))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
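A quick walk through the feature-map sizes this stack implies (a sketch of the shape arithmetic; model.summary() reports the same numbers):

# 32x32x3 input
# Conv2D(32, 3x3, same)  -> 32x32x32
# Conv2D(32, 3x3, valid) -> 30x30x32
# MaxPooling2D(2x2)      -> 15x15x32
# Conv2D(64, 3x3, same)  -> 15x15x64
# Conv2D(64, 3x3, valid) -> 13x13x64
# MaxPooling2D(2x2)      -> 6x6x64
# Flatten                -> 6*6*64 = 2304 units into Dense(512)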
In [ ]:
# Compile the model
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)
In [ ]:
# Train the model
hist = model.fit(
    x_train, y_train,
    batch_size=32,
    epochs=50,
    verbose=1,
    validation_data=(x_test, y_test)
)
Epoch 1/50
1563/1563 [==============================] - 8s 5ms/step - loss: 1.4042 - accuracy: 0.4898 - val_loss: 1.0820 - val_accuracy: 0.6184
Epoch 2/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.0390 - accuracy: 0.6336 - val_loss: 0.9465 - val_accuracy: 0.6697
Epoch 3/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.9062 - accuracy: 0.6804 - val_loss: 0.8407 - val_accuracy: 0.7102
Epoch 4/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.8308 - accuracy: 0.7088 - val_loss: 0.7528 - val_accuracy: 0.7358
Epoch 5/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.7661 - accuracy: 0.7305 - val_loss: 0.7379 - val_accuracy: 0.7484
Epoch 6/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.7221 - accuracy: 0.7489 - val_loss: 0.7582 - val_accuracy: 0.7424
Epoch 7/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.6864 - accuracy: 0.7579 - val_loss: 0.6939 - val_accuracy: 0.7681
Epoch 8/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.6565 - accuracy: 0.7691 - val_loss: 0.6875 - val_accuracy: 0.7705
Epoch 9/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.6341 - accuracy: 0.7791 - val_loss: 0.7259 - val_accuracy: 0.7667
Epoch 10/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.6097 - accuracy: 0.7839 - val_loss: 0.7834 - val_accuracy: 0.7633
Epoch 11/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.5946 - accuracy: 0.7929 - val_loss: 0.6939 - val_accuracy: 0.7709
Epoch 12/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.5733 - accuracy: 0.8004 - val_loss: 0.6997 - val_accuracy: 0.7634
Epoch 13/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.5611 - accuracy: 0.8054 - val_loss: 0.6987 - val_accuracy: 0.7653
Epoch 14/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.5461 - accuracy: 0.8087 - val_loss: 0.6908 - val_accuracy: 0.7785
Epoch 15/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.5304 - accuracy: 0.8149 - val_loss: 0.6926 - val_accuracy: 0.7764
Epoch 16/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.5173 - accuracy: 0.8202 - val_loss: 0.7070 - val_accuracy: 0.7738
Epoch 17/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.5193 - accuracy: 0.8191 - val_loss: 0.7351 - val_accuracy: 0.7781
Epoch 18/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.5016 - accuracy: 0.8219 - val_loss: 0.6981 - val_accuracy: 0.7744
Epoch 19/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4994 - accuracy: 0.8240 - val_loss: 0.7147 - val_accuracy: 0.7754
Epoch 20/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4833 - accuracy: 0.8314 - val_loss: 0.7154 - val_accuracy: 0.7854
Epoch 21/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4902 - accuracy: 0.8311 - val_loss: 0.8038 - val_accuracy: 0.7761
Epoch 22/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4792 - accuracy: 0.8331 - val_loss: 0.7131 - val_accuracy: 0.7728
Epoch 23/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4728 - accuracy: 0.8350 - val_loss: 0.7044 - val_accuracy: 0.7806
Epoch 24/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4636 - accuracy: 0.8390 - val_loss: 0.7353 - val_accuracy: 0.7837
Epoch 25/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4561 - accuracy: 0.8429 - val_loss: 0.7175 - val_accuracy: 0.7864
Epoch 26/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4580 - accuracy: 0.8406 - val_loss: 0.7114 - val_accuracy: 0.7837
Epoch 27/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4511 - accuracy: 0.8431 - val_loss: 0.6879 - val_accuracy: 0.7870
Epoch 28/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4432 - accuracy: 0.8483 - val_loss: 0.7176 - val_accuracy: 0.7806
Epoch 29/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4460 - accuracy: 0.8452 - val_loss: 0.7222 - val_accuracy: 0.7848
Epoch 30/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4437 - accuracy: 0.8460 - val_loss: 0.8287 - val_accuracy: 0.7822
Epoch 31/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4325 - accuracy: 0.8506 - val_loss: 0.7304 - val_accuracy: 0.7853
Epoch 32/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4251 - accuracy: 0.8546 - val_loss: 0.7716 - val_accuracy: 0.7886
Epoch 33/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4308 - accuracy: 0.8516 - val_loss: 0.7439 - val_accuracy: 0.7851
Epoch 34/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4345 - accuracy: 0.8526 - val_loss: 0.7781 - val_accuracy: 0.7844
Epoch 35/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4201 - accuracy: 0.8561 - val_loss: 0.7425 - val_accuracy: 0.7840
Epoch 36/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4140 - accuracy: 0.8570 - val_loss: 0.7977 - val_accuracy: 0.7788
Epoch 37/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4161 - accuracy: 0.8562 - val_loss: 0.7074 - val_accuracy: 0.7894
Epoch 38/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4184 - accuracy: 0.8566 - val_loss: 0.7373 - val_accuracy: 0.7850
Epoch 39/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4094 - accuracy: 0.8615 - val_loss: 0.7883 - val_accuracy: 0.7839
Epoch 40/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4142 - accuracy: 0.8580 - val_loss: 0.7342 - val_accuracy: 0.7864
Epoch 41/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4070 - accuracy: 0.8602 - val_loss: 0.7364 - val_accuracy: 0.7761
Epoch 42/50
1563/1563 [==============================] - 8s 5ms/step - loss: 0.4124 - accuracy: 0.8582 - val_loss: 0.7506 - val_accuracy: 0.7888
Epoch 43/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.3905 - accuracy: 0.8659 - val_loss: 0.7750 - val_accuracy: 0.7768
Epoch 44/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.3987 - accuracy: 0.8629 - val_loss: 0.7926 - val_accuracy: 0.7799
Epoch 45/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4030 - accuracy: 0.8623 - val_loss: 0.7940 - val_accuracy: 0.7783
Epoch 46/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.4048 - accuracy: 0.8613 - val_loss: 0.7456 - val_accuracy: 0.7777
Epoch 47/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.3923 - accuracy: 0.8668 - val_loss: 0.7727 - val_accuracy: 0.7899
Epoch 48/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.3944 - accuracy: 0.8657 - val_loss: 0.8327 - val_accuracy: 0.7755
Epoch 49/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.3923 - accuracy: 0.8685 - val_loss: 0.7215 - val_accuracy: 0.7852
Epoch 50/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.3887 - accuracy: 0.8693 - val_loss: 0.8165 - val_accuracy: 0.7794
In [ ]:
# Evaluate on the test set
score = model.evaluate(x_test, y_test, verbose=1)
print("accuracy=", score[1], "loss=", score[0])
313/313 [==============================] - 1s 3ms/step - loss: 0.8165 - accuracy: 0.7794
accuracy= 0.7793999910354614 loss= 0.816506564617157
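At roughly 78% test accuracy versus the MLP's 47%, the convolutional model makes less than half as many errors on the same data, although the curves below show it too begins to overfit: training accuracy reaches about 0.87 while validation loss drifts upward from its minimum near 0.69.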
In [ ]:
# Plot the training history

# Accuracy
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title("Accuracy")
plt.legend(['train', 'test'], loc='upper left')
plt.show()
In [ ]:
# Loss
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title("Loss")
plt.legend(['train', 'test'], loc='upper left')
plt.show()
In [ ]:
model.save_weights('/content/drive/My Drive/画像認識/DL_data/cifar10-weight.h5')
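save_weights stores only the layer weights (HDF5 here), not the architecture, so the same Sequential model must be built, or still be in memory, before load_weights can restore them, as in the prediction section below.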
In [ ]:
x_test[0].shape
Out[ ]:
(32, 32, 3)

Predicting a photo with the above model

In [ ]:
import cv2
import numpy as np
In [ ]:
# CIFAR-10 class labels, in index order
labels = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
im_size = 32 * 32 * 3
In [ ]:
# Load the saved weights
model.load_weights('/content/drive/My Drive/画像認識/DL_data/cifar10-weight.h5')
In [ ]:
# Load the image with OpenCV
im = cv2.imread('/content/drive/My Drive/画像認識/DL_data/cat.jpg')
# Convert the color space (BGR -> RGB) and resize to 32x32
im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
im = cv2.resize(im, (32, 32))
plt.imshow(im)
plt.show()
In [ ]:
# Apply the same [0, 1] scaling used on the training data
im = im.astype('float32') / 255

# Predict
r = model.predict(np.array([im]), batch_size=32, verbose=1)
res = r[0]
1/1 [==============================] - 0s 1ms/step
In [ ]:
# Print the per-class scores
for i, acc in enumerate(res):
    print(labels[i], '=', int(acc * 100))
print('---')
print('Predicted class = ', labels[res.argmax()])
airplane = 7
automobile = 0
bird = 20
cat = 30
deer = 3
dog = 13
frog = 13
horse = 1
ship = 0
truck = 7
---
Predicted class =  cat
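The ten values are softmax probabilities that sum to 1; int(acc * 100) truncates each one, which is why the printed percentages do not add up to exactly 100. The model puts the largest share, about 30%, on cat.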
In [ ]: