Introduction:
Coronavirus disease 2019 (COVID-19) broke out at the end of 2019 and, in 2021, continues to harm millions of people and businesses.
As the world adapts to the pandemic and prepares to return to normal life, a wave of anxiety has spread among everyone, especially those planning to resume in-person activities.
Several studies have shown that wearing a face mask significantly reduces the risk of virus transmission and also provides a sense of safety.
However, manually monitoring compliance with such a policy is impossible. Technology is key here. I intend to present a deep-learning-based system that can detect improper use of face masks.
The system is built on a two-stage convolutional neural network (CNN) architecture that can detect both masked and unmasked faces and can be integrated with pre-installed CCTV cameras. This will help track safety violations, promote mask wearing, and create a safe working environment.
Scientists and technologists have made rapid breakthroughs, letting us do what was impossible just a few decades ago. Machine learning and artificial intelligence have simplified our lives and provided solutions to many complex problems across a wide range of disciplines. When it comes to visual perception tasks, modern computer vision algorithms are approaching human-level performance. Computer vision has proved to be a breakthrough component of modern technology, from image classification to video analytics.
It is no secret that technology has been a lifeline for many in the fight against the novel coronavirus disease (COVID-19). It has changed our working habits, and working from home has become part of everyday life.
In some cases, it can be difficult for a company or industry to adapt to this new standard. People are still wary of returning to work, even as the epidemic gradually subsides and such industries are ready to resume in-person operations. An estimated 65% of employees are now afraid to return to work. According to many studies, face masks reduce the likelihood of virus transmission and provide a sense of protection.
However, enforcing such a rule in person at large facilities and tracking every violation is impossible. Computer vision offers a better solution.
Let us look at a system that can identify the presence of face masks in images and video using a combination of image classification, object detection, object tracking, and video analysis.
Data ingestion and labeling:
Importing the required libraries
import os
import cv2
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

import warnings
warnings.filterwarnings('ignore')
Understanding the image data:
img = image.load_img("dataset-face-mask/dataset/train_validate/masked/0_0_0 copy 9.jpg")
plt.imshow(img)
<matplotlib.image.AxesImage at 0x1409ba7edf0>
Printing the shape of the same image's matrix (cv2.imread returns height, width, channels):
cv2.imread("dataset-face-mask/dataset/train_validate/masked/0_0_0 copy 9.jpg").shape
(514, 319, 3)
## Checking how many images there are in the directories for the masked and unmasked categories
print("The data distribution is as follows:\n")
print("Training Data:")
print("With mask-", len(os.listdir('dataset-face-mask/dataset/train_validate/masked')))
print("Without mask-", len(os.listdir('dataset-face-mask/dataset/train_validate/unmasked')))
print("\n")
print("Testing Data: ")
print("With mask-", len(os.listdir('dataset-face-mask/dataset/test/masked')))
print("Without mask-", len(os.listdir('dataset-face-mask/dataset/test/unmasked')))
The data distribution is as follows:

Training Data:
With mask- 887
Without mask- 840

Testing Data:
With mask- 160
Without mask- 160
Loading the data with ImageDataGenerator
Image size = 64x64. The same data generator is used for both training and validation.
train_validate_path = "dataset-face-mask/dataset/train_validate"

# Creating a DataGenerator with augmentation and an 80/20 train/validation split
datagen = ImageDataGenerator(rescale=1.0/255, shear_range=0.2, zoom_range=0.2,
                             validation_split=0.2, horizontal_flip=True)

# Generating training data (note: shuffle=True is generally preferred for training)
train_data_generator = datagen.flow_from_directory(train_validate_path, batch_size=32,
                                                   shuffle=False, target_size=(64,64),
                                                   subset='training')

# Generating validation data
validation_data_generator = datagen.flow_from_directory(train_validate_path, target_size=(64,64),
                                                        batch_size=32, shuffle=False,
                                                        subset='validation')
Found 1382 images belonging to 2 classes.
Found 345 images belonging to 2 classes.
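A quick sanity check on these counts. Keras applies validation_split per class directory, taking int(n * split) files for the validation subset (an implementation detail of ImageDataGenerator); under that assumption the 887 + 840 training-directory images split exactly as reported:

```python
# Per-class counts printed in the distribution output above
masked, unmasked = 887, 840
split = 0.2

# Validation takes int(n * split) files from each class directory
val = int(masked * split) + int(unmasked * split)                    # 177 + 168
train = (masked - int(masked * split)) + (unmasked - int(unmasked * split))

print(train, val)  # 1382 345
```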
Loading the test data
test_imgs_dir = "dataset-face-mask/dataset/test"

# Note: applying shear/zoom/flip augmentation to test data is unusual;
# typically only rescaling is applied at evaluation time.
test_datagen = ImageDataGenerator(rescale=1.0/255, shear_range=0.2, zoom_range=0.2,
                                  horizontal_flip=True)
test_data_generator = test_datagen.flow_from_directory(test_imgs_dir, batch_size=32,
                                                       shuffle=False, target_size=(64,64))
Found 320 images belonging to 2 classes.
Building the model to the given specification:
model = tf.keras.models.Sequential([
    Conv2D(32, (3,3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D(2,2),
    Conv2D(64, (3,3), activation='relu'),
    Conv2D(128, (3,3), padding='same', activation='relu'),
    MaxPooling2D(2,2),
    Flatten(),
    Dense(256, activation='relu'),
    Dense(2, activation='softmax')
])

# Compiling the model with the adam optimizer and binary_crossentropy loss
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Printing the model summary
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 62, 62, 32)        896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 31, 31, 32)        0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 29, 29, 64)        18496
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 29, 29, 128)       73856
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 14, 14, 128)       0
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0
_________________________________________________________________
dense (Dense)                (None, 256)               6422784
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 514
=================================================================
Total params: 6,516,546
Trainable params: 6,516,546
Non-trainable params: 0
_________________________________________________________________
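The Param # column of the summary can be reproduced by hand, which is a useful check that the architecture is what we intended:

```python
# Conv2D params = (kernel_h * kernel_w * in_channels + 1 bias) * filters
assert (3*3*3   + 1) * 32  == 896      # conv2d
assert (3*3*32  + 1) * 64  == 18496    # conv2d_1
assert (3*3*64  + 1) * 128 == 73856    # conv2d_2

# Flatten output: the final 14x14x128 feature map
assert 14 * 14 * 128 == 25088

# Dense params = (inputs + 1 bias) * units
assert (25088 + 1) * 256 == 6422784    # dense
assert (256   + 1) * 2   == 514        # dense_1

total = 896 + 18496 + 73856 + 6422784 + 514
print(total)  # 6516546
```

Note that almost all of the 6.5M parameters sit in the first Dense layer, which is one motivation for the deeper, more aggressively pooled improved model later on.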
Training the model
# Defining the variables needed and running the model for 70 epochs
val_loss = 0
epochs = 70
batch_size = 32

history = model.fit(train_data_generator,
                    steps_per_epoch=train_data_generator.samples // batch_size,
                    epochs=epochs,
                    validation_steps=validation_data_generator.samples // batch_size,
                    validation_data=validation_data_generator)
Epoch 1/70
43/43 [==============================] - 19s 433ms/step - loss: 1.1941 - accuracy: 0.4578 - val_loss: 0.8657 - val_accuracy: 0.5130
Epoch 2/70
43/43 [==============================] - 18s 407ms/step - loss: 0.8044 - accuracy: 0.6393 - val_loss: 0.6663 - val_accuracy: 0.6174
Epoch 3/70
43/43 [==============================] - 18s 422ms/step - loss: 0.7397 - accuracy: 0.5458 - val_loss: 0.6178 - val_accuracy: 0.6551
Epoch 4/70
43/43 [==============================] - 18s 412ms/step - loss: 0.4866 - accuracy: 0.7623 - val_loss: 0.2518 - val_accuracy: 0.9333
Epoch 5/70
43/43 [==============================] - 18s 414ms/step - loss: 0.2317 - accuracy: 0.9235 - val_loss: 0.1702 - val_accuracy: 0.9362
...
Epoch 69/70
43/43 [==============================] - 19s 433ms/step - loss: 0.0037 - accuracy: 0.9994 - val_loss: 0.1631 - val_accuracy: 0.9739
Epoch 70/70
43/43 [==============================] - 19s 439ms/step - loss: 0.0041 - accuracy: 0.9973 - val_loss: 0.1580 - val_accuracy: 0.9739
## Printing the final training loss and accuracy obtained after the model has been trained
print('Final training loss \t', history.history['loss'][-1]*100)
print('Final training accuracy ', history.history['accuracy'][-1]*100)
Final training loss      0.5243149120360613
Final training accuracy  99.7829258441925
# Plotting training and validation loss
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)

plt.plot(epochs, loss, color='red', label='Training loss')
plt.plot(epochs, val_loss, color='green', label='Validation loss')
plt.title('Training and Validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

# Plotting training and validation accuracy
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

plt.plot(epochs, acc, color='red', label='Training acc')
plt.plot(epochs, val_acc, color='green', label='Validation acc')
plt.title('Training and Validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
## Evaluating the model accuracy on the test data generator
test_data_generator.reset()  # resetting the generator
model.evaluate(test_data_generator, batch_size=32, verbose=1)
10/10 [==============================] - 2s 143ms/step - loss: 1.2123 - accuracy: 0.7656 [1.2122743129730225, 0.765625]
Confusion Matrix
from sklearn.metrics import classification_report, confusion_matrix

num_of_test_samples = 320
batch_size = 32

test_data_generator.reset()
Y_predict = model.predict(test_data_generator, steps=num_of_test_samples // batch_size)
y_predict = np.argmax(Y_predict, axis=1)

print('Confusion Matrix: ')
print(confusion_matrix(test_data_generator.classes, y_predict))
Confusion Matrix: 
[[114  46]
 [ 24 136]]
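The headline metrics in the classification report that follows can be derived directly from this matrix (rows are true classes, columns are predicted classes):

```python
# Confusion matrix copied from the output above
cm = [[114, 46],
      [24, 136]]

total = sum(sum(row) for row in cm)
accuracy = (cm[0][0] + cm[1][1]) / total              # correct predictions / all samples
precision_0 = cm[0][0] / (cm[0][0] + cm[1][0])        # correct class-0 predictions / all class-0 predictions
recall_0 = cm[0][0] / (cm[0][0] + cm[0][1])           # correct class-0 predictions / all true class-0 samples

print(accuracy, precision_0, recall_0)  # 0.78125 0.8260869565217391 0.7125
```

These match the accuracy, precision, and recall values for the first class in the report printed below.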
from sklearn.metrics import confusion_matrix
import seaborn as sns

plt.figure(figsize=(10,8))
sns.heatmap(confusion_matrix(test_data_generator.classes, y_predict), annot=True, fmt="d");
Classification Report
## Printing the classification report with two target names.
## Note: flow_from_directory assigns class indices alphabetically,
## so class 0 corresponds to the 'masked' directory and class 1 to 'unmasked'.
print('Classification Report')
target_names = ['without_mask','with_mask']
model_cr = classification_report(test_data_generator.classes, y_predict,
                                 target_names=target_names, output_dict=True)
print(model_cr)
Classification Report {'without_mask': {'precision': 0.8260869565217391, 'recall': 0.7125, 'f1-score': 0.7651006711409397, 'support': 160}, 'with_mask': {'precision': 0.7472527472527473, 'recall': 0.85, 'f1-score': 0.7953216374269005, 'support': 160}, 'accuracy': 0.78125, 'macro avg': {'precision': 0.7866698518872433, 'recall': 0.78125, 'f1-score': 0.7802111542839201, 'support': 320}, 'weighted avg': {'precision': 0.7866698518872431, 'recall': 0.78125, 'f1-score': 0.7802111542839201, 'support': 320}}
import pandas as pd

default = pd.DataFrame(model_cr).transpose()
default['model'] = 'default'
default
ROC Plot
from sklearn.metrics import roc_curve, auc, roc_auc_score
import matplotlib.pyplot as plt
test_data_generator.reset()  # resetting generator
y_pred = model.predict(test_data_generator, verbose=True)

# Using the predicted probability of class 1 (rather than argmax labels),
# so the ROC curve is traced over all thresholds instead of a single point
y_score = y_pred[:, 1]

fpr_keras, tpr_keras, thresholds_keras = roc_curve(test_data_generator.classes, y_score)
auc_keras = auc(fpr_keras, tpr_keras)

plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='area = {:.3f}'.format(auc_keras))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
10/10 [==============================] - 2s 137ms/step
Building an improved model
## Added dropout of 0.3 to the architecture and a sigmoid output
improved_model = tf.keras.models.Sequential([
    Conv2D(16, 3, activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D(2,2),
    Conv2D(32, 3, activation='relu'),
    MaxPooling2D(2,2),
    Conv2D(64, 3, activation='relu'),
    MaxPooling2D(2,2),
    Conv2D(128, 3, activation='relu'),
    MaxPooling2D(2,2),
    Conv2D(256, 3, padding='same', activation='relu'),
    MaxPooling2D(2,2),
    Flatten(),
    Dropout(0.3),
    Dense(256, activation='relu'),
    Dense(2, activation='sigmoid')  # the dense layer has a shape of 2 as we have only 2 classes
])

improved_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
improved_model.summary()
# Defining the variables needed
val_loss = 0
epochs = 30
batch_size = 32

history_im = improved_model.fit(train_data_generator,
                                steps_per_epoch=train_data_generator.samples // batch_size,
                                epochs=epochs,
                                validation_steps=validation_data_generator.samples // batch_size,
                                validation_data=validation_data_generator)
Epoch 1/30
43/43 [==============================] - 10s 211ms/step - loss: 0.7022 - accuracy: 0.3953 - val_loss: 0.6862 - val_accuracy: 0.8783
Epoch 2/30
43/43 [==============================] - 9s 218ms/step - loss: 0.6839 - accuracy: 0.6777 - val_loss: 0.5881 - val_accuracy: 0.7159
Epoch 3/30
43/43 [==============================] - 9s 202ms/step - loss: 0.5998 - accuracy: 0.6759 - val_loss: 0.3204 - val_accuracy: 0.8435
Epoch 4/30
43/43 [==============================] - 9s 213ms/step - loss: 0.3325 - accuracy: 0.8463 - val_loss: 0.3112 - val_accuracy: 0.9159
Epoch 5/30
43/43 [==============================] - 9s 212ms/step - loss: 0.1643 - accuracy: 0.9515 - val_loss: 0.2323 - val_accuracy: 0.9304
...
Epoch 29/30
43/43 [==============================] - 12s 266ms/step - loss: 0.0292 - accuracy: 0.9911 - val_loss: 0.0551 - val_accuracy: 0.9826
Epoch 30/30
43/43 [==============================] - 9s 218ms/step - loss: 0.0766 - accuracy: 0.9714 - val_loss: 0.0603 - val_accuracy: 0.9855
print('Final training loss \t', history_im.history['loss'][-1]*100)
print('Final training accuracy ', history_im.history['accuracy'][-1]*100)
Final training loss      9.133630990982056
Final training accuracy  96.59913182258606
# Plotting training and validation loss
loss = history_im.history['loss']
val_loss = history_im.history['val_loss']
epochs = range(1, len(loss) + 1)

plt.plot(epochs, loss, color='red', label='Training loss')
plt.plot(epochs, val_loss, color='green', label='Validation loss')
plt.title('Training and Validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

# Plotting training and validation accuracy
acc = history_im.history['accuracy']
val_acc = history_im.history['val_accuracy']

plt.plot(epochs, acc, color='red', label='Training acc')
plt.plot(epochs, val_acc, color='green', label='Validation acc')
plt.title('Training and Validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
Confusion Matrix
from sklearn.metrics import classification_report, confusion_matrix

num_of_test_samples = 320
batch_size = 32

test_data_generator.reset()
Y_predict_im = improved_model.predict(test_data_generator, steps=num_of_test_samples // batch_size)
y_predict_im = np.argmax(Y_predict_im, axis=1)

print('Confusion Matrix: ')
print(confusion_matrix(test_data_generator.classes, y_predict_im))
Confusion Matrix: 
[[142  18]
 [ 30 130]]
## Evaluating the model on the test data generator with batch size = 32
test_data_generator.reset()  # resetting generator
improved_model.evaluate(test_data_generator, batch_size=32, verbose=1)
10/10 [==============================] - 1s 110ms/step - loss: 0.6176 - accuracy: 0.8344 [0.6176078915596008, 0.8343750238418579]
Classification Report
print('Classification Report')
target_names = ['Mask','Unmasked']
im_model_cr = classification_report(test_data_generator.classes, y_predict_im,
                                    target_names=target_names, output_dict=True)
print(im_model_cr)
Classification Report {'Mask': {'precision': 0.8255813953488372, 'recall': 0.8875, 'f1-score': 0.8554216867469879, 'support': 160}, 'Unmasked': {'precision': 0.8783783783783784, 'recall': 0.8125, 'f1-score': 0.8441558441558441, 'support': 160}, 'accuracy': 0.85, 'macro avg': {'precision': 0.8519798868636078, 'recall': 0.85, 'f1-score': 0.849788765451416, 'support': 320}, 'weighted avg': {'precision': 0.8519798868636078, 'recall': 0.85, 'f1-score': 0.849788765451416, 'support': 320}}
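Before plotting the per-class comparison, the overall improvement can be quantified directly. The numbers below are copied from the two classification reports above (default model vs. improved model on the same 320 test images):

```python
# Test-set metrics copied from the two classification report outputs
default_acc = 0.78125
improved_acc = 0.85

default_macro_f1 = 0.7802111542839201
improved_macro_f1 = 0.849788765451416

print(round(improved_acc - default_acc, 5))            # 0.06875
print(round(improved_macro_f1 - default_macro_f1, 4))  # 0.0696
```

So the dropout-regularized model gains roughly 7 percentage points in both accuracy and macro F1, despite its much smaller final training accuracy, which is consistent with the default model having overfit.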
import pandas as pd

improved = pd.DataFrame(im_model_cr).transpose()
improved['model'] = 'improved'
improved
ROC Plot
test_data_generator.reset()  # resetting generator
y_pred = improved_model.predict(test_data_generator, verbose=True)

# Using the class-1 probability so the curve is traced over all thresholds
y_score = y_pred[:, 1]

fpr_keras, tpr_keras, thresholds_keras = roc_curve(test_data_generator.classes, y_score)
auc_keras = auc(fpr_keras, tpr_keras)

plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='area = {:.3f}'.format(auc_keras))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
10/10 [==============================] - 1s 129ms/step
import pandas as pd
import seaborn as sns

# Row 0 of each report is class 0 (class indices are assigned alphabetically
# by flow_from_directory), so we index positionally to compare like with like
precision = [default.iloc[0]['precision'], improved.iloc[0]['precision']]
recall = [default.iloc[0]['recall'], improved.iloc[0]['recall']]

df_plot = pd.DataFrame([recall, precision])
df_plot.columns = ['default','improved']
df_plot.index = ['recall','precision']

print("Comparison without mask")
df_plot.plot(kind='bar', stacked=True, title='Comparison without mask', figsize=(10,10));
Comparison without mask
import pandas as pd
import seaborn as sns

# Row 1 of each report is class 1
precision = [default.iloc[1]['precision'], improved.iloc[1]['precision']]
recall = [default.iloc[1]['recall'], improved.iloc[1]['recall']]

df_plot = pd.DataFrame([recall, precision])
df_plot.columns = ['default','improved']
df_plot.index = ['recall','precision']

print("Comparison with mask")
df_plot.plot(kind='bar', stacked=True, title='Comparison with mask', figsize=(10,10));
Comparison with mask
Here is the link to the code snippet and the dataset used.