Tensorflow - Using the Model

Dec 29 2019


This is all taken from sentdex's excellent Tensorflow tutorial series. Check it all out here:
https://www.youtube.com/playlist?list=PLQVvvaa0QuDfhTox0AjmQ6tvTgMBZBEXN


Now we can set up a file that uses the model we created to predict an output for a given input.
I ended up going with a model that uses 3 Conv2D layers with 96 nodes per layer. I also changed the number of epochs to 7 after some trial and error; this seemed like a reasonable trade-off between acc and val_acc before things started to overfit. The val_acc and val_loss aren't great, but they're not terrible either given the number of tests I ran.
The model is found below:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import TensorBoard
import pickle
import time
import numpy as np

x = np.asarray(pickle.load(open("x.pickle", "rb")))
y = np.asarray(pickle.load(open("y.pickle", "rb")))

x = x/255.0

dense_layers = [0]
layer_sizes = [96]
conv_layers = [3]
dense_layer_size = 512

for dense_layer in dense_layers:
    for layer_size in layer_sizes:
        for conv_layer in conv_layers:
            NAME = f"96-layer_size-{int(time.time())}"
            tensorboard = TensorBoard(log_dir=f"logs\\{NAME}")

            model = Sequential()
            model.add(Conv2D(layer_size, (3,3), input_shape=x.shape[1:]))
            model.add(Activation("relu"))
            model.add(MaxPooling2D(pool_size=(2,2)))

            for l in range(conv_layer - 1):
                model.add(Conv2D(layer_size, (3,3)))
                model.add(Activation("relu"))
                model.add(MaxPooling2D(pool_size=(2,2)))

            model.add(Flatten())  # This converts our 3D feature maps to 1D feature vectors

            for l in range(dense_layer):
                model.add(Dense(dense_layer_size))
                model.add(Activation("relu"))
                model.add(Dropout(0.2))

            model.add(Dense(1))
            model.add(Activation("sigmoid"))

            model.compile(loss="binary_crossentropy",
                          optimizer="adam",
                          metrics=["accuracy"])

            model.fit(x, y, batch_size=32, epochs=7, validation_split=0.15, callbacks=[tensorboard])  # 7 epochs appears best for 0-dense-3-conv-96-nodes
            # Dropout doesn't appear to make any difference with one 512 Dense layer added.

            model.save("96x3-CNN.model")


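As an aside: the 7-epoch cut-off came from manually watching acc vs val_acc. An optional alternative (not part of the original tutorial, just a sketch) is to let Keras stop training on its own with the EarlyStopping callback, swapped into the fit call above:

from tensorflow.keras.callbacks import EarlyStopping

# Sketch only: stop once val_loss stops improving for a couple of epochs
# and roll back to the best weights seen, instead of hand-picking 7 epochs.
early_stop = EarlyStopping(monitor="val_loss", patience=2, restore_best_weights=True)

model.fit(x, y, batch_size=32, epochs=20, validation_split=0.15,
          callbacks=[tensorboard, early_stop])
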
Note the model.save("96x3-CNN.model") call. We can now load that saved model in a completely separate file to test an image:

import cv2
import tensorflow as tf

CATEGORIES = ['Dog', 'Cat']

def prepare(filepath):
    IMG_SIZE = 50
    img_array = cv2.imread(filepath, cv2.IMREAD_GRAYSCALE)
    img_array = img_array/255.0
    new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 1)

model = tf.keras.models.load_model("96x3-CNN.model")
prediction = model.predict([prepare("dog.jpg")])
print(CATEGORIES[int(round(prediction[0][0]))])


This outputs "Dog". The dog.jpg file is sitting in the same directory as all these other files. CATEGORIES is only there to turn the result into a readable string: the prediction itself is a number between 0 and 1 from the sigmoid output, where a value near 0 means the model is fairly sure the image is a dog and a value near 1 means it's fairly sure it's a cat. I'm then using round and int on that value to pick the appropriate index in the CATEGORIES list.
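
If you want to see the raw sigmoid value rather than just the rounded label, a small loop like this works. The filenames here are just placeholders for whatever test images you have in the directory:

# Print the raw sigmoid output alongside the rounded label.
# "dog.jpg" and "cat.jpg" are placeholder filenames, use your own test images.
for filename in ["dog.jpg", "cat.jpg"]:
    prediction = model.predict(prepare(filename))
    score = float(prediction[0][0])        # sigmoid output between 0 and 1
    label = CATEGORIES[int(round(score))]  # ~0 -> 'Dog', ~1 -> 'Cat'
    print(f"{filename}: {score:.3f} -> {label}")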