Convolutional Neural Networks (CNNs) for Images

Keras Basics


Published Nov 17 2025



Keras · Neural Networks · Python · TensorFlow

CNNs are the foundation of modern computer vision.


They are used for:

  • Image classification
  • Object detection
  • Image segmentation
  • Face recognition
  • OCR
  • Medical imaging

In this section you’ll learn:

  1. What convolutional layers do (practically)
  2. How to load and preprocess image datasets
  3. How to build CNN architectures
  4. How to train, evaluate, and predict
  5. How to improve CNN performance

We’ll use CIFAR-10 — a standard small image dataset built into Keras.






Load the CIFAR-10 Dataset

CIFAR-10 contains:

  • 50,000 training images
  • 10,000 test images
  • 10 classes (airplane, car, bird, etc.)
  • Images are 32×32×3 (RGB)

Load dataset

from tensorflow.keras import datasets

(x_train, y_train), (x_test, y_test) = datasets.cifar10.load_data()

Inspect shapes

print(x_train.shape)
# (50000, 32, 32, 3)
print(y_train.shape)
# (50000, 1)

Note: labels have shape (batch, 1) and contain integer class indices rather than one-hot vectors, which is why we'll compile with sparse_categorical_crossentropy later.
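
For readable output later on, it helps to keep the class names next to the integer labels. CIFAR-10's classes, in label order, are:

class_names = [
    "airplane", "automobile", "bird", "cat", "deer",
    "dog", "frog", "horse", "ship", "truck",
]
print(class_names[int(y_train[0][0])])  # name of the first training label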






Preprocess the Data

Normalise pixel values (0–1)

x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

No flattening — CNNs operate on multidimensional image tensors.
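
A quick sanity check confirms the scaling:

print(x_train.dtype)                 # float32
print(x_train.min(), x_train.max())  # 0.0 1.0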






Build a Simple CNN

A typical small CNN has:

  • Conv → ReLU → Pool
  • Conv → ReLU → Pool
  • Flatten
  • Dense layers
  • Softmax output

Build model

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(32, (3,3), activation='relu', input_shape=(32,32,3)),
    layers.MaxPooling2D((2,2)),

    layers.Conv2D(64, (3,3), activation='relu'),
    layers.MaxPooling2D((2,2)),

    layers.Conv2D(64, (3,3), activation='relu'),

    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax'),
])

Explanation:

  • Conv2D(32 filters) → learns low-level patterns
  • Pooling → reduces spatial size
  • Conv2D(64 filters) → learns more complex patterns
  • Dense layers → final classification
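
To see how each layer transforms the tensor (and how many parameters it adds), print a summary:

model.summary()
# Valid-padding convolutions shrink 32×32 to 30×30, and each pooling layer halves it:
# 30×30×32 → 15×15×32 → 13×13×64 → 6×6×64 → 4×4×64 → Flatten (1024) → 64 → 10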





Compile the CNN

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"]
)

This is identical to earlier multi-class tasks; the loss and metrics don't change.
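
sparse_categorical_crossentropy matches our integer labels. If you one-hot encode the labels instead, switch to categorical_crossentropy; a minimal sketch:

from tensorflow.keras.utils import to_categorical

y_train_onehot = to_categorical(y_train, num_classes=10)  # shape (50000, 10)

model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy"]
)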






Train the CNN

history = model.fit(
    x_train, y_train,
    epochs=10,
    batch_size=64,
    validation_split=0.1
)

CNNs require more compute than MLPs but still train quickly on CIFAR-10.
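
It's worth plotting the history object to spot overfitting (training accuracy climbing while validation accuracy stalls):

import matplotlib.pyplot as plt

plt.plot(history.history["accuracy"], label="train")
plt.plot(history.history["val_accuracy"], label="validation")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()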






Evaluate

test_loss, test_acc = model.evaluate(x_test, y_test)
print("Test accuracy:", test_acc)

Typical accuracy:

  • Simple CNN → 65–72% (we'll build better ones later)





Predicting Images

import numpy as np

pred = model.predict(x_test[:5])
pred_classes = np.argmax(pred, axis=1)
print(pred_classes)
print(y_test[:5].flatten())
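
Using the class_names list defined earlier makes the output readable:

for p, t in zip(pred_classes, y_test[:5].flatten()):
    print(f"predicted: {class_names[p]:<10} actual: {class_names[t]}")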





Visualising Predictions

import matplotlib.pyplot as plt

plt.imshow(x_test[0])
plt.title(f"Predicted: {pred_classes[0]}, Actual: {y_test[0][0]}")
plt.show()





Improving CNNs (Recommended Add-Ons)

Much of a CNN's performance comes from architecture and training choices.
The techniques below commonly give significant improvements.


A) Add Batch Normalisation

Helps training stability and speed.

layers.Conv2D(32, 3, activation='relu'),
layers.BatchNormalization(),
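
A common alternative pattern (a stylistic choice, not a requirement) applies batch normalisation before the activation:

layers.Conv2D(32, 3, padding='same'),
layers.BatchNormalization(),
layers.Activation('relu'),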

B) Add Dropout to Reduce Overfitting

layers.Dropout(0.3)
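
Dropout typically goes after pooling layers or dense layers, e.g.:

layers.MaxPooling2D(),
layers.Dropout(0.3),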

C) Use Data Augmentation

Helps generalisation:

data_augmentation = keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
])

Use it as the first block of the model. These augmentation layers are only active during training; at inference they pass images through unchanged:

model = keras.Sequential([
    data_augmentation,
    ...
])


D) Increase Model Depth

A deeper CNN usually performs better, up to a point; for example, adding a wider convolutional block:

layers.Conv2D(128, 3, activation='relu')

E) Use GlobalAveragePooling Instead of Flatten

Modern CNNs often end like this. GlobalAveragePooling2D averages each feature map down to a single value, so it needs far fewer parameters than Flatten and is less prone to overfitting:

layers.GlobalAveragePooling2D(),
layers.Dense(10, activation='softmax')

Improved CNN Example

model = keras.Sequential([
    data_augmentation,

    layers.Conv2D(32, 3, activation='relu', padding='same'),
    layers.BatchNormalization(),
    layers.Conv2D(32, 3, activation='relu', padding='same'),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),

    layers.Conv2D(64, 3, activation='relu', padding='same'),
    layers.BatchNormalization(),
    layers.Conv2D(64, 3, activation='relu', padding='same'),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),

    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax'),
])

With enough training (roughly 30+ epochs), this architecture can reach 80–85% test accuracy.






CNN Training Tips

  • Use larger batch sizes (64 or 128): good for GPU training.
  • Train for more epochs: 15–30 usually improves results.
  • Add regularisation if overfitting: dropout, batch normalisation, data augmentation.
  • Use learning rate schedules such as ReduceLROnPlateau, shown below.
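
A minimal sketch of wiring ReduceLROnPlateau into training (the monitor, factor, and patience values here are illustrative, not prescriptive):

reduce_lr = keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, patience=3
)

model.fit(
    x_train, y_train,
    epochs=30,
    batch_size=64,
    validation_split=0.1,
    callbacks=[reduce_lr]
)

This halves the learning rate whenever validation loss hasn't improved for three epochs.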





Full Working Simple CNN Script

from tensorflow.keras import datasets
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np

# Load data
(x_train, y_train), (x_test, y_test) = datasets.cifar10.load_data()

# Normalise
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# Build model
model = keras.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(32,32,3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Compile
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"]
)

# Train
model.fit(x_train, y_train, epochs=10, batch_size=64, validation_split=0.1)

# Evaluate
model.evaluate(x_test, y_test)

# Predict example
pred = model.predict(x_test[:1])
print("Predicted:", np.argmax(pred))
print("True:", y_test[0][0])
