Keras for Regression
Keras Basics
Published Nov 17 2025
Regression models predict continuous numeric values.
Examples:
- Predicting house prices
- Predicting temperature
- Forecasting sales
- Predicting age
- Predicting a score or rating
For this section, we will use the California Housing dataset, a real tabular dataset commonly used in ML.
We will cover:
- Loading the dataset
- Normalising the inputs
- Building a regression network
- Training it
- Evaluating it (MAE/MSE/RMSE)
- Predicting new values
Load the California Housing Dataset
The dataset comes from scikit-learn.
Check shapes:
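The loading and shape checks can be sketched as follows (scikit-learn downloads and caches the dataset on first use):

```python
from sklearn.datasets import fetch_california_housing

# Downloads the data (~1 MB) on first call, then reads from the local cache
housing = fetch_california_housing()
X, y = housing.data, housing.target

print(X.shape)  # (20640, 8)
print(y.shape)  # (20640,)
print(housing.feature_names)
```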
There are 8 numerical features per house (median income, rooms, population, etc.).
Train/Test Split
We’ll split the dataset manually:
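One way to do a manual 80/20 split (the ratio and the seed are arbitrary choices, not requirements):

```python
import numpy as np
from sklearn.datasets import fetch_california_housing

housing = fetch_california_housing()
X, y = housing.data, housing.target

# Shuffle once with a fixed seed, then hold out the last 20% for testing
rng = np.random.default_rng(42)
indices = rng.permutation(len(X))
split = int(0.8 * len(X))

x_train, y_train = X[indices[:split]], y[indices[:split]]
x_test, y_test = X[indices[split:]], y[indices[split:]]

print(x_train.shape, x_test.shape)  # (16512, 8) (4128, 8)
```

Shuffling before splitting matters here because the raw dataset is ordered geographically.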
Normalise the Inputs
Neural networks train far more stably when input features are on a similar scale, so we normalise them first.
Keras has a built-in preprocessing layer:
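A sketch of the `Normalization` layer in isolation, using random stand-in data with eight features (substitute the real `x_train` from the split above):

```python
import numpy as np
from tensorflow import keras

# Stand-in for x_train: 8 features with arbitrary means and scales
x_train = np.random.default_rng(0).normal(
    loc=5.0, scale=3.0, size=(256, 8)
).astype("float32")

normalizer = keras.layers.Normalization()
normalizer.adapt(x_train)  # compute per-feature mean and variance from training data

normalized = normalizer(x_train).numpy()
print(normalized.mean(axis=0).round(3))  # ~0 for every feature
print(normalized.std(axis=0).round(3))   # ~1 for every feature
```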
This layer:
- Computes mean and variance on training data
- Applies normalisation consistently during training and inference
Build a Regression Model
Simple MLP for regression:
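One reasonable architecture; the two hidden layers of 64 units are an illustrative choice, not a prescription:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(8,)),             # 8 housing features per sample
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                     # no activation: raw continuous output
])
model.summary()
```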
Important details:
- No activation on the final layer
- Regression networks output raw, unbounded values
Compile the Model
Regression uses:
- Loss: mean squared error (MSE)
- Metrics: mean absolute error (MAE)
MAE is the easiest to interpret: its units are the same as the target's.
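A compile call matching those choices (Adam is used here as a common default optimizer; the model is the same architecture as above):

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(8,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),
])

model.compile(
    optimizer="adam",    # a common default, not the only valid choice
    loss="mse",          # mean squared error drives training
    metrics=["mae"],     # mean absolute error for human-readable progress
)
```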
Train the Model
This dataset trains very quickly.
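A training sketch; the synthetic arrays below are stand-ins for the normalised `x_train` / `y_train` from earlier, and the epoch count and batch size are illustrative:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Stand-in data shaped like the normalised training set
rng = np.random.default_rng(0)
x_train = rng.normal(size=(1000, 8)).astype("float32")
y_train = rng.normal(size=(1000,)).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(8,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

history = model.fit(
    x_train, y_train,
    validation_split=0.2,   # hold out 20% of training data to monitor overfitting
    epochs=20,
    batch_size=32,
    verbose=0,
)
print(min(history.history["val_loss"]))
```

The returned `history` object records per-epoch losses and metrics, which we plot later.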
Evaluate on Test Set
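`model.evaluate` returns the compiled loss (MSE) plus any metrics (MAE); RMSE is just the square root of the MSE. A sketch, again with synthetic stand-ins for the held-out `x_test` / `y_test`:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
x_test = rng.normal(size=(200, 8)).astype("float32")
y_test = rng.normal(size=(200,)).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(8,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

test_mse, test_mae = model.evaluate(x_test, y_test, verbose=0)
test_rmse = test_mse ** 0.5   # RMSE: same units as the target
print(f"MAE={test_mae:.3f}  MSE={test_mse:.3f}  RMSE={test_rmse:.3f}")
```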
Typical results:
- MAE ≈ 0.40–0.55
- RMSE ≈ 0.55–0.70
This means predictions are off by about 0.40–0.55 in the target's units. Since the target is measured in units of $100,000, an MAE of ≈ 0.5 corresponds to an average error of roughly $50,000.
Making Predictions
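`model.predict` takes a batch of samples and returns one continuous value per sample. A sketch with stand-in inputs (in practice, new samples must be normalised the same way as the training data):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(8,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),
])

# Stand-in for a few unseen, already-normalised samples
new_samples = np.random.default_rng(0).normal(size=(3, 8)).astype("float32")

predictions = model.predict(new_samples, verbose=0)
print(predictions.shape)  # (3, 1): one predicted value per sample
```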
Visualising Loss Curves
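The `history` object returned by `fit` holds per-epoch training and validation losses, which can be plotted with matplotlib (assumed installed; the data here is again a synthetic stand-in):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; drop this line when working interactively
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 8)).astype("float32")
y = rng.normal(size=(500,)).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(8,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
history = model.fit(x, y, validation_split=0.2, epochs=10, batch_size=32, verbose=0)

plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("Epoch")
plt.ylabel("MSE")
plt.legend()
plt.savefig("loss_curves.png")
```

A widening gap between the two curves is the classic sign of overfitting.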
Improving Regression Models (Quick Tips)
Add more layers / units - Regression tasks often need deeper networks.
Add regularisation
Use Dropout
Use learning rate schedules
Train for more epochs - But watch for overfitting.
Try the Functional API - Useful for complex tabular models.
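A sketch combining several of these tips: wider layers, L2 regularisation, Dropout, and an exponential learning-rate schedule. All the specific values here are illustrative starting points, not tuned settings:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Decay the learning rate by 10% every 1000 optimizer steps
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9,
)

model = keras.Sequential([
    keras.Input(shape=(8,)),
    layers.Dense(128, activation="relu",
                 kernel_regularizer=keras.regularizers.l2(1e-4)),
    layers.Dropout(0.2),
    layers.Dense(128, activation="relu",
                 kernel_regularizer=keras.regularizers.l2(1e-4)),
    layers.Dropout(0.2),
    layers.Dense(1),
])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=lr_schedule),
    loss="mse",
    metrics=["mae"],
)
model.summary()
```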