Imagine a world where technology doesn’t just respond to commands but understands its surroundings like a human would. That’s the magic of object detection. This cutting-edge technology is revolutionizing how machines perceive and interact with the world, making it a cornerstone of modern innovation.
From autonomous vehicles that navigate busy streets by recognizing other cars, pedestrians, and road signs, to smart home devices that can differentiate between family members and strangers, object detection is everywhere. It’s the unseen hero behind a range of applications that impact our daily lives, making technology smarter and more intuitive.
In this blog post, we’ll journey through the fascinating world of object detection. We’ll uncover what it is, why it’s crucial for the technologies we use every day, and how it’s shaping the future. Plus, by the end of this article, you’ll be able to create your own object detection app using Python. Get ready to see how this remarkable technology is not just enhancing, but transforming the way we interact with the digital world.
Object detection is a powerful computer vision technology that allows machines to not only identify objects in images or videos but also find their exact location. Think of it as a system that can spot a dog in a photo or pick out a traffic sign in a video. It’s like giving machines the ability to see and understand what’s around them.
In simple terms, object detection involves two main tasks: figuring out what an object is and pinpointing where it is. This is done by drawing bounding boxes around objects and labeling them. For instance, a security camera using object detection might recognize and mark a person, a car, and a bicycle in its feed. This technology is crucial in many areas, from self-driving cars that need to recognize and react to their environment, to smart home systems that differentiate between different people or objects. It makes technology smarter and more capable of interacting with the real world.
While object detection, recognition, and classification are all related, they do different things:
Object Detection: Finds and identifies objects in an image or video, and shows where they are by drawing boxes around them. For example, it would detect all the cars, bicycles, and people in a street scene.
Object Recognition: Takes it a step further by identifying exactly what each object is. For instance, it can recognize that a dog is a Labrador. Recognition often comes after detection, adding more detail about each object.
Object Classification: Categorizes objects into broad groups. It tells you if something is a cat, a dog, or a car, but doesn’t locate it within the image. Classification usually happens before detection and recognition.
Understanding these differences helps clarify how they all work together to make technology more effective and responsive.
If you’re interested in developing an object detection model, there are a few essential prerequisites you’ll need to get started. Let’s break them down into two main areas: programming basics and the tools you’ll use.
Python is a versatile and beginner-friendly programming language that’s widely used in the field of machine learning and computer vision. It’s popular because of its simple syntax, readability, and a vast ecosystem of libraries and frameworks. If you’re new to Python, you’ll want to get comfortable with basic programming concepts like variables, loops, and functions. These fundamentals will help you navigate and implement object detection models more effectively.
Python’s popularity in machine learning and computer vision comes from its ease of use and the extensive support it offers through various libraries and frameworks. Its rich ecosystem allows you to focus on solving problems rather than dealing with complex programming details. Libraries such as TensorFlow and Keras provide powerful tools for building and training models, while OpenCV helps with image processing tasks. Python’s community support also means that you’ll find plenty of resources and tutorials to help you along the way.
To build an object detection model, you’ll need to get familiar with a few key libraries and tools:
TensorFlow: An end-to-end machine learning framework used to build, train, and run models.
Keras: A high-level API (bundled with TensorFlow) that simplifies defining and training models.
OpenCV: A computer vision library for loading, processing, and annotating images.
NumPy: The fundamental package for numerical operations on image arrays.
Here’s a step-by-step guide to installing the necessary libraries:
1. Install Python: Download and install Python from python.org if it isn’t already on your system.
2. Verify the Installation: Confirm Python is available by running:
python --version
3. Install Pip: Pip is Python’s package manager and usually ships with Python. Check it with:
pip --version
If it’s not installed, follow the official pip installation instructions.
4. Install Required Libraries:
pip install tensorflow keras opencv-python numpy
To maintain an organized and conflict-free workspace, it’s recommended to set up a virtual environment.
1. Check for venv: Python 3 ships with the built-in venv module. If your installation lacks the venv module, you can install virtualenv by running:
pip install virtualenv
2. Create a Virtual Environment:
python -m venv myenv
Replace myenv with your preferred name for the environment.
3. Activate the Virtual Environment:
On Windows:
myenv\Scripts\activate
On macOS/Linux:
source myenv/bin/activate
4. Install Required Libraries in the Virtual Environment:
pip install tensorflow keras opencv-python numpy
5. Deactivate the Virtual Environment when you’re done:
deactivate
By following these steps, you’ll set up a solid foundation for developing your object detection model, ensuring you have all the necessary tools and a clean, organized workspace.
To build an effective object detection model, choosing and preparing the right dataset is crucial. Think of the dataset as the material your model learns from. Here’s a step-by-step guide to selecting and annotating a dataset.
Selecting the right dataset is the first step in creating a successful object detection model. Popular, freely available options include COCO (Common Objects in Context), Pascal VOC, and Open Images, all of which ship with bounding-box annotations for everyday object categories.
Once you have your dataset, you might need to add annotations if they’re not already included. Annotations are labels that indicate what’s in the image and where it’s located. A widely used open-source annotation tool is LabelImg, which lets you draw bounding boxes by hand and export them in Pascal VOC (XML) or YOLO format. You can install it with:
pip install labelImg
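Once annotated, you’ll want to read those labels back into Python. Here’s a minimal sketch of parsing a Pascal VOC XML file (LabelImg’s default export format) with the standard library; the function name and the example path are just illustrative assumptions:

import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_path):
    # Parse a Pascal VOC XML file into (label, xmin, ymin, xmax, ymax) tuples
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall('object'):
        label = obj.find('name').text
        bbox = obj.find('bndbox')
        boxes.append((
            label,
            int(bbox.find('xmin').text),
            int(bbox.find('ymin').text),
            int(bbox.find('xmax').text),
            int(bbox.find('ymax').text),
        ))
    return boxes

# Example call (path is hypothetical):
# print(parse_voc_annotation('annotations/your_image.xml'))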
By carefully choosing and annotating your dataset, you provide your object detection model with the high-quality data it needs to learn and perform well. This solid foundation will help your model accurately identify and locate objects.
Before you train your object detection model, you need to prepare your data. This involves augmenting the data to make it more diverse and splitting it into different sets for training and evaluation. Here’s a detailed look at these steps:
Data augmentation is a technique used to increase the variety of your training data by applying different transformations. This helps your model learn to handle various scenarios and improves its performance. Common techniques include horizontal and vertical flips, random rotations and crops, brightness and contrast shifts, and adding small amounts of noise. For object detection, keep in mind that geometric transformations must be applied to the bounding boxes as well as the pixels. A short example follows below.
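Here’s a minimal sketch of image-level augmentation using tf.image. It assumes images are already decoded float tensors in the [0, 1] range and, for brevity, omits the matching bounding-box adjustments a real detection pipeline would need:

import tensorflow as tf

def augment(image):
    # Randomly flip left/right (bounding boxes would need mirroring too)
    image = tf.image.random_flip_left_right(image)
    # Jitter brightness and contrast
    image = tf.image.random_brightness(image, max_delta=0.2)
    image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
    # Keep pixel values in a valid range
    return tf.clip_by_value(image, 0.0, 1.0)

# Applied lazily to a tf.data pipeline:
# dataset = dataset.map(augment)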
Once you have augmented your data, the next step is to prepare it for training. This involves splitting the dataset and formatting it for use with machine learning libraries like TensorFlow/Keras.
To evaluate the performance of your model effectively, you need to divide your data into three main sets: a training set (typically around 80%) the model learns from, a validation set (around 10%) used to tune hyperparameters and monitor overfitting during training, and a test set (the remaining 10%) reserved for a final, unbiased evaluation.
You’ll need to load and preprocess your data so it’s ready for training with TensorFlow/Keras. Here’s an example of how you might do this in Python:
import tensorflow as tf

# Example function to load and preprocess data
def load_and_preprocess_data(dataset_path):
    # Load the dataset (shuffle=False keeps the file order stable,
    # so the take/skip splits below don't overlap between epochs)
    dataset = tf.data.Dataset.list_files(dataset_path + '/*.jpg', shuffle=False)

    # Define a function to parse and preprocess each image
    def preprocess_image(file_path):
        # Load image from file
        image = tf.io.read_file(file_path)
        image = tf.image.decode_jpeg(image, channels=3)
        # Resize image to desired size
        image = tf.image.resize(image, [224, 224])
        # Normalize pixel values to [0, 1]
        image = image / 255.0
        return image

    # Apply preprocessing to each image
    dataset = dataset.map(preprocess_image)

    # Split dataset into training, validation, and test sets
    train_size = int(0.8 * len(dataset))
    val_size = int(0.1 * len(dataset))
    test_size = len(dataset) - train_size - val_size

    train_data = dataset.take(train_size)
    val_data = dataset.skip(train_size).take(val_size)
    test_data = dataset.skip(train_size + val_size)

    return train_data, val_data, test_data
Let’s break down what this function does:
1. Import TensorFlow: import tensorflow as tf brings in the library whose data and image utilities the function relies on.
2. List the Image Files:
dataset = tf.data.Dataset.list_files(dataset_path + '/*.jpg', shuffle=False)
This line creates a dataset consisting of file paths for all .jpg images in the specified directory. The dataset_path variable should point to the location of your images, and shuffle=False keeps the file order deterministic so the later splits don’t overlap.
3. Define a Preprocessing Function:
def preprocess_image(file_path):
    image = tf.io.read_file(file_path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [224, 224])
    image = image / 255.0
    return image
This function performs several steps:
tf.io.read_file(file_path) loads the raw image data from the file.
tf.image.decode_jpeg(image, channels=3) converts the raw data into an image tensor with three color channels (RGB).
tf.image.resize(image, [224, 224]) changes the image dimensions to 224×224 pixels, which is a common size for model input.
image / 255.0 scales pixel values to a range of 0 to 1, which helps the model learn better.
4. Apply Preprocessing:
dataset = dataset.map(preprocess_image)
The map function applies the preprocess_image function to each image in the dataset, transforming all images as specified.
5. Split the Dataset:
train_size = int(0.8 * len(dataset))
val_size = int(0.1 * len(dataset))
test_size = len(dataset) - train_size - val_size
train_data = dataset.take(train_size)
val_data = dataset.skip(train_size).take(val_size)
test_data = dataset.skip(train_size + val_size)
take(train_size) retrieves the first portion of the dataset for training.
skip(train_size).take(val_size) gets the next portion for validation.
skip(train_size + val_size) gets the remaining data for testing.
6. Return Data: The function returns the three datasets: train_data, val_data, and test_data, which are now ready for training, validating, and testing your model.
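As a quick usage sketch (the images directory name here is just an assumption for this tutorial), you would call the function like this:

# Hypothetical directory containing your .jpg images
train_data, val_data, test_data = load_and_preprocess_data('images')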
By following these steps, you ensure that your object detection model is trained on a diverse and well-prepared dataset, improving its ability to detect and classify objects accurately.
Choosing the right architecture for your object detection model is crucial as it determines how well your model will perform. Here’s a more detailed explanation of some popular architectures used in object detection:
Different object detection architectures have their unique strengths and weaknesses. Here’s an in-depth look at three widely used architectures:
YOLO (You Only Look Once)
Overview: YOLO treats detection as a single regression problem: one network looks at the full image once and predicts all bounding boxes and class probabilities in a single forward pass.
Strengths: Very fast, making real-time detection practical; it reasons over the whole image, which helps reduce false positives from background.
Weaknesses: Less accurate on small or densely packed objects than two-stage detectors.
Applications: Real-time video analytics, robotics, and other latency-sensitive systems.
SSD (Single Shot MultiBox Detector)
Overview: SSD also detects objects in a single pass, but predicts boxes from feature maps at several scales, which helps it handle objects of different sizes.
Strengths: A good balance between speed and accuracy; simpler to deploy than two-stage pipelines.
Weaknesses: Accuracy on very small objects still lags behind slower, region-based methods.
Applications: Mobile and embedded deployments, such as the SSD MobileNet model used later in this tutorial.
Faster R-CNN
Overview: A two-stage detector: a region proposal network first suggests likely object regions, then a second stage classifies each region and refines its box.
Strengths: High accuracy, including on small objects, thanks to the dedicated proposal stage.
Weaknesses: Noticeably slower than single-shot detectors, so it’s less suited to real-time use.
Applications: Accuracy-critical tasks such as medical imaging or detailed scene analysis.
Now that we’ve installed the required libraries and set up our virtual environment, let’s import everything we need to build our object detection model.
Create a Python script, object_detection_model.py, and import the required libraries:
import tensorflow as tf
import numpy as np
import os
import urllib.request
import sys
import tarfile
import zipfile
import pathlib
import pandas as pd
import cv2
import matplotlib.pyplot as plt
from collections import defaultdict
from io import StringIO
from IPython.display import display
from IPython.display import Image
from sklearn.model_selection import train_test_split
Explanation:
tensorflow: For building and running the object detection model.
numpy: For numerical operations, particularly with image arrays.
cv2: OpenCV library for image processing.
matplotlib.pyplot: For visualizing images and results.
urllib.request, tarfile, pathlib: For downloading and extracting the pre-trained model files.

# Define the model URL and download it
MODEL_NAME = 'ssd_mobilenet_v2_fpnlite_320x320'
BASE_URL = 'http://download.tensorflow.org/models/object_detection/'
MODEL_FILE = MODEL_NAME + '.tar.gz'
PATH_TO_MODEL_DIR = 'models'
PATH_TO_FROZEN_GRAPH = PATH_TO_MODEL_DIR + '/' + MODEL_NAME + '/frozen_inference_graph.pb'

# Download and extract the model
if not pathlib.Path(PATH_TO_MODEL_DIR).exists():
    pathlib.Path(PATH_TO_MODEL_DIR).mkdir(parents=True, exist_ok=True)

def download_model():
    # Download the compressed model archive
    urllib.request.urlretrieve(BASE_URL + MODEL_FILE, MODEL_FILE)
    # Extract only the frozen inference graph from the archive
    tar_file = tarfile.open(MODEL_FILE)
    for file in tar_file.getmembers():
        file_name = os.path.basename(file.name)
        if 'frozen_inference_graph.pb' in file_name:
            tar_file.extract(file, PATH_TO_MODEL_DIR)

download_model()
Here’s a detailed breakdown of the provided code for downloading and preparing a pre-trained model:
MODEL_NAME = 'ssd_mobilenet_v2_fpnlite_320x320'
BASE_URL = 'http://download.tensorflow.org/models/object_detection/'
MODEL_FILE = MODEL_NAME + '.tar.gz'
PATH_TO_MODEL_DIR = 'models'
PATH_TO_FROZEN_GRAPH = PATH_TO_MODEL_DIR + '/' + MODEL_NAME + '/frozen_inference_graph.pb'
MODEL_NAME: Specifies the exact model you want to use, in this case, the ssd_mobilenet_v2_fpnlite_320x320.
BASE_URL: The base URL where the TensorFlow models are hosted. The model file will be appended to this base URL to form the complete URL for downloading.
MODEL_FILE: The name of the file to be downloaded, constructed by appending .tar.gz to the MODEL_NAME. This file is a compressed archive containing the model.
PATH_TO_MODEL_DIR: Local directory where the model files will be stored after download and extraction.
PATH_TO_FROZEN_GRAPH: The specific path to the model file (frozen_inference_graph.pb) within the extracted directory structure. This file contains the TensorFlow graph for object detection.

if not pathlib.Path(PATH_TO_MODEL_DIR).exists():
    pathlib.Path(PATH_TO_MODEL_DIR).mkdir(parents=True, exist_ok=True)
This check ensures the local directory (models) exists. If it does not, it creates the directory.
pathlib.Path(PATH_TO_MODEL_DIR).exists(): Checks if the directory exists.
pathlib.Path(PATH_TO_MODEL_DIR).mkdir(parents=True, exist_ok=True): Creates the directory if it does not exist. parents=True allows the creation of parent directories if needed, and exist_ok=True prevents errors if the directory already exists.

The download_model Function

def download_model():
    urllib.request.urlretrieve(BASE_URL + MODEL_FILE, MODEL_FILE)
    tar_file = tarfile.open(MODEL_FILE)
    for file in tar_file.getmembers():
        file_name = os.path.basename(file.name)
        if 'frozen_inference_graph.pb' in file_name:
            tar_file.extract(file, PATH_TO_MODEL_DIR)
urllib.request.urlretrieve(BASE_URL + MODEL_FILE, MODEL_FILE): Downloads the model file from the constructed URL (BASE_URL + MODEL_FILE) and saves it locally as MODEL_FILE.
tar_file = tarfile.open(MODEL_FILE): Opens the downloaded .tar.gz file for reading.
for file in tar_file.getmembers(): Iterates over each file in the tar archive.
file_name = os.path.basename(file.name): Gets the base name of the file from the tar archive.
if 'frozen_inference_graph.pb' in file_name: Checks if the current file is the one you need (i.e., frozen_inference_graph.pb).
tar_file.extract(file, PATH_TO_MODEL_DIR): Extracts the model file into the specified directory (PATH_TO_MODEL_DIR).

Calling the download_model Function

download_model()

This calls the download_model function to start the downloading and extraction process. This code automates downloading a pre-trained model and preparing it for use.
Load the pre-trained model into your script:
The load_model Function

def load_model(model_name):
    base_path = pathlib.Path(PATH_TO_MODEL_DIR) / model_name
    model_dir = str(base_path)
    model_file = str(base_path / 'frozen_inference_graph.pb')
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        od_graph_def = tf.compat.v1.GraphDef()
        with tf.io.gfile.GFile(model_file, 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')
    return detection_graph
The provided code snippet is for loading a pre-trained TensorFlow model into your script. This involves reading the model’s graph definition and importing it into a TensorFlow Graph object. Here’s a step-by-step breakdown:
base_path: Constructs the path to the model directory by joining PATH_TO_MODEL_DIR and model_name. It uses pathlib.Path for better path handling. If PATH_TO_MODEL_DIR is 'models' and model_name is 'ssd_mobilenet_v2_fpnlite_320x320', then base_path would be models/ssd_mobilenet_v2_fpnlite_320x320.
model_dir: Converts base_path to a string. This is the directory where the model files are located.
model_file: Constructs the path to the model file (frozen_inference_graph.pb) within the model directory.
detection_graph: Creates a new TensorFlow Graph object. This will hold the model’s graph definition.
with detection_graph.as_default(): Sets the detection_graph as the default graph for operations within this block.
od_graph_def = tf.compat.v1.GraphDef(): Creates a new instance of GraphDef, which is used to hold the serialized graph definition. The compat.v1 module ensures compatibility with TensorFlow 1.x-style graphs.
with tf.io.gfile.GFile(model_file, 'rb') as fid: Opens the model file in binary read mode.
serialized_graph = fid.read(): Reads the contents of the file into a byte string.
od_graph_def.ParseFromString(serialized_graph): Parses the byte string into a GraphDef object. GraphDef is a protocol buffer that represents the TensorFlow computation graph.
tf.import_graph_def(od_graph_def, name=''): Imports the graph definition into the current default graph. The name='' argument specifies that no prefix should be added to the names of the nodes in the graph.
return detection_graph: Returns the loaded graph, which now contains the pre-trained model.

Finally, call the function to load the model:

detection_graph = load_model(MODEL_NAME)

This calls the load_model function with the MODEL_NAME to load the model into the detection_graph object.
Prepare your dataset. For simplicity, assume images are in a folder images/:
The load_image_into_numpy_array Function

def load_image_into_numpy_array(image_path):
    # Read the image with OpenCV and return it as a NumPy array
    return np.array(cv2.imread(image_path))

def load_image_into_numpy_array(image_path): Defines a function that takes one parameter: image_path, the file path to an image.
cv2.imread(image_path): Uses OpenCV’s imread function to read the image file specified by image_path. Note that OpenCV loads images in BGR channel order; if your model expects RGB, convert with cv2.cvtColor(image, cv2.COLOR_BGR2RGB).
np.array(...): Converts the image object returned by cv2.imread into a NumPy array. In NumPy, an image is represented as a 3D array where the dimensions are (height, width, channels).

Next, list the images you want to process and load them:

IMAGE_PATHS = ['images/your_image.jpg']
images = [load_image_into_numpy_array(path) for path in IMAGE_PATHS]

IMAGE_PATHS is a list containing the paths to the images you want to process; in this case, just one image located at 'images/your_image.jpg'. The list comprehension iterates over each path in IMAGE_PATHS, reads the image from the file, converts it into a NumPy array, and collects the results in images.
The detect_objects Function

def detect_objects(image_np):
    with detection_graph.as_default():
        # TF1-style graph execution; use the compat.v1 session under TensorFlow 2
        with tf.compat.v1.Session(graph=detection_graph) as sess:
            # Look up the model's input and output tensors by name
            image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
            boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
            scores = detection_graph.get_tensor_by_name('detection_scores:0')
            classes = detection_graph.get_tensor_by_name('detection_classes:0')
            num_detections = detection_graph.get_tensor_by_name('num_detections:0')
            # Run inference on a batch of one image
            (boxes, scores, classes, num_detections) = sess.run(
                [boxes, scores, classes, num_detections],
                feed_dict={image_tensor: np.expand_dims(image_np, axis=0)})
    return boxes, scores, classes, num_detections
def detect_objects(image_np): Defines a function detect_objects that takes image_np, a NumPy array representing the image, as input.
with detection_graph.as_default(): Makes the loaded graph the default for the enclosed operations. This is necessary because TensorFlow uses graphs to encapsulate operations.
with tf.compat.v1.Session(graph=detection_graph) as sess: Creates a session sess bound to detection_graph. This session is used to run operations.
The get_tensor_by_name calls look up the model’s input tensor (image_tensor:0) and its output tensors (detection_boxes:0, detection_scores:0, detection_classes:0, num_detections:0) by their names in the graph.
sess.run([...], feed_dict={image_tensor: np.expand_dims(image_np, axis=0)}) runs inference and fetches [boxes, scores, classes, num_detections].
np.expand_dims(image_np, axis=0): Adds an extra dimension to the image array to match the expected input shape of the model. The model expects a batch of images, so adding this dimension creates a batch of size 1.
return boxes, scores, classes, num_detections: Returns the raw detection results.

The visualize_boxes_on_image Function

def visualize_boxes_on_image(image_np, boxes, scores, classes, min_score_thresh=0.5):
    im_height, im_width, _ = image_np.shape
    for i in range(boxes.shape[1]):
        if scores[0][i] > min_score_thresh:
            box = tuple(boxes[0][i].tolist())
            # Box coordinates are normalized [ymin, xmin, ymax, xmax]; scale to pixels
            (left, right, top, bottom) = (box[1] * im_width, box[3] * im_width,
                                          box[0] * im_height, box[2] * im_height)
            cv2.rectangle(image_np, (int(left), int(top)), (int(right), int(bottom)), (0, 255, 0), 2)
            cv2.putText(image_np, f'Score: {scores[0][i]:.2f}', (int(left), int(top - 10)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    return image_np
def visualize_boxes_on_image(image_np, boxes, scores, classes, min_score_thresh=0.5): Defines a function visualize_boxes_on_image to draw bounding boxes and scores on the image.
im_height, im_width, _ = image_np.shape: Reads the image dimensions; _ is a placeholder for the number of channels (e.g., RGB).
for i in range(boxes.shape[1]): Loops over every candidate detection in the boxes tensor.
if scores[0][i] > min_score_thresh: Keeps only detections whose confidence exceeds the threshold (min_score_thresh).
box = tuple(boxes[0][i].tolist()): Extracts the box coordinates, which are normalized to [0, 1] and need to be scaled to image dimensions.
(left, right, top, bottom) = (box[1] * im_width, box[3] * im_width, box[0] * im_height, box[2] * im_height): Converts the normalized coordinates to pixel values.
cv2.rectangle(...): Draws the box; (0, 255, 0) specifies the color (green), and 2 specifies the line thickness.
cv2.putText(...): Writes the confidence score just above the box.
return image_np: Returns the annotated image.

Finally, run detection and visualization over your images:

for image in images:
    boxes, scores, classes, num_detections = detect_objects(image)
    output_image = visualize_boxes_on_image(image, boxes, scores, classes)
    # Note: cv2 loads BGR; use cv2.cvtColor(output_image, cv2.COLOR_BGR2RGB) for true colors
    plt.imshow(output_image)
    plt.show()
for image in images: Iterates over each loaded image in the images list.
boxes, scores, classes, num_detections = detect_objects(image): Calls the detect_objects function to get detection results for the current image.
output_image = visualize_boxes_on_image(image, boxes, scores, classes): Calls the visualize_boxes_on_image function to draw bounding boxes and scores on the image.
plt.imshow(output_image) and plt.show(): Display the annotated result with Matplotlib.
For an image containing objects like cars and people, here’s how the output looks:
To determine how well your object detection model performs, it’s crucial to evaluate it using various performance metrics and by testing it on new data. Here’s a detailed explanation of the process:
Precision: The fraction of predicted detections that are correct, i.e. true positives divided by all positive predictions (true positives + false positives). High precision means few false alarms.
Recall: The fraction of actual objects the model finds, i.e. true positives divided by all ground-truth objects (true positives + false negatives). High recall means few missed objects.
F1-Score: The harmonic mean of precision and recall, 2 × (precision × recall) / (precision + recall), useful when you need a single number that balances both.
mAP (mean Average Precision): The standard object detection metric. For each class, average precision (AP) summarizes the precision-recall curve; mAP is the mean of AP over all classes, typically computed at one or more IoU (intersection over union) thresholds.
To understand the model’s performance on unseen data, you need to evaluate it with a new dataset that wasn’t used during training.
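As a rough sketch of how these metrics are grounded in box overlap, here’s a minimal IoU computation plus a naive precision/recall count. It assumes boxes in (xmin, ymin, xmax, ymax) pixel format and that each ground-truth box can be matched at most once; real mAP tooling also sorts detections by confidence:

def iou(box_a, box_b):
    # Boxes are (xmin, ymin, xmax, ymax)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def precision_recall(pred_boxes, gt_boxes, iou_thresh=0.5):
    matched = set()
    tp = 0
    for pred in pred_boxes:
        for i, gt in enumerate(gt_boxes):
            if i not in matched and iou(pred, gt) >= iou_thresh:
                matched.add(i)
                tp += 1
                break
    fp = len(pred_boxes) - tp
    fn = len(gt_boxes) - tp
    precision = tp / (tp + fp) if pred_boxes else 0.0
    recall = tp / (tp + fn) if gt_boxes else 0.0
    return precision, recall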
When developing object detection models, it’s essential to follow best practices to ensure your models perform well and efficiently. Here are detailed explanations for some of the most common challenges and their solutions, as well as tips for optimizing model performance.
For example, class imbalance (where some object categories appear far more often than others) can be tackled by oversampling rare classes with libraries like imbalanced-learn in Python, or by passing per-class weights to the model’s fit method so rare classes contribute more to the loss.

Recap of Key Points Covered: we walked through what object detection is, setting up Python and a virtual environment, selecting and annotating a dataset, augmenting and splitting your data, choosing an architecture, running inference with a pre-trained model, and evaluating the results.
To deepen your understanding of object detection, here are some recommended resources:
These resources offer in-depth knowledge and insights into deep learning, computer vision, and object detection, providing a solid foundation for further exploration and development in this field.
What’s the difference between object detection and classification?
Object detection identifies and locates objects within an image, providing bounding boxes and labels for each detected object. Classification, on the other hand, only determines the presence or absence of objects without locating them.
Which architecture should I choose: YOLO, SSD, or Faster R-CNN?
The choice depends on your requirements for speed and accuracy. YOLO is fast but less accurate for small objects, SSD offers a balance between speed and accuracy, and Faster R-CNN provides high accuracy but is slower.
How can I improve detection of small objects?
Use higher resolution images, multi-scale training, and feature pyramid networks (FPNs) to improve the detection of small objects. These methods help the model to better identify and localize small objects within the image; a small sketch of the multi-scale idea follows below.
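As a hedged illustration of multi-scale inference, this sketch runs the same detector over several input resolutions and collects the results. It reuses the detect_objects function defined earlier; the scale list is an arbitrary assumption, and a real pipeline would merge overlapping boxes afterwards:

import cv2

def detect_multi_scale(image_np, scales=(320, 640, 960)):
    all_results = []
    for size in scales:
        # Resize the image; normalized box coordinates remain comparable across scales
        resized = cv2.resize(image_np, (size, size))
        boxes, scores, classes, num = detect_objects(resized)
        all_results.append((boxes, scores, classes))
    # In practice, merge overlapping detections here, e.g. with non-max suppression
    return all_results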
What is model quantization and why is it useful?
Model quantization reduces the precision of model weights, typically converting them from 32-bit floating-point to 8-bit integers. This process reduces the model size and speeds up inference, making it more efficient for deployment.
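Here’s a minimal sketch of post-training quantization with TensorFlow Lite; the SavedModel path is a placeholder assumption:

import tensorflow as tf

# Convert a SavedModel (path is hypothetical) with default post-training quantization
converter = tf.lite.TFLiteConverter.from_saved_model('path/to/saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write the quantized model to disk
with open('model_quantized.tflite', 'wb') as f:
    f.write(tflite_model)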
How do I evaluate my object detection model?
Use metrics like precision, recall, F1-score, and mean Average Precision (mAP) to evaluate your model’s performance. Testing the model on new, unseen data and visualizing the results with tools like OpenCV can also provide insights into its accuracy and effectiveness.