AI

CH-2 – Image Processing – Part-2

Image processing encompasses image restoration, enhancement, and compression. It involves manipulating images that have already been captured or generated, distinct from computer graphics, which synthesizes images. Pixels in digital images have important relationships, including neighborhoods, adjacency, connectivity, regions, and boundaries. Two pixels are considered connected if they are adjacent and their values satisfy a similarity criterion, such as having similar grayscale values. Types of adjacency include 4-adjacency, 8-adjacency, and mixed (m-) adjacency, the last of which resolves ambiguities that can arise with 8-adjacency. Paths between pixels are defined according to the adjacency type used, giving 4-, 8-, or m-paths. A region is a connected set of pixels, and its boundary is the set of region pixels that have at least one neighbor outside the region. Edges, based on intensity discontinuities, are local concepts, whereas boundaries are global and form closed paths.
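
The following is a minimal sketch, in Python, of the 4- and 8-neighborhood and adjacency ideas above, assuming a grayscale image stored as a 2D list and a set V of intensity values that defines the similarity criterion. The function names and the sample image are illustrative, not taken from the source.

    def neighbors_4(r, c):
        """Coordinates of the 4-neighbors (up, down, left, right) of pixel (r, c)."""
        return [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

    def neighbors_8(r, c):
        """Coordinates of the 8-neighbors: the 4-neighbors plus the four diagonals."""
        diagonals = [(r - 1, c - 1), (r - 1, c + 1), (r + 1, c - 1), (r + 1, c + 1)]
        return neighbors_4(r, c) + diagonals

    def adjacent_4(img, p, q, V):
        """p and q are 4-adjacent if q is a 4-neighbor of p and both values lie in V."""
        return q in neighbors_4(*p) and img[p[0]][p[1]] in V and img[q[0]][q[1]] in V

    # Example: a tiny binary image where V = {1} selects the foreground pixels.
    img = [
        [0, 1, 0],
        [1, 1, 0],
        [0, 0, 1],
    ]
    print(adjacent_4(img, (0, 1), (1, 1), V={1}))   # True: vertical 4-neighbors, both in V
    print(adjacent_4(img, (1, 1), (2, 2), V={1}))   # False: diagonal pixels are not 4-adjacent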

AI

CH-2 – What is an Image – Part-1

An image, in digital form, is a representation of a photograph, drawing, or text document. Mathematically, it is described as a two-dimensional function whose sampled values form pixels, each carrying a specific brightness value. Digital images come in three basic file formats: Vector, Raster, and Meta. Vector images are defined by mathematical formulas, allowing high quality at small file sizes. Raster images are composed of pixels, forming a dot-pattern image, and are suited to photographs and scanned images. Meta images combine vector and raster information, which is useful for including text and annotations in raster images while maintaining quality.
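
A minimal sketch, assuming NumPy, of the raster view described above: the image as a two-dimensional function sampled into a grid of pixels, each holding a brightness value. The array values and coordinates are purely illustrative.

    import numpy as np

    # A 4x4 grayscale raster image as a 2D array of brightness values (0 = black, 255 = white).
    f = np.array([
        [  0,  64, 128, 255],
        [ 64, 128, 255, 128],
        [128, 255, 128,  64],
        [255, 128,  64,   0],
    ], dtype=np.uint8)

    x, y = 2, 1
    print(f"f({x}, {y}) = {f[y, x]}")   # brightness of the pixel at column 2, row 1
    print("image size:", f.shape)       # (rows, columns) = (4, 4)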

AI

CH-3 – Image Analysis – Part-5

Wavelet feature extraction involves transforming images into wavelet coefficients. The number of data elements per image is determined from the image's rows and columns, which are typically brought to a power of two so the transform can be applied, and the total number of coefficient values produced by the wavelet is then calculated. Classification employs a Dual Statistical Approach, combining parametric and non-parametric measures such as the MEAN, NORM, Variance, and KS TEST. The Minimum Distance Classifier, using Euclidean, Normalized Euclidean, or Mahalanobis distance, classifies unknown image data by assigning each sample to the class that lies closest to it in the multi-feature space.
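
Below is a minimal sketch of a Minimum Distance Classifier, assuming each class is summarized by the mean of its training feature vectors and an unknown sample is assigned to the nearest class mean. Only the Euclidean distance is shown; the normalized Euclidean and Mahalanobis variants would additionally scale by the per-feature variance or use the class covariance matrix. The class names and feature values are illustrative assumptions.

    import numpy as np

    # Training feature vectors per class (e.g., wavelet or histogram features).
    training = {
        "class_A": np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]]),
        "class_B": np.array([[5.0, 6.0], [5.3, 5.8], [4.8, 6.2]]),
    }

    # Each class is represented by its mean vector in the multi-feature space.
    class_means = {name: vecs.mean(axis=0) for name, vecs in training.items()}

    def classify(sample):
        """Assign the sample to the class whose mean is closest (Euclidean distance)."""
        distances = {name: np.linalg.norm(sample - mean)
                     for name, mean in class_means.items()}
        return min(distances, key=distances.get)

    print(classify(np.array([1.1, 2.0])))   # -> class_A
    print(classify(np.array([5.1, 6.1])))   # -> class_B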

AI

CH-3 – Image Analysis – Part-4

Edge feature extraction involves two elements: the Edge Map and the Edge Direction. The Edge Map provides an edge representation of the image, produced by the Canny Edge Detector algorithm. Edge Direction divides the full range of edge orientations into four parts, calculated using Gaussian values. Slope feature extraction comprises Slope Magnitude, Slope Direction, and Slope Signs: Slope Magnitude measures the maximum rate of intensity change, Slope Direction gives the direction of that change, and Slope Signs categorize the values as positive, negative, or zero, which is useful for classification. All features are calculated tile-wise and arranged into arrays for analysis, enabling detailed image characterization.
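
A minimal sketch of these edge and slope features, assuming OpenCV (cv2) is available: a Canny edge map, edge directions quantized into four bins, and slope magnitude, direction, and signs derived from Sobel gradients. The input filename, Canny thresholds, and the particular 4-way binning scheme are assumptions for illustration, not taken from the source.

    import cv2
    import numpy as np

    img = cv2.imread("tile.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input tile

    # Edge Map: binary image of edge pixels from the Canny edge detector.
    edge_map = cv2.Canny(img, 100, 200)

    # Gradient components, then orientation in degrees for every pixel.
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
    angle = np.degrees(np.arctan2(gy, gx)) % 180          # fold orientations into [0, 180)

    # Edge Direction: quantize orientations into four parts
    # (0-45, 45-90, 90-135, 135-180 degrees), counted only on edge pixels.
    bins = np.digitize(angle[edge_map > 0], [45, 90, 135])
    direction_histogram = np.bincount(bins, minlength=4)
    print(direction_histogram)                            # edge-pixel count per direction bin

    # Slope features (a rough analogue): magnitude of the intensity change,
    # its direction, and the sign of the horizontal derivative per pixel.
    slope_magnitude = np.sqrt(gx ** 2 + gy ** 2)
    slope_direction = np.degrees(np.arctan2(gy, gx))
    slope_signs = np.sign(gx)                             # -1, 0, or +1 per pixel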

AI

CH-3 – Image Analysis – Part-3

Parametric data follows a known distribution with a well-defined spread about the mean, while non-parametric data does not follow any specific distribution. Because the mean frame histogram data is non-parametric, the KS Test is used to compare it. Four features are chosen for extraction: Histogram, Edge, Slope, and Wavelets. Histogram feature extraction involves arranging the data in columns; because different distributions can have identical mean values, the matrix is weighted with a column vector. This weighting ensures unique results, which is crucial for decision-making, since the resulting mean-and-variance pairs differ between distributions and offer valuable insight.
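
A minimal sketch of two points from this part, using illustrative data: first, two different distributions can share the same mean, so weighting with a column vector keeps the results distinguishable; second, the two-sample KS test compares non-parametric (distribution-free) samples. The weight vector and sample values below are assumptions for demonstration only.

    import numpy as np
    from scipy.stats import ks_2samp

    # Two histogram columns with the same mean (3.0) but different distributions.
    col_a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    col_b = np.array([2.25, 2.5, 2.75, 3.25, 4.25])
    print(col_a.mean(), col_b.mean())              # 3.0 and 3.0: the mean alone cannot separate them

    # Weighting element-wise with a column vector breaks the tie between the means.
    w = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # hypothetical weight vector
    print((w * col_a).mean(), (w * col_b).mean())  # 11.0 and 9.95: now distinguishable

    # Two-sample KS test: are the two samples drawn from the same distribution?
    statistic, p_value = ks_2samp(col_a, col_b)
    print(statistic, p_value)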