Color-to-grayscale conversion simplifies image-processing tasks that depend only on intensity and reduces processing time. Common conversion methods include simple averaging of the color channels and weighted (proportional) averaging. The Haar wavelet, one of the simplest wavelets, is widely used because it is easy to implement. It plays an important role in digital image compression, underlying standards such as JPEG-2000 and the FBI's WSQ method for fingerprint compression. Through hierarchical decomposition, wavelets reconstruct an image from basic building blocks, enabling efficient storage and transmission. Applied to an image, the Haar wavelet produces both positive and negative coefficients, which carry useful information for decision-making. Because the wavelet transform is separable, it can be computed as one-dimensional transforms applied first horizontally and then vertically. The transformed image contains micro (small-scale) and macro (global) information, the latter being more significant for similarity analysis. Thresholding removes unwanted detail, retaining the dominant spectral properties.
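The ideas above can be sketched in a few lines: a weighted (luminosity) grayscale conversion, one level of the 1-D Haar transform (pairwise averages carry the macro information, pairwise differences the micro detail), and a hard threshold on the detail coefficients. This is a minimal illustration, not the exact pipeline from the text; the weights 0.299/0.587/0.114 are the common luminosity coefficients.

```python
def to_gray(r, g, b):
    """Weighted (luminosity) grayscale conversion of one RGB pixel."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def haar_step(signal):
    """One level of the 1-D Haar transform on an even-length sequence.
    Returns (averages, differences): averages hold the coarse (macro)
    structure, differences the fine (micro) detail."""
    averages = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    differences = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return averages, differences

def threshold(coeffs, t):
    """Zero out detail coefficients whose magnitude is below t."""
    return [c if abs(c) >= t else 0 for c in coeffs]

row = [9, 7, 3, 5]              # one grayscale image row
avg, diff = haar_step(row)      # avg = [8.0, 4.0], diff = [1.0, -1.0]
```

Because the transform is separable, the same `haar_step` can be applied to every row and then to every column of the resulting coefficient grid to get the 2-D transform.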
CH-2 – Image Processing – Part-2
Image processing encompasses image restoration, enhancement, and compression. It manipulates images that have already been captured or generated, which distinguishes it from computer graphics, where synthetic images are created. Pixels in a digital image have important relationships, including neighborhoods, adjacency, connectivity, regions, and boundaries. Pixels are considered connected if they are adjacent and satisfy a similarity criterion, such as having the same grayscale value. Types of adjacency include 4-adjacency, 8-adjacency, and mixed (m-) adjacency, the last of which resolves ambiguities that arise with 8-adjacency. Paths between pixels are defined according to the adjacency type, giving 4-, 8-, or m-paths. A region is a connected set of pixels, and its boundary is the set of pixels in the region that have neighbors outside it. Edges, based on local intensity discontinuities, are a local concept, whereas boundaries are global and form closed paths.
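The adjacency relations can be made concrete with a small sketch, assuming a tiny binary image and the similarity criterion "pixel value belongs to V = {1}". The function names and the example grid are illustrative, not from the original text.

```python
def neighbors4(p):
    """N4(p): the horizontal and vertical neighbors of pixel p = (row, col)."""
    r, c = p
    return {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}

def neighbors8(p):
    """N8(p): N4(p) plus the four diagonal neighbors."""
    r, c = p
    diagonals = {(r - 1, c - 1), (r - 1, c + 1), (r + 1, c - 1), (r + 1, c + 1)}
    return neighbors4(p) | diagonals

def adjacent(img, p, q, kind="4", V=frozenset({1})):
    """p and q are adjacent if q lies in the chosen neighborhood of p and
    both pixel values satisfy the similarity criterion (membership in V)."""
    nb = neighbors4(p) if kind == "4" else neighbors8(p)
    return q in nb and img[p[0]][p[1]] in V and img[q[0]][q[1]] in V

img = [[1, 0],
       [0, 1]]
# The two 1-pixels touch only diagonally: 8-adjacent, but not 4-adjacent.
```

This diagonal-only case is exactly where m-adjacency steps in, accepting the diagonal link only when no 4-connected path already joins the two pixels.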
CH-2 – What is an Image – Part-1
An image, in digital form, is a representation of a photograph, drawing, or text document. Mathematically, it is described as a two-dimensional function whose samples, the pixels, each carry a specific brightness value. Digital images come in three basic file formats: vector, raster, and meta. Vector images are defined by mathematical formulas, allowing high quality at small file sizes. Raster images are composed of pixels forming a dot-pattern image and are suited to photographs and scanned images. Meta images combine vector and raster information, which is useful for adding text and annotations to raster images while maintaining quality.
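The "two-dimensional function" view can be sketched directly: a raster image is a grid where each entry is a pixel's brightness. The 4x4 grid and its values below are purely illustrative.

```python
# A grayscale raster image as the 2-D function f(x, y):
# f[row][col] holds that pixel's brightness (0-255).
f = [
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [ 16,  80, 144, 208],
    [  8,  72, 136, 200],
]

def brightness(img, x, y):
    """Sample the image function at spatial coordinates (x, y)."""
    return img[y][x]

rows, cols = len(f), len(f[0])   # spatial resolution: 4 x 4 pixels
```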
CH-3 – Image Analysis – Part-5
Wavelet feature extraction transforms an image into wavelet coefficients. The number of data elements per image is determined from the image's rows and columns, which are typically brought to a power of two, and the total number of coefficient values after applying the wavelet is computed from these dimensions. Classification uses a dual statistical approach that combines parametric and non-parametric measures such as the mean, the norm, the variance, and the Kolmogorov–Smirnov (KS) test. The Minimum Distance Classifier, using Euclidean, normalized Euclidean, and Mahalanobis distances, classifies unknown image data by assigning each sample to the class whose representative is nearest in the multi-feature space.
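A minimal sketch of the Minimum Distance Classifier, assuming each class is summarized by its mean feature vector and using plain Euclidean distance. The class names and feature values are hypothetical; the normalized-Euclidean and Mahalanobis variants would additionally divide by per-feature variances or use the full covariance matrix.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def classify(sample, class_means):
    """Assign the sample to the class whose mean vector is nearest."""
    return min(class_means, key=lambda name: euclidean(sample, class_means[name]))

# Hypothetical 2-feature class representatives.
means = {"forest": [0.2, 0.8], "water": [0.1, 0.1]}
label = classify([0.15, 0.7], means)   # nearest mean is "forest"
```

Swapping `euclidean` for a Mahalanobis distance only changes the distance function; the minimize-over-classes decision rule stays the same.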
CH-3 – Image Analysis – Part-4
Edge feature extraction involves two elements: the edge map and the edge direction. The edge map is an edge representation of the image, obtained with the Canny edge detector algorithm. Edge direction divides the full range of directions into four parts, computed from the gradients of the Gaussian-smoothed image. Slope feature extraction comprises slope magnitude, slope direction, and slope signs: slope magnitude measures the maximum intensity change, slope direction gives the direction of that change, and slope signs categorize values as positive, negative, or zero, which is useful for classification. All features are computed tile-wise and arranged into arrays for analysis, enabling detailed image characterization.
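The slope features can be sketched with simple finite differences on a grayscale tile; this is an illustrative approximation, not the exact operators from the text. Slope magnitude is the size of the intensity change, slope direction its angle, and slope signs the positive/negative/zero category of each difference.

```python
import math

def gradients(tile, r, c):
    """Horizontal and vertical intensity differences at pixel (r, c)."""
    gx = tile[r][c + 1] - tile[r][c]
    gy = tile[r + 1][c] - tile[r][c]
    return gx, gy

def slope_magnitude(gx, gy):
    """Size of the intensity change at a pixel."""
    return math.sqrt(gx ** 2 + gy ** 2)

def slope_direction(gx, gy):
    """Angle of the intensity change, in radians."""
    return math.atan2(gy, gx)

def slope_sign(value):
    """Categorize a slope value as positive (1), negative (-1), or zero (0)."""
    return 1 if value > 0 else (-1 if value < 0 else 0)

tile = [[10, 14],
        [13, 20]]
gx, gy = gradients(tile, 0, 0)   # gx = 4, gy = 3
```

Running these functions over every pixel of every tile and collecting the results into arrays gives exactly the tile-wise feature layout described above.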