*This section is written in English.*

You will find in this section an attempt to gather general notes on biomedical image processing. Biomedical image processing is a relatively recent scientific domain at the interface of multiple specialties, including mathematics (geometry, information theory, signal analysis, optimization, ...), computer science (implementation of algorithmic methods, parallel programming, user interfaces, ...), physics (medical image reconstruction, deconvolution, fluid mechanics, tracer properties, ...), medicine (basic knowledge of anatomy, tissue properties, biomarkers of specific diseases) and biology (microscopy cell analysis, histology, animal models). We propose here an overview of this large domain, spanning many important notions from the pixel to the patient.

Image processing is a very large research domain which includes satellite image analysis, handwritten character recognition, quality assessment on production chains, stereoscopic vision for robots, security camera detection, image tagging, compression, etc. These notes focus specifically on the processing and analysis of biomedical images, a subcategory of image processing. However, when a section deals with general image processing concepts, illustrative examples may be taken from other scientific domains, or even from everyday life (such as digital photography) when it helps understanding.

**Definition of a digital image, spacing, dimensions**

A typical 3D image is a function I which associates to a point X of coordinates (x,y,z) a scalar value usually called the intensity I(x,y,z). The set where the intensities live can be, for example, a range of integers (such as 0 to 255 for 8-bit images) or the real numbers.

More generally, an n-D image is a function which associates to an n-tuple (x_{1},x_{2},...,x_{i},...,x_{n}) in Ω a p-tuple (y_{1},y_{2},...,y_{j},...,y_{p}) belonging to a destination set I. For example, a movie can be considered as a 4D color image where the 4-tuple is (x,y,z,t) and the p-tuple is the RGB value of each point. In the case of a 2D image whose destination set is a set of scalars, an image is similar to a mathematical matrix.
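As an illustration of this definition, the following sketch (using numpy, with arbitrary example dimensions) shows how a 2D scalar image reduces to a matrix, and how a color movie is an n-D image mapping a 4-tuple (x,y,z,t) to a 3-tuple of RGB values:

```python
import numpy as np

# A 2D grayscale image with scalar intensities is simply a matrix:
# the intensity I(x, y) is the value stored at index (x, y).
image_2d = np.zeros((512, 512), dtype=np.uint8)  # 512x512 pixels, 8-bit intensities

# A color movie seen as a 4D image: the 4-tuple (x, y, z, t)
# maps to a p-tuple of p = 3 color components (R, G, B).
movie = np.zeros((64, 64, 32, 100, 3), dtype=np.uint8)  # axes: x, y, z, t, RGB

# Reading the value associated with one point:
intensity = image_2d[10, 20]   # a single scalar
rgb = movie[10, 20, 5, 42]     # a 3-tuple of color values
```

The array shapes here are chosen purely for illustration; any dimensions and any intensity type (integers, floats) fit the same scheme.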

A theoretical image has a dense coordinate set, which means that the intensity function is defined for any point of the definition domain. In real life, the intensity function of a digital image is given on a finite subset of points only, which means that we have a discrete representation of the function. Usually, this subset of points forms a regularly spaced grid, each node of which is called a **pixel** (in 2D), standing for "picture element", or more generally a **voxel**, standing for "volume element".
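The difference between the theoretical (dense) image and its digital counterpart can be sketched as follows: a continuous intensity function is evaluated only on a regular grid of pixels. The function chosen here is arbitrary, for illustration only:

```python
import numpy as np

# A "theoretical" intensity function, defined for any point (x, y) of the domain.
def I(x, y):
    return np.sin(x) * np.cos(y)

# The digital image: evaluate I only at the nodes of a regularly spaced grid.
xs = np.linspace(0.0, np.pi, 64)   # 64 sample positions along x
ys = np.linspace(0.0, np.pi, 64)   # 64 sample positions along y
grid_x, grid_y = np.meshgrid(xs, ys, indexing="ij")
digital_image = I(grid_x, grid_y)  # a 64x64 discrete representation

print(digital_image.shape)  # (64, 64)
```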

For a given image, an origin point O and direction axes are to be defined: for a typical 3D image, they are usually called the X, Y and Z axes. The distance between two consecutive voxels along a direction is called the **spacing** along that particular direction and is usually noted S_{X}, S_{Y} or S_{Z}. The spacing is usually expressed in *mm* and it indicates the link between the digitized image and the physical object it represents in the real world. A microscopy image will typically have a spacing of the order of a micron (10^{-3}mm) in each direction, whereas an image of the Manhattan skyline will have a spacing of the order of a meter (10^{3}mm) in each direction.

The spacing of an image is sometimes referred to as the image **sampling**, or as the image **resolution** (although this latter concept mostly designates the maximum separation power of an imaging device). The spacing is usually equivalent to the **voxel size**. When the spacing is the same in all directions, the image is called isotropic. As we saw previously, the intensity within one voxel region is constant. Note that for 2D images, when an image is to be displayed on a screen, printed or scanned, the spacing is also sometimes expressed in *dots per inch (dpi)*, which simply indicates the number of pixels in one inch (1 inch = 25.4mm).
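The role of the origin and spacing can be made concrete with a short sketch: converting voxel indices to physical coordinates, computing the physical extent of an image, and converting dpi to a spacing in mm. The origin, spacing and dimension values below are arbitrary examples:

```python
# Spacing links voxel indices to physical coordinates (here in mm).
spacing = (0.5, 0.5, 2.0)   # S_X, S_Y, S_Z in mm (anisotropic: S_Z differs)
origin = (0.0, 0.0, 0.0)    # physical position of voxel (0, 0, 0)

def voxel_to_physical(i, j, k):
    """Physical position (in mm) of the voxel with indices (i, j, k)."""
    return (origin[0] + i * spacing[0],
            origin[1] + j * spacing[1],
            origin[2] + k * spacing[2])

# Physical extent of a 256x256x100 voxel image:
dims = (256, 256, 100)
extent_mm = tuple(d * s for d, s in zip(dims, spacing))  # (128.0, 128.0, 200.0)

# dpi <-> spacing for 2D display/printing: 1 inch = 25.4 mm.
def dpi_to_spacing_mm(dpi):
    return 25.4 / dpi

print(dpi_to_spacing_mm(300))  # about 0.085 mm between pixels at 300 dpi
```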