IMPROVEMENT PROCEDURE FOR IMAGE SEGMENTATION OF FRUITS AND VEGETABLES BASED ON THE OTSU METHOD

Currently, there are significant challenges in the classification, recognition, and detection of fruits and vegetables. An important step in solving this problem is to obtain an accurate segmentation of the object of interest. However, separating background and object in a grayscale image yields high errors for some thresholding techniques when the lighting is uneven or poorly conditioned. An accepted strategy to reduce segmentation errors is to select the channel of an RGB image with the highest contrast. This paper presents the results of an experimental procedure for enhancing binary segmentation based on the Otsu method. The procedure was carried out with images of real agricultural products, both with and without additional noise, to corroborate the robustness of the proposed strategy. The experimental tests were performed on our database of RGB images of agricultural products acquired under uncontrolled illumination. The results show that the best segmentation is achieved by selecting the Blue channel of the RGB test images due to its higher contrast. The quantitative results are measured by applying the Jaccard and Dice metrics with ground-truth images as the optimal reference. Most of the results with both metrics show an improvement greater than 45.5% in the two experimental tests.


INTRODUCTION
In the marketing of agricultural products, quality is evaluated according to properties such as color, shape, and size. These products are inspected visually, a time-consuming process that generates high operational costs. This strategy is susceptible to human error, and it can make it difficult to standardize classification results. A selection error can affect the quality and commercial value of an agricultural product (Yuan et al., 2015; Belan et al., 2020; Mukhiddinov et al., 2022). A frequent problem identified in vision systems and intelligent applications that process images of agricultural products is the uncontrolled illumination of the environment. This issue generates images with low contrast, saturated color areas, and problems with brightness and shadows. Consequently, these factors hide information that any inspection algorithm needs in order to provide a proper solution for product classification (Yuan et al., 2015; Alegre et al., 2016). Dim illumination can lead to a low signal-to-noise ratio, which can corrupt an image with noisy pixels (Gonzalez et al., 2008; Russ, 2016). Likewise, uneven lighting can significantly hinder segmentation when thresholding techniques, such as Otsu's method, are applied (Ng, 2006; Gonzalez et al., 2008; Yuan et al., 2015; Alegre et al., 2016).
Image segmentation is a crucial step for analyzing and interpreting acquired images in various fields such as medical imaging, agriculture, robotic vision, materials science, and geographical imaging (Sha et al., 2016; Lu et al., 2017; Goh et al., 2018; Resma et al., 2018; Song et al., 2020). To enhance the efficiency of a vision system, adequate lighting or an optimal segmentation algorithm is required to obtain a correct image for further processing. A high-quality image can then provide the information necessary for detecting and identifying objects, along with their relevant characteristics (Alegre et al., 2016). An ideal image for object segmentation is one in which the pixels representing the object of interest share similar brightness characteristics, distinct from the background (Gonzalez et al., 2008; Russ, 2016). Numerous segmentation methods have therefore been proposed to cope with the non-ideal nature of real images, which is caused by a multitude of variables.
Thresholding is a popular segmentation technique in many applications due to its simplicity and efficiency (Liu et al., 2015; Sha et al., 2016; Goh et al., 2018; Lei et al., 2019; Song et al., 2020). Basically, two types of thresholding exist: global and local. Global thresholding selects a single value from the histogram of the entire image. Local thresholding, on the other hand, uses the gray-level information of the histogram to determine several threshold values. Global thresholding is simpler to implement, but its results rely on good, i.e., uniform, illumination (Ng, 2006; Yuan et al., 2015). Local thresholding methods yield the best results on images of texts or manuscripts with nonuniform illumination, but their algorithms require more computation, leading to slower processing (Gatos et al., 2006; Singh et al., 2012).
The Niblack (1986) and Sauvola et al. (2000) binarization algorithms are among the most sophisticated techniques that employ local thresholds. These algorithms obtain several local thresholds for the neighborhoods or subregions of the image using a sliding window to calculate statistics such as the mean, variance, and standard deviation. Gatos et al. (2006) proposed an adaptive methodology for binarizing documents degraded by shadows, nonuniform illumination, low contrast, and other factors. In their proposal, a pre-processing stage with a Wiener filter estimates the background, and thresholding is then applied using the Sauvola technique. Bradley et al. (2007) proposed an adaptive threshold search algorithm that calculates the mean in each local neighborhood.
In this approach, the value of the local threshold depends on local statistics, such as the range, the variance, or surface-fit parameters of the pixel neighborhood. However, while these methods are tolerant to illumination changes, they can be sensitive to noise, potentially degrading the final segmentation output (Bataineh et al., 2017; Cheremkhin et al., 2019). Global thresholding is commonly used for its simplicity and speed in applications such as automated visual inspection, where the illumination conditions are controlled, i.e., uniform (Sezgin et al., 2004; Ng, 2006; Liu et al., 2015).
Among the global thresholding techniques, Sahoo et al. (1988) concluded that the method described by Otsu (1979) is one of the best threshold selection methods for real-world images with respect to uniformity and shape measures. This method selects one or several threshold values that maximize the between-class variance of the histogram. However, it is only optimal for histograms with a bimodal or multimodal distribution, i.e., for images where the object and background distributions are clearly defined in shape and size. Therefore, the Otsu method does not yield the desired results if the histogram has a unimodal or near-unimodal distribution (Liu et al., 2015; Lei et al., 2019). In Chávez et al. (2022), it was verified that thresholding algorithms obtain better results when the images show higher contrast between background and object.
Thresholding algorithms are commonly used for product quality inspection with computer vision (Fan et al., 2021). The basic idea is to automatically obtain an optimal gray-level threshold that separates the object of interest from the background. The threshold value can be derived from the gray-level distribution in the histogram of a digital image (Goh et al., 2018; Resma et al., 2018; Lei et al., 2019). Aiming to improve the binary segmentation of fruit and vegetable images with the Otsu method under an uncontrolled lighting environment, the following procedure is proposed under the premise that gray images with higher contrast yield better results in a binary segmentation process. The manuscript is structured as follows to corroborate this hypothesis. Section 2 describes the basic concepts, methods, and metrics. In Section 3, two experimental tests, with and without additional noise, are carried out, applying two operations: conversion with the luminance equation and selection of the Blue channel of each RGB image. Results and discussion are detailed in Section 4. Finally, Section 5 presents the conclusions.

THEORETICAL FOUNDATIONS
In this section, the principal concepts and techniques employed in this article are described in detail. The nomenclature used throughout the document is summarized in the nomenclature box included in this section.

DIGITAL IMAGE
An image is defined as a two-dimensional (2D) function f(x, y), where x and y are spatial coordinates in the (x, y) plane. The value of f(x, y) at any point is called the intensity of the image, with a range of [0, L − 1], where L = 256. When the intensity values of f(x, y) are finite and discrete quantities, the image is a digital image. It is composed of a finite number of elements called pixels, each with a particular location (x, y) and a corresponding value (Gonzalez et al., 2008; Sundararajan, 2017).
Meanwhile, an RGB image is composed of three main channels, I_RGB = [R, G, B], where R, G, and B are the pixel intensities of the Red, Green, and Blue components, respectively. Therefore, an image in the RGB color space is represented by three image components of size M × N with conventional brightness intensities between 0 and 255. Frequently, the term grayscale is employed in digital images, meaning that each pixel value carries only light-intensity information. A grayscale image can be defined as a function I_g(x, y) with values in [0, G − 1], ranging from the darkest black to the brightest white through various levels of gray. Here, G represents the maximum base value 2^k with k = 8 bits. That is, the gray value of a pixel is usually represented by a combination of eight binary digits (Gonzalez et al., 2008; Sundararajan, 2017).
An RGB image is converted to grayscale by adding the levels of each channel in different proportions: Red (30%), Green (59%), and Blue (11%). This weighting is based on how the human eye perceives the spectral frequencies near the light intensities of these primary colors (Bovik, 2009; Russ, 2016). The phenomenon and the weighting factor of each color component can be expressed in the mathematical equation of luminance,

İ_g(x, y) = 0.299 R(x, y) + 0.587 G(x, y) + 0.114 B(x, y). (1)

Equation (1) is applied to each pixel in the RGB image to convert it from RGB to grayscale. The result is a new matrix İ_g(x, y) of one byte per pixel that provides the luminance information as a single intensity shade of gray (Bovik, 2009; Russ, 2016). A binary image I_b(x, y) can then be obtained from the matrix İ_g(x, y); it is a digital image with only two possible pixel values, I_b(x, y) = 1 if İ_g(x, y) ≥ t and I_b(x, y) = 0 otherwise, for a given threshold t. Normally, the colors used to represent a binary image are black and white, although any pair of colors can be used; one color represents the background and the other the objects (Gonzalez et al., 2008; Umbaugh et al., 2023).
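As an illustrative sketch (not the authors' Matlab code), the luminance conversion of equation (1) and the subsequent binarization can be written as follows, using the standard Rec. 601 weights (approximately 30% R, 59% G, 11% B) on small nested-list "images":

```python
def rgb_to_gray(pixel):
    """Convert one (R, G, B) pixel to a rounded 8-bit gray level
    using the luminance weights of equation (1)."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def binarize(gray_img, t):
    """Threshold a 2-D list of gray levels into a 0/1 binary image:
    pixels at or above t become 1 (object), the rest 0 (background)."""
    return [[1 if v >= t else 0 for v in row] for row in gray_img]

# A 2x2 toy "image": pure red, green, blue, and white pixels.
rgb = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
gray = [[rgb_to_gray(p) for p in row] for row in rgb]
binary = binarize(gray, 128)
```

Note how the green pixel maps to a much brighter gray (150) than the blue one (29), reflecting the eye's higher sensitivity to green.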

ADDITIVE WHITE GAUSSIAN NOISE
Additive White Gaussian Noise (AWGN) is based on a Gaussian noise model that randomly perturbs the pixels of an image with values drawn from a normal distribution; the perturbations follow a Gaussian-shaped histogram. Due to its low mathematical complexity and tractability, the Gaussian noise model is frequently used in practical applications (Gonzalez et al., 2008; Bovik, 2009). The distribution can be described through its probability density function,

p(x) = (1 / √(2πσ²)) exp(−(x − µ)² / (2σ²)),

which is determined by the mean µ and the variance σ² of a random variable x. Given its characteristics, this type of noise is difficult to eliminate completely from digital images.
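A minimal sketch of corrupting an image with AWGN, assuming intensities normalized to [0, 1] (as Matlab's imnoise does); this is an illustration, not the paper's code. Each pixel receives an independent Gaussian sample with the requested mean and variance, and the result is clipped back into range:

```python
import random

def add_awgn(img, mu=0.0, var=0.001, seed=None):
    """Add Gaussian noise N(mu, var) to a 2-D list of intensities
    in [0, 1], clipping the noisy values back to [0, 1]."""
    rng = random.Random(seed)
    sigma = var ** 0.5  # gauss() takes the standard deviation
    return [[min(1.0, max(0.0, v + rng.gauss(mu, sigma)))
             for v in row] for row in img]

# A flat mid-gray 4x4 image perturbed with mu = 0, variance = 0.001.
clean = [[0.5] * 4 for _ in range(4)]
noisy = add_awgn(clean, mu=0.0, var=0.001, seed=42)
```

The `seed` parameter is only for reproducibility of the sketch; in the experiments the noise realization would vary per image.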

IMAGE CONTRAST
Grayscale image contrast can be defined as the intensity difference between the highest and lowest gray levels in the image. In other words, if the image has an appreciable number of pixels with a high dynamic range, it is likely to display high contrast. On the contrary, an image with a low dynamic range tends to have a washed-out, dull gray appearance (Gonzalez et al., 2008; Russ, 2016). Fig. 1 illustrates two synthetic grayscale images with an AWGN noise vector (µ = 0, σ² = 0.001), showing different contrast between background (outer square) and object (inner square). The image in Fig. 1a shows a better grayscale contrast between background and object, whereas the image in Fig. 1b shows low contrast between them.

OTSU'S METHOD
Otsu's method is one of the best threshold selection methods for real images in terms of uniformity and shape measures (Liu et al., 2015; Lu et al., 2017; Resma et al., 2018; Lei et al., 2019). However, Otsu's method performs an exhaustive search to evaluate its criterion, and as the number of classes in an image increases, so does the time required for multithreshold selection. Let p(i) be the probability of occurrence of gray level i in the image histogram. The probabilities of the two classes separated by a threshold t are

ω_0 = Σ_{i=0}^{t−1} p(i), ω_1 = Σ_{i=t}^{L−1} p(i),

and the mean intensity of each class can be defined as

µ_0 = (1/ω_0) Σ_{i=0}^{t−1} i p(i), µ_1 = (1/ω_1) Σ_{i=t}^{L−1} i p(i),

where i ranges from 0 to t − 1 for µ_0 and from t to L − 1 for µ_1. The total average intensity of the image is defined as

µ_T = Σ_{i=0}^{L−1} i p(i). (5)

By using discriminant analysis, the between-class variance of a thresholded image can be calculated as

σ_B²(t) = ω_0 (µ_0 − µ_T)² + ω_1 (µ_1 − µ_T)² = ω_0 ω_1 (µ_0 − µ_1)².

The optimal threshold for image segmentation is then the value that maximizes the between-class variance,

t* = arg max_{0 ≤ t < L} σ_B²(t).
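The search described by the equations above can be sketched compactly in pure Python (an illustration, not the authors' implementation): for every candidate t, the class weights ω_0, ω_1 and means µ_0, µ_1 are accumulated from the histogram, and the t maximizing ω_0 ω_1 (µ_0 − µ_1)² is kept.

```python
def otsu_threshold(hist):
    """Return the Otsu threshold t for a histogram given as a list of
    counts; pixels with gray level >= t fall in the second class."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(1, len(hist)):
        w0 += hist[t - 1]               # class 0 weight (unnormalized)
        sum0 += (t - 1) * hist[t - 1]   # class 0 intensity sum
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy histogram (L = 16): a dark mode around level 2 and a
# bright mode around level 12, which Otsu separates cleanly.
hist = [0, 5, 20, 5, 0, 0, 0, 0, 0, 0, 0, 5, 20, 5, 0, 0]
t = otsu_threshold(hist)
```

Normalizing the counts by `total` is unnecessary here because the common factor cancels in the argmax.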

SIMILARITY METRICS
Since a visual or graphic comparison is insufficient to evaluate the differences between images, two statistical metrics, Dice and Jaccard, are used to measure the results. These similarity metrics are commonly employed to evaluate segmentation performance (Zou et al., 2004; Taha et al., 2015; Chung et al., 2019).

Dice
The Dice Similarity Coefficient (DSC) is one of the most widely used metrics for validating binary segmentation (Zou et al., 2004; Taha et al., 2015). It compares the similarity of a resulting binary image A with a true reference image B. The DSC is calculated by the following equation,

DSC = 2|A ∩ B| / (|A| + |B|),

where the symbol ∩ denotes the intersection of two sets and |·| represents the cardinality of a set.

Jaccard
The Jaccard Similarity Index (JSI), in digital images, compares the pixels of two sets [A, B] to determine which subsets of pixels [ϕ, φ] are shared and which are different (Kosub, 2019). Fundamentally, it is a measure of similarity between the two data sets, expressed either as a percentage from 0 to 100 or normalized from 0 to 1. The more similar the two pixel populations A and B are, the closer the Jaccard index is to 1. Equation (11) is used to calculate it (Taha et al., 2015; Kosub, 2019):

JSI = |A ∩ B| / |A ∪ B|, (11)

where the symbols ∩ and ∪ represent the intersection and union of two sets, respectively, and |·| denotes the cardinality of each set.
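Both metrics can be sketched for binary masks represented as flat 0/1 lists (an illustrative implementation under that assumption, not the authors' evaluation code):

```python
def dice(a, b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for flat 0/1 masks of equal length."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def jaccard(a, b):
    """JSI = |A ∩ B| / |A ∪ B| for flat 0/1 masks of equal length."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union

# Two toy masks agreeing on 2 of 4 foreground positions.
A = [1, 1, 1, 0, 0, 0]
B = [1, 1, 0, 1, 0, 0]
d, j = dice(A, B), jaccard(A, B)
```

The two metrics are monotonically related (JSI = DSC / (2 − DSC)), so they rank segmentations identically, but the DSC weights the overlap more generously, which is why the paper reports both.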

METHODOLOGY
In this section, a detailed description of the methodology is presented, based on two principal stages: Image Acquisition and Experimental Procedure. The database in Image Acquisition was built from fruits and vegetables easily accessible in our environment. In Experimental Procedure, two processing tests are proposed, one employing the raw images and another adding AWGN noise.

IMAGE ACQUISITION
The images for the experimental tests were selected from our database of 320 RGB images of size 1536 × 2048 pixels; those with the greatest lighting problems were chosen. The selected products can be appreciated in Fig. 2 and are commercially known as white onion, caribbean chili, yellow apple, potato, yellow bell pepper, xcatik chili, and pear, in that order. These images were captured in a natural lighting environment using an expanded polystyrene booth of 50 cm × 80 cm × 50 cm (length, width, and height, respectively). A high-definition Logitech C270 webcam with a USB connection, 720p video capture at 30 frames per second, and a maximum effective resolution of 3 megapixels was used. The webcam was placed horizontally 50 cm from the object, as illustrated in Fig. 3a. Images were processed and analyzed using Matlab R2018a running on Windows 10 Pro 64-bit, on an HP workstation with an Intel(R) Xeon(R) E3-1226 v3 @ 3.30 GHz processor. Fig. 3b shows an image of the real scenario in which the test images were obtained, captured with a mobile phone camera. Several samples were acquired under different conditions; however, only the images obtained with internal natural lighting and the laboratory closed were used for this experiment. Under these conditions, the dividing line at the bottom of the booth is not perceived in the acquired images.

EXPERIMENTAL PROCEDURE
The experimental procedure comprises two tests, processing the original images (I_m) and noisy images (I_m + AWGN), labeled e1 and e2, respectively. For the e2 test, AWGN was added to the selected RGB images to probe the robustness of the proposed strategy. The general methodology is shown in the diagram of Fig. 4. In each experiment, e1 and e2, a processing algorithm is applied that produces four resulting binary images (İG, İO, ÏG, and ÏO). In the processing algorithm, sketched in Fig. 4, two intensity images are obtained from each selected RGB image. The first image, İg, was obtained with the Matlab command rgb2gray, which implements equation (1). The second image, Ïg, is obtained by selecting the Blue channel from the channel separation of the RGB image. This channel is selected because the Blue component presents a greater gray-intensity contrast between background and object, which suggests a greater separation between the two pixel subsets ϕ and φ. Therefore, a more accurate threshold t can be calculated to obtain a binary image with better segmentation. The İg and Ïg images can be seen in Fig. 5. Next, the threshold values of the İg and Ïg images are calculated with the commands graythresh (GTh) and otsuthresh (OTh). GTh calculates a global threshold t from the grayscale values of the image, whereas OTh calculates a global threshold t from the counts of the image histogram. Finally, the images are binarized with the threshold t obtained in the previous step.
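The channel-selection step of the pipeline can be sketched end to end as follows. This is a hypothetical illustration: a simple midpoint rule stands in for graythresh/otsuthresh, and the toy scene (bright-blue background, dark object) only mimics the contrast behavior the paper reports for the Blue channel.

```python
def blue_channel(rgb_img):
    """Keep only the B component of each (R, G, B) pixel."""
    return [[p[2] for p in row] for row in rgb_img]

def global_threshold(gray_img):
    """Toy stand-in for a global threshold: midpoint of min and max."""
    flat = [v for row in gray_img for v in row]
    return (min(flat) + max(flat)) / 2

def segment(rgb_img):
    """Blue-channel extraction, global threshold, binarization."""
    gray = blue_channel(rgb_img)
    t = global_threshold(gray)
    return [[1 if v >= t else 0 for v in row] for row in gray]

# Toy scene: bright-blue background around a non-blue central object.
img = [[(10, 10, 200)] * 3,
       [(10, 10, 200), (200, 200, 20), (10, 10, 200)],
       [(10, 10, 200)] * 3]
mask = segment(img)
```

In the Blue channel the background (B = 200) and object (B = 20) are far apart, so any reasonable global threshold separates them; the real procedure substitutes Otsu-based thresholds for the midpoint rule.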
The processing algorithm is applied in the same way to the images of the e2 experimental test; the resulting variables carry the subscript n to note the difference. For example, the grayscale images obtained by equation (1) and by RGB channel selection are expressed as İgn and Ïgn, respectively. These are not shown in this work due to their similarity to the images in Fig. 5. As in the experimental test e1, four binary images were obtained following the procedure described in Fig. 4. For the evaluation and comparison of the results, the ground-truth images shown in Fig. 6 were used.

FIRST EXPERIMENT
The first set of results corroborating the improved segmentation of fruits and vegetables with our proposal is described below. Fig. 7a shows the İG and İO images obtained in test e1 by applying the GTh and OTh variants of the Otsu threshold technique to the images shown in Fig. 5. Several samples in the İG row of Fig. 7a exhibit poor segmentation; most were severely affected in both background and object by lighting noise, so the segmentation is visually inaccurate. The least affected images were the bell pepper, xcatik chili, and pear, which present only small shadows of pixels that are not part of the object. The İO row of images was also mostly affected by undesired pixels between background and object; the onion and caribbean chili images were the least affected. The ÏG and ÏO results obtained from the Ïg images displayed in Fig. 5 are shown in Fig. 7b. The ÏG and ÏO rows show problems with extra pixels in the background, except for the onion, which preserves a minimal proportion of black points in both cases; nevertheless, small shadows of pixels can be observed in all images. The results in Fig. 7b show a considerable improvement with respect to those in Fig. 7a, where GTh and OTh were applied to the luminance images: at first sight, the segmentation of the object with respect to the background is of better quality.

SECOND EXPERIMENT
A second stage follows the same procedure described in Fig. 4, with the RGB images disturbed by AWGN with µ = 0 and variance σ² = 0.0002. The results of the experimental test e2 are shown in Fig. 8. The processed images İGn and İOn, based on the conversion İgn, are illustrated in Fig. 8a. Most of these images were additionally affected by AWGN and illumination noise. In the İGn row, the bell pepper, xcatik chili, and pear were the least affected in the background. In the İOn row, the onion was less affected in the background but suffered a loss of the pixels that define the object. Fig. 8b shows the ÏGn and ÏOn images obtained by applying GTh and OTh to the Ïgn images. In the ÏGn row, most of the images show an unperturbed segmented background; only the onion image was highly affected in background and object due to AWGN and lighting noise. In the ÏOn row, the onion and the caribbean chili were affected by pixels that contaminate the background and by small shadows that are not part of the object. Overall, Fig. 8b shows a behavior similar to the e1 case: these results are better than those shown in Fig. 8a.

IMPROVEMENT EVALUATION
To measure the results of the experimental tests e1 and e2 quantitatively, the Dice and Jaccard metrics were applied, using the ground-truth images shown in Fig. 6 as reference. From these similarity measurements, four comparative tables of the e1 and e2 results were obtained.
For this analysis, the symbol ρ represents the improvement given by the absolute difference between the error percentages of the JSI and DSC results, where % error = (V_o − V_r)/V_r × 100, V_o is the value obtained from JSI or DSC, and V_r = 1 is the valid value taken as the true reference. The improvement values ρ represent the comparative improvement differences of ÏG vs. İG, ÏO vs. İO, ÏGn vs. İGn, and ÏOn vs. İOn.
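A small sketch of this improvement measure, under the reading that ρ is the absolute difference between the two error percentages against the ideal reference V_r = 1 (the function and variable names here are illustrative, not the authors'):

```python
def pct_error(v_obtained, v_ref=1.0):
    """Percentage error of a metric value against the ideal reference."""
    return abs(v_obtained - v_ref) / v_ref * 100.0

def improvement(v_baseline, v_proposed, v_ref=1.0):
    """rho: absolute difference between the error percentages of the
    baseline (luminance) and proposed (Blue-channel) results."""
    return abs(pct_error(v_baseline, v_ref) - pct_error(v_proposed, v_ref))

# Example: a Dice of 0.45 for the luminance image vs 0.95 for the
# Blue-channel image gives errors of 55% and 5%, i.e. rho = 50.
rho = improvement(0.45, 0.95)
```

With this convention, a large ρ means the proposed pipeline moved the metric much closer to the perfect value of 1, while a small ρ only means the two pipelines erred by similar amounts, good or bad alike.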
Table 1 compares the Dice and Jaccard results for the İG and ÏG images obtained in the experimental test e1. The results in the ÏG column are clearly better than those in the İG column in most cases, since they are closer to one and are interpreted as good segmentation. In addition, there is a percentage improvement ρ1 greater than 73% in most of the results, except for the bell pepper, xcatik chili, and pear, whose improvement differences ρ1 are very small, less than 7%; these are nevertheless good results because their metric values are very close to one. Table 2 compares the İO and ÏO results obtained with the Dice and Jaccard metrics. The ÏO values are better than the İO results, almost reaching the perfect value. The caribbean chili values show the smallest improvement difference ρ2 using OTh. The onion and potato 2 show an acceptable improvement difference ρ2, with a mean of 20.58% for Dice and 29.83% for Jaccard. The apple and xcatik chili show a ρ2 improvement difference of more than 50%. The best results are obtained with potato 1, bell pepper, and pear, with an improvement difference ρ2 greater than 70%. This is reflected in the second row of images in Figs. 7a and 7b.
For the second experiment e2, Table 3 collects the results obtained with the Dice and Jaccard metrics. As in the experimental test e1, the DSC and JSI results for the ÏGn images are significantly better than those for the İGn images. There is a percentage improvement difference ρ3 of more than 75% for the caribbean chili, apple, potato 1, and potato 2. For the xcatik chili and pear, the improvement differences ρ3 are small, but the segmentation quality is better because the DSC and JSI values of ÏGn and İGn are close to one. The onion presents a very small ρ3 improvement difference and low segmentation quality, with values close to zero. This case corroborates that most of the AWGN and bad-lighting noise in the onion image could not be eliminated, as can be seen in the ÏGn and İGn rows of Fig. 8.
For its part, Table 4 shows that the results obtained from ÏOn are better than those processed with İOn. Most of the Dice and Jaccard results for ÏOn are near 0.9, which is considered good segmentation; in contrast, the İOn results average 0.6, which is considered poor segmentation. Regarding the improvement difference ρ4, 73% was achieved for potato 1, bell pepper, and pear with respect to Dice. The apple and xcatik chili obtained a quite acceptable ρ4 improvement difference of more than 58%. For potato 2, ρ4 amounts to a 44% enhancement; this is still a good result because the ÏOn values are high, i.e., a good segmentation. The onion presents a small ρ4 improvement difference, and its results are not suitable. The caribbean chili presents a ρ4 improvement difference of more than 61%; despite this, its ÏOn values are only fair, so an amount of pixels remains contaminating the background and the object, as shown in the corresponding İOn and ÏOn images of Fig. 8. Table 5 shows the best results obtained from experiments e1 and e2. In the comparison of the ÏG and ÏO results of the experimental test e1, small differences can be observed between the best results, which are equally distributed. Although most Dice and Jaccard results achieved an average value greater than 0.9, the onion presents the lowest values in the e1 comparison. In the experimental test e2, most of the results are good for Dice and Jaccard, with a small advantage for the ÏOn results. As in test e1, the worst results were obtained by processing the onion image; here, the ÏOn metrics were only fair and the ÏGn images were very poor.

CONCLUSIONS
The results of the proposed experimental tests show that images with higher contrast can obtain a better segmentation despite lighting problems. This argument is supported with graphic and numerical evidence: most of the results show an average percentage improvement difference greater than 45.5% in the two experimental tests. A small percentage improvement difference by itself does not indicate whether a result is good or bad; this rating depends on the range in which the values obtained by the Dice and Jaccard metrics are located.
Based on the Dice and Jaccard metrics in test e1, both modalities of Otsu's method obtained the same number of best results; that is, the best segmentation varies according to the modality of Otsu's method and the characteristics of the image. In the ÏGn and ÏOn results of test e2, the Dice and Jaccard metrics clearly show that the otsuthresh modality obtained the most improved results.
In general, the results of this experimental procedure were satisfactory, but it would be necessary to work with different image databases and with AWGN variations. Furthermore, for future work, we intend to compare the proposed strategy with modern segmentation algorithms, such as those based on artificial intelligence.

Fig. 3 :
Fig. 3: a) Schematic diagram of image capture using a webcam, natural lighting, and an expanded polystyrene base. b) An actual picture of the test image capture scenario with the characteristics specified in the previous diagram.
These ground-truth samples (Fig. 6) were manually segmented and approved by the experience of different professionals dedicated to digital image processing, such as Morales-Mendoza et al. (2012), Lopez-Ramirez et al. (2020), and Gonzalez-Lee et al. (2021).

Fig. 7 :
Fig. 7: Results obtained from the e1 experimental test applying GTh and OTh to the İg and Ïg images. (a) İG and İO images, (b) ÏG and ÏO images.

Fig. 8 :
Fig. 8: Results obtained from the e2 experimental test applying GTh and OTh to the İgn and Ïgn images. (a) İGn and İOn images, (b) ÏGn and ÏOn images.

Table 1 :
Comparison of results obtained by the JSI and DSC metrics applied to the rows of resulting binary images İG and ÏG of Fig. 7 in the experimental test e1. The ρ1 results represent the improvement in segmentation obtained between ÏG and İG based on the error percentages of the JSI and DSC values.

Table 2 :
Comparison of results obtained by the JSI and DSC metrics applied to the rows of resulting binary images İO and ÏO of Fig. 7 in the experimental test e1. The ρ2 results represent the improvement in segmentation obtained between ÏO and İO based on the error percentages of the JSI and DSC values.

Table 3 :
Comparison of results obtained by the JSI and DSC metrics applied to the rows of resulting binary images İGn and ÏGn of Fig. 8 in the experimental test e2. The ρ3 results represent the improvement in segmentation obtained between ÏGn and İGn based on the error percentages of the JSI and DSC values.

Table 4 :
Comparison of results obtained by the JSI and DSC metrics applied to the rows of resulting binary images İOn and ÏOn of Fig. 8 in the experimental test e2. The ρ4 results represent the improvement in segmentation obtained between ÏOn and İOn based on the error percentages of the JSI and DSC values.

Table 5 :
Comparison between the best results of the two experimental tests, e1 and e2. First, the comparison of the ÏG vs. ÏO results using the DSC and JSI metrics; then, the second comparison, ÏGn vs. ÏOn, with the same metrics.