Automated Glass Fracture Analysis Using Vision and LabVIEW 8.2

Peter Andrew Bennett


Abstract

Glass produced for use in vehicles must undergo a wide range of quality checks to ensure that it is safe to use. Within Europe, the ECE regulations (GRSG 2007) govern which tests are conducted and the criteria which must be met for a glass batch to be deemed acceptable. One such test is the fragmentation test, which involves fracturing a glass panel using an impact punch and then obtaining information about the count and size of the glass fragments in the resulting fragmentation pattern. This report describes the design, implementation and testing of vision software created in LabVIEW which performs fragmentation analysis automatically. Existing methods detailed by other authors are investigated and implemented in the software to produce an effective end product.

Introduction

The fragmentation test involves fracturing a toughened safety glass panel by applying a force at one of several predefined impact points; the resultant fracture pattern can then be observed and the characteristics of the individual glass fragments recorded. Information such as the particle (fragment) count within a fixed-size square region, or the length and area of individual particles, can be recorded and used to determine if that batch of glass is safe for use. The Economic Commission for Europe regulations state that the minimum particle count in any five-by-five-centimetre square is forty, and that there should be no elongated particles longer than 7.5 centimetres (GRSG 2007).

This report details the production of software which automates the counting and measuring stage of the fragmentation test. Automating this process is desirable as it not only improves the accuracy of the results by eliminating human error, but also speeds up the process considerably. To test the software, 8-bit greyscale images of fractured, toughened glass acquired from a line-scan camera are used. The software was written in LabVIEW 8.2 because its graphical programming interface can greatly reduce development time, and because it can call external code written in another language such as C.

Image Segmentation

The purpose of segmentation is to partition the image into meaningful regions based on properties such as pixel intensity, so that the image is easier to analyse. The most basic method of segmentation is known as thresholding, which simply assigns pixels to either the object or background set according to each pixel’s intensity (Jain, Kasturi and Schunck 1995).

In a typical fragmentation sample, such as that shown in figure 1, pixels belonging to glass fragments have a higher intensity than those belonging to the fracture lines. An effective segmentation algorithm would classify the glass fragments (the lighter grey parts of the image) as objects, and the darker boundary lines as background. A human can perform this task easily by inspection; it is harder to get a machine to perform the same task reliably across a range of images.


Figure 1 – Fragmentation Sample

Other publications discussing fragmentation analysis (Haywood et al. 1997; Xueqin and Xiaohong 2007; Gordon 1996) use some form of edge detection operator, such as the Sobel operator, to highlight the fragment boundaries before applying a standard threshold. It was found, however, that the images used in this study were of sufficiently high quality to apply an adaptive threshold algorithm directly, with no other image processing beforehand. The adaptive threshold used is Niblack's algorithm (He et al. 2005), which uses the mean and standard deviation of a square region centred on each pixel to determine the threshold value at that point.



Figure 2 – Threshold using Niblack’s algorithm

Left: Original image, Centre: Threshold, Right: Cleaned image

After Niblack’s algorithm is applied, small objects are removed using a small object filter and holes within fragments are filled using a hole fill operation. The quality of the threshold is generally good; however, in many cases some fragments remain connected to their neighbours where the fracture lines were faint in the original image. The threshold and refinement process is shown in figure 2.

Fragment Refinement

Pixels in the original image may be incorrectly identified as object or background by the segmentation algorithm because of image noise, or other effects such as contrast variation. Invariably there will be situations where the crack lines of the original image are not correctly identified and are set to ‘object’ rather than ‘background’ pixels, so that the fracture line separating two fragments is missing or incomplete. Broken fracture lines in the segmented image are undesirable: the fragments they should separate merge into a single region, so two or more fragments are counted as one and appear abnormally large when size measurements are taken, affecting the results negatively.

One method of separating joined objects in an image is to make use of the distance transform and watershed transform; this is the approach used by both Xueqin and Xiaohong (2007) and Gordon (1996). Initially the distance transform of the thresholded image is computed; this assigns each pixel in the image a value based on its distance from the edge of the object to which it belongs (citation needed).
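The distance transform can be computed in two raster sweeps with the classic chamfer method. The sketch below uses the city-block (4-neighbour) metric; it is a standard textbook formulation rather than the exact routine used in this project.

```c
/* City-block distance transform via the classic two-pass chamfer sweep.
   Object pixels (non-zero in `bin`) receive their distance to the nearest
   background pixel; background pixels receive 0.
   A sketch, not necessarily the algorithm used in the report. */
void distance_transform(const unsigned char *bin, int *dist,
                        int width, int height)
{
    const int INF = width + height;   /* larger than any possible distance */

    /* Forward pass: propagate distances from the top-left corner. */
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            int i = y * width + x;
            if (!bin[i]) { dist[i] = 0; continue; }
            int d = INF;
            if (y > 0 && dist[i - width] + 1 < d) d = dist[i - width] + 1;
            if (x > 0 && dist[i - 1] + 1 < d) d = dist[i - 1] + 1;
            dist[i] = d;
        }

    /* Backward pass: propagate from the bottom-right, keep the minimum. */
    for (int y = height - 1; y >= 0; y--)
        for (int x = width - 1; x >= 0; x--) {
            int i = y * width + x;
            if (y < height - 1 && dist[i + width] + 1 < dist[i])
                dist[i] = dist[i + width] + 1;
            if (x < width - 1 && dist[i + 1] + 1 < dist[i])
                dist[i] = dist[i + 1] + 1;
        }
}
```

Two sweeps over the image suffice because any shortest path to the background can be decomposed into a "down-right" and an "up-left" portion, each handled by one pass.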



Figure 3 - Distance transform of multiple images

Left: Transform of convex object, Centre: Transform of concave object, Right: Transform of figure 2.

Any region within an object which is surrounded only by pixels of lower intensity, known as a regional maximum (citation needed), represents one of the points in that object furthest from the object boundary. One useful property of the distance transform is that a convex object, such as that pictured on the left of figure 3, has only one regional maximum, whereas a concave object, such as that shown in the centre of figure 3, can have several. This property is useful because incorrectly joined fragments are generally concave in shape, whereas correct fragments are generally convex.
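Regional maxima can be found by flooding each connected plateau of equal value and checking whether any neighbour outside the plateau is higher. The sketch below (4-connectivity, integer images such as a distance transform) illustrates the idea; it is not taken from the project code.

```c
#include <stdlib.h>

/* Mark the regional maxima of an integer image: connected plateaus of
   equal value whose every outside neighbour is strictly lower. Each
   plateau is flooded breadth-first (4-connectivity). Sketch only. */
void regional_maxima(const int *img, unsigned char *max_mask,
                     int width, int height)
{
    int n = width * height;
    unsigned char *seen = calloc(n, 1);
    int *queue = malloc(n * sizeof(int));
    int *plateau = malloc(n * sizeof(int));

    for (int start = 0; start < n; start++) {
        if (seen[start]) continue;
        int head = 0, tail = 0, psize = 0, is_max = 1;
        int v = img[start];
        queue[tail++] = start; seen[start] = 1;
        while (head < tail) {
            int i = queue[head++];
            plateau[psize++] = i;
            int x = i % width, y = i / width;
            int nb[4] = { i - 1, i + 1, i - width, i + width };
            int ok[4] = { x > 0, x < width - 1, y > 0, y < height - 1 };
            for (int k = 0; k < 4; k++) {
                if (!ok[k]) continue;
                if (img[nb[k]] > v) is_max = 0;   /* higher neighbour: not a maximum */
                else if (img[nb[k]] == v && !seen[nb[k]]) {
                    seen[nb[k]] = 1;
                    queue[tail++] = nb[k];        /* equal height: extend plateau */
                }
            }
        }
        for (int k = 0; k < psize; k++)
            max_mask[plateau[k]] = is_max ? 255 : 0;
    }
    free(seen); free(queue); free(plateau);
}
```

Applied to the distance transform, a convex fragment yields one marked plateau, while an incorrectly joined (concave) fragment yields two or more, which is the cue used to split it.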

The next step of the process involves a second algorithm known as the watershed function (Roerdink and Meijster 2001; Vincent and Soille 1991; Vincent and Soille 1990; Hahn and Peitgen 2003). The watershed function is an iterative process which expands objects in an image until they meet other, also expanding, objects. A single-pixel gap, known as the watershed line, is left between object regions as they expand. When the watershed function is applied to the distance transform, the maximal regions of the image form their own influence zones which gradually expand until other regions are met. The result is the generation of watershed lines inside any object which contains two or more regional maxima; these lines are usually a good approximation of where incorrectly joined objects must be split to recover the original objects.



Figure 4 – Fragment refinement procedure.

Left: Original thresholded image, Centre: Distance transform, Right: Watershed transform

Many incorrect splits are also formed during the process, because some objects have a small degree of concavity and therefore multiple regional maxima. Some tolerance will therefore need to be added to the process to prevent such objects from being split. This issue is a common problem with the watershed transform and is referred to as oversegmentation (Eddins).



Figure 5 – Oversegmentation of a valid fragment

Left: Distance transform coloured to show contours clearly.

Right: Incorrect splits generated by watershed transform due to multiple regional maxima.

To correct the oversegmentation issue, greyscale reconstruction was used. Greyscale reconstruction merges any regional maxima within an object that are not separated by a height of at least K; K can therefore be set to add a level of tolerance which prevents valid objects from being split yet still allows separation of objects which must be split. Considering the distance transform as a topographical surface, greyscale reconstruction merges similar peaks to form a plateau, while peaks with a sufficient height separation are left distinct. The efficient greyscale reconstruction algorithm of Vincent (1993) was used to achieve this.
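One way to realise this tolerance is the h-maxima construction: subtract K from the distance transform and reconstruct the result under the original by repeated geodesic dilation, which suppresses any maximum shallower than K. The naive iterative sketch below illustrates the definition only; Vincent (1993) gives far more efficient queue-based algorithms.

```c
/* Greyscale reconstruction of `marker` under `mask` by iterative geodesic
   dilation (4-connectivity): repeatedly dilate the marker and clip it to
   the mask until nothing changes. Precondition: marker[i] <= mask[i].
   Setting marker = (distance transform) - K and mask = (distance
   transform) gives the h-maxima transform, merging peaks whose
   separating dip is shallower than K. Naive O(n * iterations) sketch. */
void grey_reconstruct(int *marker, const int *mask, int width, int height)
{
    int changed = 1;
    while (changed) {
        changed = 0;
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++) {
                int i = y * width + x;
                int m = marker[i];
                if (x > 0 && marker[i - 1] > m) m = marker[i - 1];
                if (x < width - 1 && marker[i + 1] > m) m = marker[i + 1];
                if (y > 0 && marker[i - width] > m) m = marker[i - width];
                if (y < height - 1 && marker[i + width] > m) m = marker[i + width];
                if (m > mask[i]) m = mask[i];  /* geodesic: never exceed mask */
                if (m > marker[i]) { marker[i] = m; changed = 1; }
            }
    }
}
```

After reconstruction, peaks whose separating dip is shallower than K collapse onto a single plateau (one regional maximum, so no split), while peaks separated by a deeper dip remain distinct.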

Figure 6 shows the effects of greyscale reconstruction with a K value of 2; notice that the objects which were incorrectly split in figure 4 are now intact.



Figure 6 – Greyscale reconstruction used to prevent oversegmentation.

Left: Distance transform coloured to show contours after the application of greyscale reconstruction.

Right: Objects are split correctly and no oversegmentation occurs.


Software Structure and LabVIEW

The vision algorithms described above, together with several supporting routines, were written in C and built into a Dynamic Link Library (DLL). A LabVIEW application was then created with the ability to load and view images, as well as use the vision functions in the DLL to modify the image data. The figure below shows the software with a fragmentation sample loaded.



Figure 7 – The LabVIEW software displaying a fragmentation sample.

In order to analyse a sample such as the one above, the software is first used to apply the image processing steps described previously – these steps convert the source greyscale image into binary form and then refine the resultant fracture pattern. When the image has been prepared it is then analysed to obtain area and count information.

Area information is obtained as follows: every object in the refined image is labelled using an object labelling algorithm (the labelling algorithm used in this project is the contour tracing algorithm proposed by Chang, Chen and Lu (2004)). Once every object is uniquely labelled, the number of pixels belonging to each label group is counted, giving the area of every object in the image in pixels.
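For illustration, the labelling and area measurement can be sketched with a simple queue-based flood fill; the contour-tracing algorithm of Chang, Chen and Lu (2004) used in the project is faster, but yields equivalent label and area information.

```c
#include <stdlib.h>

/* Label the 4-connected components of a binary image and accumulate the
   pixel area of each label. Simple queue-based flood fill, standing in
   for the contour-tracing labeller used in the project. Returns the
   number of objects; labels[i] is 0 for background, 1..count for
   objects; areas[L] is the pixel area of label L. */
int label_and_measure(const unsigned char *bin, int *labels, int *areas,
                      int max_labels, int width, int height)
{
    int n = width * height, count = 0;
    int *queue = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) labels[i] = 0;

    for (int start = 0; start < n; start++) {
        if (!bin[start] || labels[start]) continue;
        if (count + 1 >= max_labels) break;      /* out of label space */
        int lab = ++count, head = 0, tail = 0;
        areas[lab] = 0;
        labels[start] = lab;
        queue[tail++] = start;
        while (head < tail) {
            int i = queue[head++];
            areas[lab]++;                        /* one more pixel in object */
            int x = i % width, y = i / width;
            int nb[4] = { i - 1, i + 1, i - width, i + width };
            int ok[4] = { x > 0, x < width - 1, y > 0, y < height - 1 };
            for (int k = 0; k < 4; k++)
                if (ok[k] && bin[nb[k]] && !labels[nb[k]]) {
                    labels[nb[k]] = lab;
                    queue[tail++] = nb[k];
                }
        }
    }
    free(queue);
    return count;
}
```

A fragment's area in pixels can then be converted to physical units from the line-scan camera's known resolution, and compared against the 7.5 cm elongated-particle limit.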

Count information is obtained by labelling the image, as above, and then sliding a square window of user specified size through the image in raster order. For every position in the image that this window assumes, the number of unique labels within is found in order to obtain the number of objects enclosed by the window at that position. When the entire image has been scanned, there will be a ‘count’ value associated with every pixel in the image; this corresponds to the number of objects contained by a square window placed at that position.
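A direct (and, as noted in the conclusion, slow) realisation of this scan simply re-examines every pixel of every window position. The sketch below shows that baseline; a production version would update the counts incrementally as the window slides.

```c
#include <stdlib.h>
#include <string.h>

/* For every position of a w x w window (top-left corner at (x, y)),
   count the distinct object labels inside it. Naive rescan of each
   window: O(width * height * w * w), which is why counting dominates
   the analysis time. counts must hold
   (width - w + 1) * (height - w + 1) entries. Sketch only. */
void window_counts(const int *labels, int max_label, int *counts,
                   int width, int height, int w)
{
    int cw = width - w + 1, ch = height - w + 1;
    unsigned char *seen = malloc(max_label + 1);

    for (int y = 0; y < ch; y++)
        for (int x = 0; x < cw; x++) {
            memset(seen, 0, max_label + 1);      /* forget previous window */
            int c = 0;
            for (int dy = 0; dy < w; dy++)
                for (int dx = 0; dx < w; dx++) {
                    int lab = labels[(y + dy) * width + (x + dx)];
                    if (lab && !seen[lab]) { seen[lab] = 1; c++; }
                }
            counts[y * cw + x] = c;
        }
    free(seen);
}
```

With the window sized to cover five by five centimetres of glass, any position whose count falls below forty indicates a failing region under the ECE criterion.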

To present the information in a meaningful manner, two reporting methods were used. The first is a pair of histograms showing the count and area distributions of fragments across the entire image; these can be used to determine whether any results fall outside the acceptable range, and therefore whether a batch of glass passes or fails the test. The second method involves the generation of two new images. The first is the refined image with each object coloured according to its area, allowing the operator to easily identify the largest or smallest objects. The second is the refined image with every pixel coloured according to the number of objects found in the window at that position, allowing the operator to easily see where the highest and lowest concentrations of glass fragments occur on the fragmentation pattern.



Figure 8 – The analysis software results screen showing colouration by size (far left), colouration by local count (left), object area histogram (upper right) and object count histogram (lower right).

Conclusion

The software has been tested across a range of images and has been able to identify which regions of a fractured glass sample contain the greatest or least number of fragments. The results views, which show colouration by area or local count, allow the user to rapidly identify problem areas on a glass sample and determine which regions cause a sample to fail the test. If used to support manual counting, the software could save the operator a great amount of time by showing which parts of the panel to focus on to find the largest or smallest fragment counts.

The image processing steps used prior to the analysis stage produce good results for most images; however, if an image contains excessive noise or obstructions (such as defroster cables), the results are poor. The fracture line completion technique proposed by Haywood et al. (1997) could be used instead of the distance transform and watershed approach; it takes a different route and may yield better results even for poor images. Analysis of an image may also take a long time because of the way the fragment count information is gathered; this could be remedied by introducing a more intelligent counting technique rather than rescanning every window position in the image from scratch.

Acknowledgements

The author would like to thank Mr Simon Aldred from the Pilkington European Research Centre for providing the glass fragmentation images used in this project. The author would also like to thank Dr Alexei Nabok and Dr Aseel Hassan from Sheffield Hallam University for the support they provided with this project.

References

Chang, Fu, Chen, Chun-Jen and Lu, Chi-Jen (2004). A linear-time component-labeling algorithm using contour tracing technique. Computer vision and image understanding, 93 (2), 206-220.

Eddins, Steve, The watershed transform - strategies for image segmentation [online]. Last accessed on March 28th 2008 at:
http://www.mathworks.co.uk/company/newsletters/news_notes/win02/watershed.html.

Gordon, G. G. (1996). Automated glass fragmentation analysis. In: Proceedings of the SPIE, Machine Vision Applications in Industrial Inspection IV, February 1996. 244-252.

GRSG, (2007). Proposal to develop a global technical regulation concerning safety glazing materials for motor vehicles and motor vehicle equipment. GRSG-93-25. [online]. Last accessed on 1st May 2008 at: http://www.unece.org/trans/doc/2007/wp29grsg/ECE-TRANS-WP29-GRSG-93-inf25e.pdf.

Hahn, Horst K. and Peitgen, Heinz-Otto (2003). IWT - interactive watershed transform: A hierarchical method for efficient interactive and automated segmentation of multidimensional grayscale images. In: Proc. Medical Imaging, SPIE. February 2003. USA, SPIE, 643-653.

Haywood, J., et al. (1997). Automated counting system for the ECE fragmentation test. In: Glass Processing Days, 13-15 September, 1997. 366-370.

He, J., et al. (2005). A comparison of binarization methods for historical archive documents. In: Proceedings of the 2005 Eight International Conference on Document Analysis and Recognition, 2005. IEEE, 538-542.

Jain, Ramesh, Kasturi, Rangachar and Schunck, Brian G. (1995). Machine vision. McGraw-Hill International Editions.

Roerdink, J. B. T. M. and Meijster, Arnold (2001). The watershed transform: Definitions, algorithms and parallelization strategies. Fundamenta Informaticae, IOS Press, 187-228.

Vincent, Luc (1993). Morphological grayscale reconstruction in image analysis: Applications and efficient algorithms. IEEE transactions on image processing, 2 (2), 176-201.

Vincent, Luc and Soille, Pierre (1991). Watersheds in digital spaces: An efficient algorithm based on immersion simulations. In: IEEE Transactions on Pattern Analysis and Machine Intelligence, June 1991. IEEE, 583-598.

Vincent, Luc and Soille, Pierre (1990). Determining watersheds in digital pictures via flooding simulations. In: SPIE, Visual Communications and Image Processing, 1990. SPIE, 240-250.

Xueqin, Zhou and Xiaohong, Liu (2007). The detection and recognition algorithm of safety glass fragment. In: Mechatronics and Automation, 2007. ICMA 2007. International Conference on, August, 2007. IEEE, 963-967.