Thursday, June 20, 2019

Sewing Machine Combined with a Camera

The textile specimens we consider have a size of 1 m × 1 m. At the same time, the spatial accuracy of the detected stitch positions needs to be on the order of 50 µm to account for very thin threads. In a real-world scenario, both requirements can only be met by using a commercially available camera that sequentially scans individual parts of the specimen. Afterwards, all of the acquired tiles are composed into a unified, large RGB image, C(x, y) = (R(x, y), G(x, y), B(x, y))^T, where (x, y) denotes the pixel position. Figure 2 shows the real-world system for image acquisition and quality inspection. A camera is mounted on a gantry robot, allowing it to be translated automatically in 3D space. The specimen to inspect is placed on the floor below. The camera has a sensor size of 2332 × 1752 pixels. With the employed lens, this translates to a resolution of 8 pix/mm. However, the pixel resolution can be changed dynamically by decreasing or increasing the camera height above the conveyor plate using the gantry robot. Once the measurement has started, the specimen is scanned at equidistant intervals, resulting in a set of tile images which need to be composed. The image composition is performed using standard image registration techniques [8].
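From the stated numbers (1 m × 1 m specimen, 8 pix/mm resolution, 2332 × 1752 px sensor), the required number of scan tiles can be estimated. A small sketch; the tile overlap fraction for registration is an assumed value for illustration, not from the text:

```python
import math

SPECIMEN_MM = 1000                # 1 m x 1 m specimen side length
RES_PIX_PER_MM = 8                # stated pixel resolution
SENSOR_W, SENSOR_H = 2332, 1752   # sensor size in pixels
OVERLAP = 0.2                     # assumed tile overlap for registration

# Field of view of a single tile in millimetres.
fov_w = SENSOR_W / RES_PIX_PER_MM   # 291.5 mm
fov_h = SENSOR_H / RES_PIX_PER_MM   # 219.0 mm

# Effective step between tile positions after subtracting the overlap.
step_w = fov_w * (1 - OVERLAP)
step_h = fov_h * (1 - OVERLAP)

tiles_x = math.ceil(SPECIMEN_MM / step_w)
tiles_y = math.ceil(SPECIMEN_MM / step_h)
print(tiles_x, tiles_y, tiles_x * tiles_y)
```

With these assumed values the gantry would visit a grid of roughly 5 × 6 positions; the real scan pattern depends on the overlap actually chosen for registration.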

An image processing pipeline was designed for the thread detection. At all times, knowledge about the desired thread pattern model is available.

A. Two Partial Luminance Images

The composed camera RGB image is converted to a single-channel luminance image

L(x, y) = 0.3 R(x, y) + 0.59 G(x, y) + 0.11 B(x, y). (1)

The conversion is based on the assumption that the thread appears either brighter or darker than the tissue background. However, it is unknown in advance which appearance is given. Therefore, two partial images are generated. The positive image I+(x, y) only contains pixels brighter than the tissue mean, whereas the negative image I−(x, y) only contains pixels darker than the mean:

I+(x, y) = max(0, L(x, y) − m) (2)
I−(x, y) = max(0, m − L(x, y)) (3)
m = mean(L(x, y)). (4)

By separation into partial images, the pixels representing the thread will only be visible in one of them, and they will always appear as a bright structure. Next to the actual thread pixels, there will also be spurious pixels from noisy tissue structures. They resemble thread-like structure parts and are stochastically distributed.
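The luminance conversion and the split into positive and negative partial images, Eqs. (1)–(4), can be sketched in a few lines of NumPy (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def partial_images(rgb):
    """Split an RGB image into positive/negative partial luminance images.

    rgb: float array of shape (H, W, 3) with channels (R, G, B).
    """
    # Eq. (1): weighted luminance conversion.
    L = 0.3 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]
    m = L.mean()                          # Eq. (4): global luminance mean
    I_pos = np.maximum(0.0, L - m)        # Eq. (2): pixels brighter than mean
    I_neg = np.maximum(0.0, m - L)        # Eq. (3): pixels darker than mean
    return I_pos, I_neg

# Toy example: a bright horizontal "thread" on a darker background.
img = np.full((8, 8, 3), 0.2)
img[4, :, :] = 0.9
I_pos, I_neg = partial_images(img)
```

In this toy case the thread row survives only in `I_pos`, while in `I_neg` it is zeroed out, matching the statement that the thread is visible in exactly one partial image.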
B. Frangi Filtering

A Frangi filter [3] is applied to both partial images. The filter operation basically consists of a pixel-wise computation of the Hessian matrix after smoothing with a Gaussian-shaped kernel. It is also known as a vesselness filter and was originally introduced in the context of medical image analysis to emphasize pixels that are embedded in vessel-like structures. However, an elongated and thin appearance is characteristic not only of vessels inside the human body but also of the thread considered in this work. It is therefore natural to adopt the established methodology for the given task. The result is two images, IFr+ and IFr−, with each pixel value containing the probability that the pixel is embedded in a thread-like structure.
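A minimal single-scale sketch of the vesselness idea behind the Frangi filter: a Gaussian-derivative Hessian, eigenvalue analysis, and the blobness/structureness response. This is a simplification of the actual filter, which sweeps multiple scales and normalises the derivatives; the parameter values here are assumed for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness(image, sigma=1.5, beta=0.5, c=0.1):
    """Single-scale Frangi-style vesselness for bright ridges (sketch only)."""
    # Hessian entries via Gaussian derivative filters (per-axis `order`).
    Hxx = gaussian_filter(image, sigma, order=(0, 2))
    Hyy = gaussian_filter(image, sigma, order=(2, 0))
    Hxy = gaussian_filter(image, sigma, order=(1, 1))

    # Eigenvalues of the 2x2 Hessian, sorted so that |l1| <= |l2|.
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy ** 2)
    l1 = 0.5 * (Hxx + Hyy + tmp)
    l2 = 0.5 * (Hxx + Hyy - tmp)
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)

    # Blobness ratio and second-order structure strength.
    Rb = np.abs(l1) / (np.abs(l2) + 1e-12)
    S = np.sqrt(l1 ** 2 + l2 ** 2)
    v = np.exp(-Rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-S ** 2 / (2 * c ** 2)))
    # Bright elongated structures have a strongly negative lambda2.
    return np.where(l2 < 0, v, 0.0)
```

On a synthetic bright line this response peaks on the line and vanishes on the flat background, which is exactly the property exploited for the thread.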
C. Selection of the Thread Image

Based on the supplied model prior, an approximate number of expected thread pixels can be estimated. This expectation can be turned into a thresholding operation that is performed on both images. Since one of the filtered images contains both the thread and background noise, while the other image only contains background noise, a robust detection of the thread image is straightforward. The result is a single binary image, wFr(x, y), with a pixel value of 1 indicating a thread pixel.
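One way the selection step could look, assuming the expected pixel count sets a rank-based threshold and the partial image with the larger surviving response mass is taken as the thread image (the exact decision rule is not specified in the text):

```python
import numpy as np

def select_thread_image(IFr_pos, IFr_neg, n_expected):
    """Pick which Frangi-filtered partial image contains the thread (sketch)."""
    masks, scores = [], []
    for I in (IFr_pos, IFr_neg):
        flat = I.ravel()
        # Threshold at the value of the n_expected-th strongest response.
        kth = flat.size - n_expected
        t = np.partition(flat, kth)[kth]
        mask = I >= max(t, 1e-12)
        masks.append(mask)
        # Total filter response above threshold: large for the thread image,
        # small for the noise-only image.
        scores.append(I[mask].sum())
    best = int(np.argmax(scores))
    return masks[best].astype(np.uint8), best
```

The returned binary mask plays the role of wFr(x, y); `best` records whether the positive or the negative partial image carried the thread.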
D. Classification of Pure Thread Pixels

The previous step provides a mask for pixels that are embedded in a thread-like, i.e. elongated and thin, structure. Yet, pixels lying not directly on the thread but nearby may be included. Therefore, the mask is refined using an expectation-maximization (EM) algorithm [1]. The refinement is no longer performed on the luminance image but on the RGB image. As initialization, the tissue RGB values from the background removal step are taken for the tissue mean and covariance values. The thread mean and covariance values are derived from all pixels masked by wFr. The iterative EM algorithm results in a single binary image, wges(x, y), with a pixel value of 1 denoting a pure thread pixel.
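A compact sketch of a two-class Gaussian EM refinement in RGB space. The paper initialises the tissue statistics from a background-removal step that is not shown here; this sketch initialises both classes from the Frangi mask and its complement instead:

```python
import numpy as np

def em_refine(rgb, wfr, iters=10):
    """Refine the thread mask with two full-covariance Gaussians (sketch)."""
    X = rgb.reshape(-1, 3)
    resp = wfr.reshape(-1).astype(float)          # P(thread | pixel)

    def gauss_logpdf(X, mu, cov):
        d = X - mu
        cov = cov + 1e-6 * np.eye(3)              # regularise for invertibility
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        q = np.einsum('ij,jk,ik->i', d, inv, d)   # Mahalanobis distances
        return -0.5 * (q + logdet + 3 * np.log(2 * np.pi))

    for _ in range(iters):
        # M-step: class weight, mean and covariance from responsibilities.
        params = []
        for r in (resp, 1.0 - resp):
            w = r.sum() + 1e-12
            mu = (r[:, None] * X).sum(0) / w
            d = X - mu
            cov = (r[:, None] * d).T @ d / w
            params.append((w / len(X), mu, cov))
        # E-step: posterior probability of the thread class per pixel.
        log_t = np.log(params[0][0] + 1e-12) + gauss_logpdf(X, *params[0][1:])
        log_b = np.log(params[1][0] + 1e-12) + gauss_logpdf(X, *params[1][1:])
        delta = np.clip(log_b - log_t, -60.0, 60.0)
        resp = 1.0 / (1.0 + np.exp(delta))

    return (resp > 0.5).reshape(wfr.shape).astype(np.uint8)
```

On a toy image where a spurious background pixel was caught by the elongation mask, the EM iterations reassign it to the tissue class, leaving only pure thread pixels in the final mask, as described for wges(x, y).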
