Many clients call us about doing measurements on grey scale data, but want to use a color machine vision industrial camera because they want the operator or client to see a more 'realistic' picture. For instance, if you are looking at PCBs and need to read characters with good precision, but also need to see the colors on a ribbon cable, you are forced to use a color camera. In these applications, you could take a monochrome image out of the color sensor for processing, and use the color image for cataloging and visualization. But the question is, how much data is lost by using a color camera in mono mode?

First, the user must understand how a color camera works and how it gets its picture. Non-3-CCD cameras use a Bayer filter, which is a matrix of red, green, and blue filters over the pixels. For each group of 4 pixels, there are 2 green, 1 red, and 1 blue. (The eye is most sensitive to green, so green gets more pixels to simulate that response.)

To get a color image out, each output pixel is computed as a weighted sum of its nearest-neighbor pixels, a process known as Bayer interpolation. The accuracy of the color on these cameras is a result of what the original image was and of how the camera's algorithms interpolated the set of red, green, and blue values for each pixel.

To get monochrome out, one technique is to break the image down into Hue, Saturation, and Intensity, and take the intensity as the grey scale value. The quality of the output depends on the original image and on the algorithms used to compute it. A checkerboard pattern, for example, will give an algorithm a hard time, as the image flips between grey scale values of 0 and 255 at every pixel (assuming the checkerboard lines up with the pixel grid). Since the output of each pixel is based on its nearest neighbors, you could be replacing a black pixel with 4 white ones! On the other hand, if we had an image with a ramp of pixel values, in other words each pixel is, say, 1 value less than the one next to it, the average of the nearest neighbors would be very close to the pixel it replaces.

What does all this mean in real-world applications? Let's take a look at two images, both from the same brand of camera: one using the 5MP Sony Pregius IMX250 monochrome sensor, the other using the color version of the same sensor. The images were taken with the same exposure and an identical setup.

(Left) – Color Image  (Right) – Monochrome Image

So how do they compare when we blow them up to the pixel level, taking the monochrome output from the color camera and comparing it to the monochrome camera? The transition is not as close to a step function as you would want it to be. Comparing the color image (left), if you zoom in you can see that the middle of the E is wider.
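The checkerboard and ramp behavior described above can be sketched in code. Below is a minimal bilinear Bayer demosaic in Python/NumPy, assuming an RGGB tiling; it is an illustrative sketch, not the proprietary interpolation any particular camera vendor actually uses. Feeding it a one-pixel checkerboard shows a raw-black site picking up a full-white green estimate from its 4 white neighbors, while a ramp interpolates almost losslessly.

```python
import numpy as np

def demosaic_bilinear(raw):
    """Naive bilinear demosaic of an RGGB Bayer mosaic (a sketch only).

    For each pixel and each color channel, average the sensor samples
    of that color found in the surrounding 3x3 neighborhood.
    """
    h, w = raw.shape

    def color_at(y, x):
        # Assumed RGGB tiling:  R G
        #                       G B
        if y % 2 == 0 and x % 2 == 0:
            return 0  # red site
        if y % 2 == 1 and x % 2 == 1:
            return 2  # blue site
        return 1      # green site

    rgb = np.zeros((h, w, 3))
    for y in range(h):
        for x in range(w):
            sums, counts = [0.0] * 3, [0] * 3
            for ny in range(max(0, y - 1), min(h, y + 2)):
                for nx in range(max(0, x - 1), min(w, x + 2)):
                    c = color_at(ny, nx)
                    sums[c] += raw[ny, nx]
                    counts[c] += 1
            for c in range(3):
                if counts[c]:
                    rgb[y, x, c] = sums[c] / counts[c]
    return rgb

# The article's worst case: a one-pixel checkerboard of 0s and 255s,
# aligned with the pixel grid.
check = (np.indices((8, 8)).sum(axis=0) % 2) * 255.0
smeared = demosaic_bilinear(check)
# At (2, 2) the raw sample is black (0), yet the interpolated green
# comes entirely from the 4 white neighbors: smeared[2, 2, 1] == 255.0

# The benign case: a ramp. The neighbor average lands on the true value.
ramp = np.tile(np.arange(8.0), (8, 1))
# demosaic_bilinear(ramp)[2, 2, 1] == 2.0, identical to the raw ramp.
```

The contrast between the two test images is the whole argument: interpolation quality is image-dependent, which is why fine, pixel-aligned detail suffers most in mono-from-color output.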
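The HSI-based monochrome extraction mentioned above is easy to make concrete. In the classic HSI model the intensity component is simply the mean of the three channels; a common alternative (used by much software, though not necessarily by any specific camera discussed here) is the Rec. 601 luma weighting, which gives green the largest share, matching the eye's sensitivity.

```python
import numpy as np

def rgb_to_intensity(rgb):
    """Grey value as the I component of HSI: the plain mean of R, G, B."""
    return rgb.mean(axis=-1)

def rgb_to_luma(rgb):
    """Alternative grey conversion using Rec. 601 luma weights,
    which emphasize green the way the eye does."""
    return rgb @ np.array([0.299, 0.587, 0.114])

px = np.array([100.0, 200.0, 50.0])
# HSI intensity: (100 + 200 + 50) / 3 ≈ 116.67
# Rec. 601 luma: 0.299*100 + 0.587*200 + 0.114*50 = 153.0
```

Which conversion a given camera or library applies affects the grey values you measure, so it is worth checking before comparing mono-from-color output against a true monochrome sensor.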