Industrial image processing
At the heart of a camera is the camera chip. This is where the initial likeness of the scene is created. However, there is still a long way to go until the image can be shown on a monitor.
The camera chip contains photo cells – also known as pixels. Over a certain period of time the light that falls onto these cells is converted into electrical energy. The more light that falls on a single pixel, the higher the energy. The electrical energy (voltage) in a photo cell can therefore be read as exact information about the brightness of the scene at the position of that pixel.
In the camera itself the voltage of the individual photo cells is transformed from an analogue signal into a digital signal after a certain period of time (the exposure time). Simply put, a small computer in the camera measures the voltage on the individual photo cells and compares the analogue value measured to a table of digital values.
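The conversion step described above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical full-scale voltage of 1.0 V; real sensors and converters differ in range and method.

```python
# A minimal sketch of analogue-to-digital quantisation, assuming a
# hypothetical full-scale voltage of 1.0 V (real sensors differ).
def quantise(voltage, full_scale=1.0, bits=8):
    """Map an analogue voltage to a digital level (0 .. 2**bits - 1)."""
    levels = 2 ** bits                       # 256 levels for 8 bits
    level = int(voltage / full_scale * (levels - 1))
    return max(0, min(levels - 1, level))    # clamp to the valid range

print(quantise(0.0))   # no light   -> 0
print(quantise(1.0))   # full light -> 255
print(quantise(0.5))   # half light -> 127
```

The clamp at the end mirrors what a real converter does: any voltage above full scale simply saturates at the maximum value.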
Nowadays the so-called 8-bit quantisation is most common. A bit is the smallest logical unit in a digital system and can have the value of either 0 or 1. With 8 bits there are 2⁸ = 256 different combinations.
These combinations of 0s and 1s are transferred via a selected interface, e.g. USB or FireWire, to the computer that actually carries out the image processing. The image processing software interprets this combination of 1s and 0s as a "colour value".
The combination 0000 0000 stands for the colour value 0, i.e. black.
The combination 1111 1111 stands for the colour value 255, i.e. white.
The combination 0111 1111 stands for the colour value 127.
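The three bit patterns above can be checked directly by interpreting them as binary numbers:

```python
# The three bit patterns from the text, interpreted as binary numbers,
# give exactly the colour values listed.
patterns = ("0000 0000", "1111 1111", "0111 1111")
values = [int(p.replace(" ", ""), 2) for p in patterns]
print(values)   # [0, 255, 127] -> black, white, medium grey
```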
In this case the colours shown via the monitor depend on the camera and the image information that is transferred. When measuring mass-produced parts, black and white cameras are often used. Here the colour value 127 would be a medium grey.
Colour cameras work using colour channels, typically red, green and blue. 8 bits per colour channel are used to display the information. The colour value 127 would therefore be a medium red, a medium green or a medium blue.
To understand the real difference between the two common camera chip types, CMOS and CCD, you have to delve very deeply into the subject. Put simply, imagine that a CMOS chip is constructed like a RAM component (the internal memory of a computer) in which the photo cells are the storage locations. CMOS chips are usually equipped with a lot of electronics: every photo cell has its own amplifier and every photo cell can – theoretically – be controlled individually. Since every transfer of data costs time, CMOS chips are mainly used where data transfer has to be fast.
However, this type of construction has one major disadvantage: every electronic component requires space, and that space cannot be used by the light-sensitive cells. A CCD chip, by contrast, is constructed so that the largest possible area of the chip is covered with light-sensitive cells. This increases the image quality. In measuring technology, therefore, CCD chips are used almost exclusively despite the possibilities offered by CMOS chips.
The camera interface depends a lot on the application. The key question is always how much data has to be transferred in what period of time.
Here is an example for clarification purposes:
Camera chip resolution: 640 x 480 pixel
Camera type: 8-Bit, B/W
From these parameters it follows that 1 byte = 8 bits must be transferred per pixel.
640 x 480 = 307,200 bytes = 300 KB ≈ 0.3 MB
USB 2.0 has a data transfer rate of 60 MB/s. This means that
60 ÷ 0.3 images/second
can be transferred, i.e. 200 images/second. This value is, however, very theoretical. In practice you can usually rely on 100 to 150 images per second.
However this formula does give you a basis for orientation.
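The estimate above can be reproduced as a small calculation. The 60 MB/s figure is the nominal USB 2.0 payload rate used in the text; note that without the rounding to 0.3 MB the theoretical result is closer to 205 images/second.

```python
# Rough frame-rate estimate from resolution and interface bandwidth.
width, height = 640, 480        # resolution in pixels
bytes_per_pixel = 1             # 8-bit black-and-white camera
bandwidth_mb_s = 60             # nominal USB 2.0 payload rate in MB/s

frame_bytes = width * height * bytes_per_pixel   # 307,200 bytes
frame_mb = frame_bytes / (1024 * 1024)           # about 0.3 MB
fps = bandwidth_mb_s / frame_mb                  # theoretical maximum
print(f"{frame_bytes} bytes per frame, about {fps:.0f} images/second")
```

As the text notes, the practical rate is lower – typically 100 to 150 images per second – because protocol overhead and the rest of the system consume part of the bandwidth.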
The various lighting methods and techniques used in industrial image processing have developed over the years, from high-frequency fluorescent tubes and cold light sources through LED lighting to laser diodes.
Today the most common lighting method is the LED. By arranging the LEDs, scenes can be lit in various ways and certain features can be emphasised or hidden.
Nowadays, geometric measurements are almost exclusively lit by background light or "transmitted light". To achieve this, the object is placed between the camera and the light source; in doing so we differentiate between directional and diffused light. This is demonstrated in the following two diagrams:
With every "normal" lens, objects that are further away from the lens appear smaller. The human eye works on the same principle: a car coming towards us seems to get bigger the closer it comes. With a telecentric lens, this effect is suppressed within a certain working range and no perspective distortion occurs.
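The "further away means smaller" behaviour of a normal lens can be sketched with the simple pinhole model, where an object of height H at distance d projects to an image of height f·H/d. The focal length of 0.05 m here is a hypothetical value chosen only for illustration.

```python
# Pinhole-model sketch of why distant objects look smaller through a
# normal (non-telecentric) lens. The focal length is hypothetical.
def image_height(object_height, distance, focal_length=0.05):
    """Projected image height (metres) of an object at a given distance."""
    return focal_length * object_height / distance

near = image_height(1.0, 2.0)   # 1 m tall object at 2 m distance
far = image_height(1.0, 4.0)    # same object at twice the distance
print(near, far)                # the farther copy appears half as tall
```

A telecentric lens removes exactly this distance dependence: within its telecentric range the magnification is constant, which is why such lenses are used for geometric measurement.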
The software must be adjusted to the parameters of the task just like all other components in a measuring system. For a special machine manufacturer, the communication between cameras, software and hardware is an important aspect. For contract sorting, the inspection jobs have to be adjusted or created as quickly and as simply as possible.
Moreover, the image processing software must be able to carry out a programmed inspection as quickly as possible (today between 10 and 80 ms). For office staff the system provides an interface for a server connection and a direct connection to the ERP system. All these factors (and 1,000 more) must be taken into consideration when choosing an image processing system.
An image processing facility consists of at least two different hardware systems. There must be an industrial computer onto which the image processing software can be loaded. The parameters that must be taken into consideration depend very much on the software used.
The second important hardware component is the control unit. Here PLCs (Programmable Logic Controllers) from various manufacturers are used. Once again, it is important to find the ideal solution for the application in question.