Windows gave the PC the ability to manipulate graphics, and with it the ability to practice vision. The rest is recent history and common knowledge.
The term 'vision' means everything and, per se, nothing: it means seeing a phenomenon or 'something' external to the device, capturing an image of what is happening outside and inside the device, and processing the captured image to provide a result.
A vision system emulates, so to speak, what a human already does with a camera, but with greater speed, reliability and repeatability, guaranteeing a deterministic, dependable response in automation terms.
From here a whole world of vision opens up, one that deserves to be distinguished into its various sub-sectors.
Doing vision can mean taking an image and processing it for various effects, including 3D rendering; it can mean taking a series of images and building a film, or editing parts of movies, a branch known as 'video editing'; or it can mean taking an image and analyzing it to produce a mathematical result or a good/bad condition.
The term 'vision' therefore covers many kinds of implementations.
The branch of vision devoted to capturing an image, analyzing it appropriately and producing a score or a pass/fail result is now known as 'machine vision'.
A machine vision system is therefore an electronic system able to acquire an image from the field, bring it into memory and, through a CPU and a software application, analyze the image information with an algorithm so that the system itself can take a decision accordingly. In a nutshell, as an example, we can imagine a system with a camera and a processing unit capable of inspecting bottle caps and discarding the defective ones.
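The bottle-cap example can be sketched in a few lines. This is a minimal illustration, not a real inspection routine: the image is an 8-bit grayscale frame already acquired into memory as a list of rows, and the threshold and defect-area limit are hypothetical values.

```python
# Minimal sketch of a machine-vision pass/fail decision.
# Threshold and defect-area limit are invented for illustration.

DARK_THRESHOLD = 80      # pixels darker than this count as a defect
MAX_DEFECT_PIXELS = 5    # reject the part above this defect area

def inspect(image):
    """Return True (pass) or False (fail) for one acquired frame."""
    defect_pixels = sum(
        1 for row in image for value in row if value < DARK_THRESHOLD
    )
    return defect_pixels <= MAX_DEFECT_PIXELS

# A good cap: uniformly bright surface.
good_cap = [[200] * 8 for _ in range(8)]
# A defective cap: a dark scratch across one row.
bad_cap = [row[:] for row in good_cap]
bad_cap[3][1:7] = [30] * 6

print(inspect(good_cap))  # True  -> accept
print(inspect(bad_cap))   # False -> discard
```

A real system would of course segment the cap from the background and classify defect shapes, but the acquire, analyze, decide structure is the same.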
A machine vision system can then take on a variety of customizations depending on the job and the area of deployment, but the basic structure that typically characterizes a vision system remains constant:
- processing unit (CPU + memory, FPGA, DSP, ...)
- image acquisition electronics (frame grabber)
- camera
- optics
- illuminator
- mechanical structure
The most common use of a vision system is to check the completeness and quality of a product: that all parts of an assembly are present and positioned correctly, that a surface is free of scratches or defects, that the filling level of a liquid is at the desired height, etc.
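The fill-level check mentioned above can be reduced to a one-dimensional problem: on a backlit bottle, scan a pixel column from the top and find the first dark row, which marks the liquid surface. The threshold, target row and tolerance below are invented values, a sketch rather than a production check.

```python
# Hypothetical fill-level check on one pixel column of a backlit image.
# Values and tolerances are invented for illustration.

LIQUID_THRESHOLD = 100   # backlit liquid appears dark
TARGET_ROW = 4           # expected row of the liquid surface
TOLERANCE = 1            # allowed deviation in rows

def fill_level_row(column):
    """Index of the first row darker than the threshold (liquid surface)."""
    for row, value in enumerate(column):
        if value < LIQUID_THRESHOLD:
            return row
    return len(column)   # no liquid found in this column

def fill_level_ok(column):
    return abs(fill_level_row(column) - TARGET_ROW) <= TOLERANCE

# Air (bright) above, liquid (dark) from row 4 downward.
profile = [220, 215, 218, 210, 40, 35, 38, 36]
print(fill_level_row(profile))   # 4
print(fill_level_ok(profile))    # True
```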
The application fields for an optical measurement system are extremely varied, both in the laboratory (measuring specimens under tensile or compressive stress, sample measurement of bottles and plastic containers) and in production (measuring spring lengths, gasket diameters, or the distance between holes before an assembly operation). The current alternatives to a vision system are laser profilometer sensors, mechanical probes, and manual measurement. Compared with these methods, a vision system has the advantage of being rapidly reconfigurable, usable both in the laboratory and in production, scalable across a variety of hardware solutions with corresponding accuracy and speed of response, repeatable and objective in its measurements, and able to display the measured object and its dimensions so as to give immediate feedback to the operator.
You cannot separate the concepts of measurement and precision. For an electrical quantity, measurement accuracy depends on factors such as the linearity of the response, the signal-to-noise ratio, the number of bits of the digitizer, and so on. For measurement with a vision system, the factors that influence the stability and accuracy of the measurement are:
- the quality of the image acquisition board (frame grabber);
- the optics;
- the quality and resolution of the camera and the field of view (FOV);
- the lighting;
- the measurement software.
Electronic Imaging (Frame Grabber)
Electronic image acquisition means taking the signal from the camera and bringing it into memory as an image, which can then be processed, more or less quickly, to provide the digital information being sought.
The device can be housed in a PC, in which case it is typically an electronic board on a PCI or PXI bus, or today PCI/PXI Express, able to acquire signals from analog cameras (still on the market but somewhat dated: CCIR/RS170 or PAL/NTSC), or a board able to acquire signals from digital cameras communicating over fast next-generation buses such as FireWire, Camera Link or GigE.
Often these image acquisition devices are detached and independent from a PC: they are completely self-contained stations able to acquire and process the image and deliver a final result. A good example is the Compact Vision System (CVS) from National Instruments, based on a Motorola architecture with Phar Lap as the operating system, capable of acquiring images from FireWire cameras and automatically producing results based on the image processing routines with which it was programmed.
Optics
The role of the optics is to map the real world onto the camera sensor, called a CCD, composed of small elements known as "pixels" which together capture the scene in front of it. No optics is free from distortion problems: sometimes the working conditions are such that these parameters can be ignored, and sometimes it is necessary to use optics designed to minimize geometric distortion (telecentric optics). In many cases the captured image must be "corrected" in software.
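What "correcting the image in software" means can be sketched with the simplest common model, first-order radial distortion: an undistorted point is recovered from a distorted one as p_u = p_d * (1 + k1 * r^2), with r measured from the optical center. The coefficient k1 below is a hypothetical calibration value, not something from the article.

```python
# Minimal sketch of software distortion correction with a first-order
# radial model. The coefficient k1 is an assumed calibration result.

K1 = -0.05               # radial distortion coefficient (hypothetical)
CENTER = (0.0, 0.0)      # optical center in normalized coordinates

def undistort(point, k1=K1, center=CENTER):
    """Correct one (x, y) point for first-order radial distortion."""
    x = point[0] - center[0]
    y = point[1] - center[1]
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return (center[0] + x * scale, center[1] + y * scale)

# A point far from the center is corrected more than one near it.
print(undistort((1.0, 0.0)))   # (0.95, 0.0)
print(undistort((0.1, 0.0)))   # roughly (0.09995, 0.0)
```

Real toolkits fit k1 (and higher-order terms) from an image of a calibration grid; the principle of remapping each pixel by a model of the lens is the same.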
Camera
The camera is the actual image acquisition device. It is characterized by a CCD or CMOS sensor able to record the light from the framed scene photoelectrically on the array of sensing elements (pixels) and transform it into an electrical signal in an electronic digitization process. Sensors come in different definitions in terms of number of pixels, and support various image transmission methods. Initially the signal was analog, following the CCIR/RS170 or PAL/NTSC standards for historical reasons inherited from television; today it has evolved to fast digital standards such as FireWire, Camera Link or GigE.
Camera sensor resolution
We often hear this question: "What is the precision of an optical measurement system?" It is a bit like asking what the precision of a 14-bit instrument is. The correct answer is: "It depends on the operating range: over 10 V you can resolve 10/16384 = 610 microvolts." By analogy, in a vision system accuracy depends on the field of view (FOV) and on the geometric resolution of the sensor. For example, a 1-Mpixel sensor (1000x1000) framing a 10 cm field of view defines a mapping of 10 pixels per mm, so 1 pixel = 0.1 mm. While on one side there is the Nyquist criterion, by which the smallest noticeable detail is twice the sampling period, on the other we find comfort in sub-pixel mathematics, which locates a feature with an accuracy greater than that obtained optically. This is achieved by relying on the gray-level value of a pixel and its neighbors. From the algorithmic point of view, much depends on the toolkit used; for example, the IMAQ Vision software from National Instruments is pushed to 1/12 of a pixel. To achieve that level of precision the lighting must be perfectly stable, so in practical situations it is difficult to go beyond 1/4 of a pixel (in our example, 25 µm).
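The pixel/mm mapping and the sub-pixel idea from the example above can be sketched concretely. Locating an edge at a threshold by linearly interpolating the gray levels of the two neighboring pixels is only the simplest sub-pixel technique, not the specific algorithm of any commercial toolkit; the intensity profile is invented data.

```python
# Sketch of the pixel/mm mapping and a sub-pixel edge estimate by
# linear interpolation of gray levels, using the article's example:
# a 1000x1000 sensor over a 100 mm field of view.

SENSOR_PIXELS = 1000
FOV_MM = 100.0
MM_PER_PIXEL = FOV_MM / SENSOR_PIXELS   # 0.1 mm per pixel

def subpixel_edge(profile, threshold=128):
    """Position (in pixels) where the profile crosses the threshold,
    linearly interpolated between the two neighboring pixels."""
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if (a - threshold) * (b - threshold) <= 0 and a != b:
            return i + (threshold - a) / (b - a)
    return None

# Bright-to-dark transition: the edge lies between pixels 3 and 4.
profile = [250, 248, 246, 200, 40, 20, 18]
edge_px = subpixel_edge(profile)
print(edge_px)                   # 3.45 pixels
print(edge_px * MM_PER_PIXEL)    # roughly 0.345 mm from the profile origin
```

The fractional part (0.45 of a pixel, here 45 µm) is exactly the information that whole-pixel localization would throw away.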
Lighting
Lighting plays an essential role in the measurement system: measurement reproducibility is invariably influenced by the repeatability with which the initial image can be captured.
Trivially, think of entering a dark room: it is obvious that you will not see anything, but if you turn on the light an incredible amount of information about that room is immediately perceived. The solution to a vision problem starts mainly from the right lighting, which can enhance or bring out the information you are looking for; with lighting appropriate to the specific case you obtain an image containing the information to be analyzed, without having to manipulate the image, which greatly increases cycle time and yields approximate, fragile solutions. It is important to remember that software can manipulate an image and help you extract the information you are looking for, but it cannot miraculously improve an image compromised from the start by an incorrect arrangement of the environment surrounding the scene. It is important to choose not only the type of light best suited to the environment, the application and above all the material involved (LED, fluorescent, IR, UV, colored or white, etc.), but also the mechanical structure that carries the light: ring illuminator, spot, dome illuminator, backlight, etc., which must be chosen carefully depending on the application and the information to be obtained. For example, if dimensional measurements must be made on the perimeter of a part, the most suitable illuminator is one that creates strong contrast and brings out the silhouette of the object's boundary so that the measurements can be performed easily; in this case it is a white backlight, able to illuminate the part under test from behind.
Measurement software
Once the image is acquired, a series of operations must be performed to transform it into a series of numbers, which are precisely the measurements taken. The vision software from National Instruments, for example, with over 400 imaging functions, makes it possible to locate the part in the image, correct the optical distortion and the mechanical mounting errors of the camera, locate edges and measurement points, perform sub-pixel interpolation, and more.
Locating the part is required whenever the positioning of the workpiece under the camera is less precise than the maximum permissible error.
Calibration has the dual purpose of "translating" the dimensions from pixels into real-world units (µm or mm) and of compensating for the optical distortion and the mounting of the camera. There are also functions already prepared for specific purposes that work like ready-made macros, such as OCR character recognition or DataMatrix and barcode reading, and more.
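The "translating pixels into real-world units" part of calibration can be sketched in its simplest form: image two reference marks whose real distance is known, derive a scale factor, and apply it to every subsequent measurement. The reference values below are invented for illustration; real calibration also compensates distortion and camera mounting, which this sketch ignores.

```python
# Hypothetical single-scale calibration: two reference marks with a
# known real-world separation give a pixel-to-mm conversion factor.

def make_calibration(pixel_distance, real_distance_mm):
    """Return a function converting pixel measures to millimetres."""
    scale = real_distance_mm / pixel_distance
    def to_mm(pixels):
        return pixels * scale
    return to_mm

# Two calibration marks 50 mm apart appear 500 pixels apart in the image.
to_mm = make_calibration(pixel_distance=500.0, real_distance_mm=50.0)

print(to_mm(137.0))   # roughly 13.7 mm between two measured edges
```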