Monday, December 11, 2017

IVSimaging Blog


Keep up to date on new products and product updates.


Augmented vision system aids the sight impaired
 

While there is a common misconception that a blind person has no sight whatsoever, a person can be considered blind if they must use alternative methods to engage in activities that persons with normal vision perform using their eyes. Indeed, while approximately 30,000 people in the UK have no sight whatsoever, others suffering from diseases such as age-related macular degeneration (AMD), glaucoma and retinopathy may also be considered legally blind.

Each of these diseases affects a person's ability to perceive the world around them in a different way. While AMD gradually destroys the macula, the part of the eye that provides the sharp, central vision needed for seeing objects clearly, people with glaucoma slowly lose their peripheral vision. Those with damage to the retina of the eye (retinopathy) lose their sight across their entire field of view (Figure 1).

Figure 1: Age-related macular degeneration gradually destroys the part of the eye that provides sharp, central vision, while people with glaucoma slowly lose their peripheral vision. Those with damage to the retina (retinopathy) lose their sight across their field of view.

In the past, people suffering from these conditions relied on guide dogs and white canes to assist them. Today, however, researchers are aiming to develop novel methods such as retinal implants and augmented vision systems to increase the visual acuity of blind people.

Retinal implants

In diseases such as retinitis pigmentosa (RP), a large part of the retina remains functional even after a person loses their sight. Although the rods and cones that convert light into nerve signals are destroyed by this disease, most of the retinal nerve tissue remains intact. Because of this, sub-retinal implants can be used to convert light into electrical signals to stimulate the retinal nerve tissue.

This is the concept behind a number of retinal implants, such as the one developed by Retina Implant AG (Reutlingen, Germany; http://retina-implant.de). The company's Alpha IMS is a sub-retinal implant consisting of an IC approximately 3 × 3 mm in size and 70 μm thick, with 1,500 individual pixels. Each of these pixels contains a light-sensitive photodiode, a logarithmic differential amplifier, and a 50 × 50-μm iridium electrode through which electrical stimuli are delivered to the retina (Figure 2).

Figure 2: Alpha IMS is a sub-retinal implant consisting of an IC approximately 3 × 3 mm in size and 70 μm thick, with 1,500 individual pixels.

The IC is positioned on a thin, flexible circuit board of polyimide that is connected to a thin, coiled cable that passes through the orbital cavity to the bone of the temple and from there to a point behind the ear, where it is connected to a power supply. Electrical energy is received inductively from the outside through a second coil located on the skin.

Because the electrical excitation invariably involves a number of cells, patients with these implants cannot visualize objects sharply, but are nevertheless able to locate light sources and localize physical objects.

Assisted vision

While such retinal prosthetics somewhat alleviate conditions such as retinitis pigmentosa, Dr. Stephen Hicks at the Nuffield Department of Clinical Neurosciences at the University of Oxford (Oxford, England; http://bit.ly/1624Mkc) developed a system to investigate ways of improving the image presented by such low-resolution implanted devices (http://bit.ly/151VEMv).

The system, known as a Retinal Prosthetic Simulator, first captures an image of a scene and then, by preprocessing the image, extracts salient features that are down-sampled and presented visually to the user through a pair of eyeglasses. In the initial prototype, these images were acquired at a frame rate of 60 fps using a 752 × 480-pixel Firefly MV FireWire camera from Point Grey (Richmond, BC, Canada; www.ptgrey.com). Attached to a Z800 3DVisor head-mounted display from eMagin (Bellevue, WA; www.emagin.com), the camera acquires the scene in front of the subject, and the images are transferred to a PC over the FireWire interface.

At the same time, horizontal and vertical eye positions were acquired at 250 Hz using a JAZZ-novo eye tracker from Ober Consulting (Poznan, Poland; www.ober-consulting.com) worn under the head-mounted display. Data from the eye tracker was used in conjunction with the visual data to determine the image shown on the head-mounted display. Before the data could be displayed, however, the captured image was first converted to grayscale and down-sampled to a 30 × 30 image using LabVIEW Vision from National Instruments (Austin, TX, USA; www.ni.com). In this way, features in the image are rendered more distinctly and thus made more apparent to the user.
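The original pipeline performed this step in LabVIEW Vision; the snippet below is a minimal Python/OpenCV sketch of the same grayscale conversion and down-sampling step, assuming frames are supplied by a separate capture loop (the function name and frame source are illustrative, not from the original system).

```python
import cv2

def downsample_for_prosthesis(frame, size=(30, 30)):
    """Convert a captured BGR frame to grayscale and down-sample it to
    the low resolution presented on the prosthetic simulator display."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # INTER_AREA averages the pixels within each block, preserving coarse
    # structure better than nearest-neighbor interpolation when shrinking.
    return cv2.resize(gray, size, interpolation=cv2.INTER_AREA)
```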

Despite this, Dr. Hicks and his colleagues realized that the system was rather cumbersome and difficult to use. Furthermore, the system was incapable of providing any depth perception, making it difficult for the user to judge how far away any specific object or obstacle might be. Because of this, it was decided that a system that provided the ability to locate nearby objects and allow obstacle avoidance at walking speed would be more useful.

Depth perception

To develop this system, an infra-red depth camera from Primesense (Tel Aviv, Israel; www.primesense.com) was mounted on the bridge of a head-mounted display, and data from the camera was transferred over a USB interface to a portable computer. By projecting an array of infrared points onto nearby surfaces and analyzing the returned displacement data, a depth map can then be created that indicates the position of nearby objects.

Using LabVIEW Vision from NI, this depth map was then transformed into a viewable image by converting the distance information into brightness, so that closer objects appear brighter. This information was then down-sampled and displayed on an array of 24 × 8 color RGB LEDs (Figure 3). To diffuse these individual points of light, a semi-transparent film was inserted in front of the LEDs.

Figure 3: The depth map is transformed into a viewable image by converting the distance information into brightness so that closer objects appear brighter.
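A minimal numpy sketch of this depth-to-brightness mapping, assuming a millimeter-scaled depth image and the 24 × 8 LED resolution described above (the 4 m maximum range and the function name are illustrative assumptions):

```python
import numpy as np

def depth_to_led_frame(depth_mm, max_range_mm=4000.0, led_shape=(8, 24)):
    """Map a depth image to LED brightness: closer objects appear brighter.
    depth_mm: 2D array of distances in millimeters (0 = no return)."""
    valid = depth_mm > 0
    # Invert distance so near surfaces map to high brightness.
    brightness = np.zeros_like(depth_mm, dtype=np.float32)
    brightness[valid] = 1.0 - np.clip(depth_mm[valid] / max_range_mm, 0.0, 1.0)
    # Down-sample by block averaging to the 24 x 8 LED array resolution.
    h, w = led_shape
    H, W = brightness.shape
    blocks = brightness[:H - H % h, :W - W % w].reshape(h, H // h, w, W // w)
    led = blocks.mean(axis=(1, 3))
    return (led * 255).astype(np.uint8)
```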

In evaluations of this design, sight-impaired individuals were able to quickly and accurately detect people at distances of up to 4 m. After a short period of using the system, almost all could recognize nearby objects such as walls, chairs and their own limbs.

Residual information

While the system was effective, Dr. Hicks and his colleagues recognized a limitation: an opaque display prevents the wearer from using any of their remaining sight to see the outside world. In age-related macular degeneration, for example, a person may still perceive information from their peripheral visual field. Combining this residual vision with stereo depth information would allow a visually impaired person to more accurately determine information about their surroundings.

Because of this, it was decided to replace the LED display used in the previous design with transparent organic light-emitting diode (OLED) displays from 4D Systems (New South Wales, Australia; www.4dsystems.com.au). Once again, an infra-red depth camera from Primesense was used to capture depth data, and distance information was transformed into brightness so that closer objects appear brighter. By displaying the generated image on two transparent 320 × 240-pixel displays mounted in a pair of glasses, the user is presented with both depth information and a visual image (Figure 4).

Figure 4: Dr. Stephen Hicks demonstrates his latest visual prosthetic eyeglasses, which use transparent displays to present the user with depth information and a visual image.

Augmenting the capability of the display is just one area where such systems will be improved. Indeed, one of the leading desires of the visually impaired is the wish to read. With this in mind, Dr. Hicks and his colleagues have also demonstrated systems using Point Grey cameras and text-to-speech synthesis software from Ivona Software (Gdynia, Poland; www.ivona.com) that can translate on-line text to speech.
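The published work used Point Grey cameras with Ivona's synthesis engine; as a rough illustration of the general capture-OCR-speech pipeline only, the sketch below substitutes the open-source pytesseract and pyttsx3 packages for the original components:

```python
import cv2
import pytesseract
import pyttsx3

def read_text_aloud(image_path):
    """Illustrative OCR-to-speech pipeline: extract text from a captured
    image and speak it. pytesseract/pyttsx3 stand in for the camera SDK
    and Ivona synthesizer used in the original system."""
    image = cv2.imread(image_path)
    text = pytesseract.image_to_string(image)
    if text.strip():
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()
```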

At present, such visual aids for the sight impaired are still bulky, requiring host computer processing and 3D camera peripherals. As such systems evolve, however, it may soon be possible to integrate these peripherals into a single pair of glasses that can be produced at low cost. With such advances, the hundreds of thousands of people who now suffer from diseases such as age-related macular degeneration (AMD), glaucoma and retinopathy will be given expanded visual capabilities.

 

Courtesy of Vision Systems Design

 

How to use dynamic range to compare cameras
 
 

The capability of a machine vision camera to capture the details of a scene is defined by several parameters, with dynamic range at the top of the list.  High-contrast images require a high dynamic range.  One problem is that there are different ways to calculate dynamic range, which makes it difficult to compare cameras and sensors on paper.  Also, dynamic range and the signal-to-noise ratio (SNR) are sometimes treated as interchangeable for CCD and CMOS image sensors and cameras, adding further confusion.

Dynamic range is the ratio between the maximum output signal level and the noise floor at minimum signal amplification (the noise floor is the RMS, or root-mean-square, noise level in a black image).  The noise floor of the camera contains sensor readout noise, camera processing noise and dark current shot noise.  Dynamic range represents the camera's ability to reproduce the brightest and darkest portions of the image, and how many gradations in between; this is technically intra-scene dynamic range.  Within one image there may be a portion that is completely black and a portion that is completely saturated.

Dynamic range (dB) = 20 × log10(maximum signal level / noise floor)

It is expressed in dB (decibels).  The largest possible signal is directly proportional to the full well capacity of the pixel, i.e., the maximum number of electrons a pixel can hold.  Therefore, dynamic range is the ratio of the full well capacity to the noise floor.

This should be clarified as the "useful full well capacity."  Camera designers cannot always use the full well capacity of the image sensor if they want to maintain linearity; moreover, not all pixels saturate at the same level.  If the full well capacity of the sensor is used instead of the usable full well value, the result is an artificially high dynamic range specification, something to be aware of.
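In decibel form, the calculation looks like the short sketch below (the 20,000 e- full well and 8 e- noise floor are hypothetical values, not from any particular sensor):

```python
import math

def dynamic_range_db(useful_full_well_e, noise_floor_e_rms):
    """Dynamic range in dB: 20 * log10(useful full well / noise floor)."""
    return 20.0 * math.log10(useful_full_well_e / noise_floor_e_rms)

# Hypothetical sensor: 20,000 e- useful full well, 8 e- RMS noise floor
print(dynamic_range_db(20000, 8))  # ~68 dB
```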

Dynamic range is not equal to digitization level: a camera with a 12-bit A/D converter does not necessarily have 12 bits of dynamic range, because the bit depth does not account for noise. The causality is reversed; if a camera has 12 bits of dynamic range, the A/D converters need to be at least 12 bits as well, and preferably higher. (A 12-bit converter spans 4,096 discrete levels, or about 72 dB, but the camera's actual dynamic range may be far lower if the noise floor exceeds one count.)

Dynamic range is also not the same as the signal-to-noise ratio, even though both are expressed in dB.  The signal-to-noise ratio is simply the ratio of the signal level to the noise level.  Since the absolute noise level depends on the average charge (shot noise) and on the PRNU (photo response non-uniformity), which scales with the signal level, the SNR also depends on the signal level.  SNR is therefore the ratio of the average signal level to the RMS noise level.

SNR (dB) = 20 × log10(average signal level / RMS noise level)
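A small sketch of this signal-dependent behavior, combining read noise, shot noise and PRNU into a total RMS noise figure (the read noise and PRNU values are hypothetical, chosen only for illustration):

```python
import math

def snr_db(signal_e, read_noise_e=8.0, prnu=0.01):
    """SNR in dB at a given mean signal (in electrons). Total RMS noise
    combines read noise, shot noise (sqrt of the signal in electrons)
    and PRNU, which scales linearly with the signal level."""
    noise = math.sqrt(read_noise_e**2 + signal_e + (prnu * signal_e)**2)
    return 20.0 * math.log10(signal_e / noise)

# Near full well (20,000 e-), shot noise and PRNU dominate: ~38 dB here
print(snr_db(20000))
```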

The SNR for higher signal levels is dominated by shot noise.  The maximum SNR is obtained at full-scale output. Note that this SNR cannot be measured in practice since the noise becomes clipped near full-scale output.

Dynamic range can also be measured and calculated using the photon transfer curve, which plots temporal noise against mean signal level over a range of exposures.
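As a hint of what such a measurement involves, here is a minimal sketch of one point on a photon transfer curve, computed from a pair of flat-field frames at the same exposure (the frame pair is assumed to come from a separate capture step):

```python
import numpy as np

def photon_transfer_point(frame_a, frame_b):
    """One point on the photon transfer curve from a pair of flat-field
    frames at the same exposure: mean signal vs. temporal noise.
    Differencing two frames cancels fixed-pattern noise; dividing the
    difference's std by sqrt(2) recovers single-frame temporal noise."""
    mean_signal = 0.5 * (frame_a.mean() + frame_b.mean())
    diff = frame_a.astype(float) - frame_b.astype(float)
    temporal_noise = diff.std() / np.sqrt(2)
    return mean_signal, temporal_noise
```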

Dynamic range provides a much more useful indication than SNR of a camera's ability to reproduce the desired image details.  When comparing dynamic range values from different cameras, be sure to verify they were measured under the same conditions.

 

Courtesy of Adimec Blog

 

Using 3D Machine Vision to prevent defects in seals and sealing devices
 

By Robert Bock, Chief Scientist, Lumenec Corporation

 


3D MACHINE VISION SEAL INSPECTION SYSTEMS

Seal manufacturers are seeking to eliminate manual inspection and improve defect detection by investing in 3D Machine Vision Seal Inspection Systems. These systems can be used for improved process control and for satisfying customer inspection requirements:

- Find defective products before they are shipped

- Reduce returns and liability

- Boost production speed

- Reduce labor cost and scrap

- Improve product quality and upstream processes

- Preserve customer goodwill

- Eliminate manual inspection

WHAT ARE THE REQUIREMENTS OF SEAL INSPECTION?

1. Seal Surfaces
All surfaces on the air side, fluid side and auxiliary lip must be defect-free. The contact point on the primary lip is a critical area of concern for seal manufacturers. In addition, the OD of the seal must be inspected for defects. In many cases, the fluid-side lip must be pulled down during inspection so that the spring groove area can also be inspected. It is also often critical to verify that the spring is fully seated in the groove and not damaged.

2. Loading/Unloading and Changeover
Both manual-fed machines and fully automatic systems are needed. In many cases, seal manufacturers are investing in manual-fed machines that can be upgraded to a fully automated system in the future. In addition, a machine vision seal inspection system must be able to accommodate seamless changeover between products. Many manufacturers have 20+ products on a single line.

3. Measurement Tolerances
Seal inspection requires very tight tolerances. Defects greater than 20 microns in height or depth and 0.2 mm in width or length must be detectable.

4. Cycle Time
In most cases, loading, inspection, and unloading must be performed in less than 5 seconds, which leaves approximately 1.5 seconds for the machine vision inspection of all critical surfaces.

WHAT ARE THE LIMITATIONS OF CURRENT SEAL INSPECTION TECHNIQUES?

Traditional leak-down tests have their limitations. Pumping features on the seal that help move oil away from the contact area can interfere with leak-down tests. In addition, traditional 2D machine vision cannot be used to inspect all the critical surfaces: typical defects on seals are extremely low-contrast, and near-perfect lighting conditions are required to find all the defects on all surfaces. To justify the capital expenditure, manual inspection must be eliminated entirely; as a result, the capex justification cannot be made for a 2D seal inspection system.

BENEFITS OF 3D MACHINE VISION FOR SEAL INSPECTION

3D Machine Vision is ideal for the automated inspection of seals and sealing devices. Since 3D Machine Vision does not depend on complex lighting, it can be used to inspect all critical surfaces for low-contrast defects, eliminating manual inspection. In addition, a single 3D Machine Vision system can accommodate the many different seals produced on a given line. Furthermore, since 3D Machine Vision operates at extremely high speeds and resolutions, 3D Machine Vision Seal Inspection systems can meet the tight tolerance and cycle time requirements of seal manufacturers.
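As a greatly simplified illustration of how height data supports the tolerances described earlier, the sketch below flags regions of a measured height map that deviate from a reference surface by more than 20 μm and span at least 0.2 mm laterally (the height map, reference surface, pixel pitch and function names are all assumptions; production systems use calibrated 3D sensors and far more sophisticated analysis):

```python
import numpy as np
from scipy import ndimage

def find_seal_defects(height_um, reference_um, pixel_pitch_mm=0.05,
                      depth_tol_um=20.0, min_extent_mm=0.2):
    """Flag regions that deviate from the reference surface by more than
    the depth tolerance, rejecting blobs below the lateral tolerance.
    height_um, reference_um: 2D arrays of surface height in microns."""
    deviation = np.abs(height_um - reference_um)
    mask = deviation > depth_tol_um
    # Approximate the 0.2 mm width/length requirement by demanding a
    # minimum number of connected pixels at the given pixel pitch.
    min_pixels = int((min_extent_mm / pixel_pitch_mm) ** 2)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    defect_ids = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    return labels, defect_ids
```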

 

Courtesy of Vision Systems Design 



IVS Imaging is a distributor & manufacturer of machine vision cameras, lenses, cabling, monitors, filters, interface boards & more. IVS is your one-stop shop for all your vision needs. IVS Imaging is known across the USA for carrying imaging products from leading manufacturers, including Sony Cameras and Accessories, Basler Industrial Cameras, Hitachi Surveillance Cameras, Toshiba Network-based IP Cameras, and Sentech Advanced Digital and OEM cameras. Contact IVS Imaging for all your imaging products, parts, and accessories needs.
