Monday, December 11, 2017

IVSimaging Blog


Keep up to date on new products, as well as product updates.


Machine Vision System improves Ice Cream Production
 

SYSTEMS INTEGRATION: Vision system improves ice cream production

Running the PatMax vision software, the system identifies specific features of each bag and computes a pattern score. If an incorrect bag or quantity is loaded by the operator, an alarm is sounded and the conveyor is stopped.

Producing around 120,000 to 150,000 liters of frozen confectionary per day during summer, Tip Top (Auckland, New Zealand; www.tiptop.co.nz) is New Zealand's largest supplier. To produce these ice cream products, exact quantities of various powdered ingredients such as milk powder are blended with wet ingredients to ensure consistent flavor and texture.

Since large amounts of ice-cream products are produced, numerous 20kg bags of different ingredients are used to make a single batch. Because of this, the exact number and types of ingredients must be checked to ensure that the correct type and amount of ingredients are added to each hopper prior to the mixing process.

"In the past," says David Berry, Owner of ControlVision (Auckland, New Zealand; controlvision.co.nz), "the type and quantity of ingredients added was performed manually. To do so, an operator would identify and count the type of products as they were loaded onto a conveyor. Since this process is prone to human error, it can result in inconsistencies in the final batch. Should this occur, then the whole batch may need to be reworked, costing time and money."

Although bag handling, conveyors, powder handling and mixing systems had been previously installed by Powder Projects (Hamilton, New Zealand), the company realized that a vision system was required to eliminate any human error in the ingredient handling process. Since Tip Top had already contracted ControlVision to install a machine vision system to check lid placement on ice cream products, the company was asked to develop a system to verify whether the correct type and amount of ingredients were added to each hopper prior to the mixing process.

After each bag of ingredients is manually placed on a conveyor belt, the bags are inspected for product type and quantity by the vision system.

In the development of the vision system, ControlVision installed a 1392 x 1040 Scout scA1390-17gm GigE camera from Basler (Ahrensburg, Germany; www.baslerweb.com) above the dry ingredient conveyor. Fitted with a 9mm Fujinon lens from Fujifilm (Wayne, NJ; www.fujifilmusa.com), the camera captures images of the complete field of view of each bag and transfers them over the GigE interface to a panel-mounted industrial PC/touchscreen from Cybersys Integration Technology (City of Industry, CA). To illuminate the bags as they move through the inspection station, ControlVision mounted two banks of white fluorescent lights above the camera.

"Although each variety of confectionary may only use approximately ten different types of dry powder product, between 50 and 100 different 20kg bags of ingredients may be used to produce the various varieties of ice cream produced by Tip Top," says Berry. "Because of this, the system must be trained to recognize over 100 different bags. To accomplish this, the company employed the PatMax geometric pattern matching tool from Cognex (Natick, MA; www.cognex.com).

This pattern matching tool is embedded in ControlVision's VisionServer, a machine vision framework and application development environment. This allows functions such as PatMax to be added as a graphical function block diagram in a sequence of functions that capture, process, display and control machine vision processes (see "Vision framework supports multiple software packages," Vision Systems Design, February 2012, http://bit.ly/ztDEvU). VisionServer also provides the HMI for operators to train the system to learn new bag artwork as required.

Using the PatMax algorithm, salient geometric features of each particular bag of ingredients are analyzed and stored in the system's database. "Because the system employs the PatMax pattern matching tool," says Berry, "features within images of flat or wrinkled bags of ingredients can be easily detected." In operation, the system then compares these features against the stored database and returns a pattern matching score that ranges from 0 to 1. If an incorrect ingredient is added, this score will be lower than a preset threshold and an alarm will be sounded. This alarm will also occur should too many or too few bags of a required ingredient be loaded onto the conveyor.
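
How such a check might fit together is sketched below. This is not ControlVision's code: the match_score function, the 0.8 threshold and the recipe quantities are hypothetical stand-ins for the trained PatMax patterns and batch recipes described above.

RECIPE = {"milk_powder_20kg": 6, "cocoa_20kg": 2}    # bags required per batch (hypothetical)
SCORE_THRESHOLD = 0.8                                # minimum acceptable pattern score (0 to 1)

def verify_batch(bag_images, match_score):
    # Check every bag placed on the conveyor against the batch recipe.
    counts = {}
    for image in bag_images:
        bag_type, score = match_score(image)         # best-matching trained pattern and its score
        if score < SCORE_THRESHOLD:
            return False, "alarm: unrecognised or incorrect bag (score %.2f)" % score
        counts[bag_type] = counts.get(bag_type, 0) + 1
    if counts != RECIPE:
        return False, "alarm: wrong quantities, expected %s but counted %s" % (RECIPE, counts)
    return True, "batch verified"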

To monitor the complete production process, the embedded PC/flat panel is also interfaced to Tip Top's supervisory control and data acquisition (SCADA) system. In this way, the operator can monitor whether the correct type and amount of ingredients are being added for a particular batch and the status of the batch verification system.

According to Tip Top Project Manager Brett Dockery, since the new system has been installed, ingredient mixing accuracy has improved dramatically. "Because the camera captures exactly what has gone into the mix, it is much easier to correct any mistakes than in the past, when we had difficulty in determining which ingredient was missing or wrongly added," he says.

 

Courtesy of Vision Systems Design website.

 

AV Design Matters

DESIGN MATTERS in AV

AV is fundamentally human-centric, so it should not be a surprise that design and design-led engineering are central to the products being developed by the AV market.  Well, surprise – they aren't.  Part of the challenge for traditional AV companies is that they emerged from such a deep area of science and engineering that they haven't been able to transform themselves from engineering houses into human-centered product companies.  It's part of the reason all of my readers have heard of Samsung and Apple, but only some are aware of Barco and AMX, which are both very big companies.  To be fair, I'm having conversations with very large AV companies that are aware they need to change how they approach product development, and some of them will be successful.

As software companies enter the AV market, a focus on the user experience is being brought to bear on a problem space that is usually driven first by standards, video codecs and hardware constraints, and last by design. Getting this mix correct is difficult for any company, and I'll admit, we struggle with getting it right as well. That's why I found yesterday's article in Fast Company so interesting. The article was written by Hartmut Esslinger, the founder of Frog Design, Inc. and one of the leading designers in the world. He is recognized as the force that helped establish a culture of "design first" at Apple.

I’m very familiar with Frog since my brother was a senior UX designer there for many years, and I got to see some of Frog from the inside during the overwhelming and exciting dot-com era (think SXSW parties that today are just not possible). I’d like to think being aware of the design community as a separate-but-equal partner of technology gave us a leg-up on entering the collaboration arena (not to mention hours of consulting time from my brother).

Esslinger makes some very good points.  If you work in the AV space and are responsible for helping set direction for your company in any way, then read the article and ask yourself how your company envisions, designs, and builds products that people love.  The point is quite simple.  Design and user experience should drive products, not engineering. Esslinger further drives this point home in his new book, Design Forward.

As a technologist (not a designer), it took me several years to fully accept this approach, and encouraging others to embrace the methodology is an ongoing process.  Esslinger points out the danger of allowing design considerations to be placed at the mercy of engineering, and I think the rewards for adopting a design-first strategy are huge.  In a market where we are focused on building products to be looked at, listened to, and now interacted with by our customers, the benefits are probably even greater.

 

Courtesy of Mersive Technology Blog & Chris Jaynes

 

"The Basics" AV Design: Part 1 Transmission Line Impedance
 

AV Design "The Basics" Impedance Part I Transmission Line (Characteristic) Impedance 

BY: Sam Davisson, Director of Engineering at SJD Engineering Group

Courtesy of: sdjeg.net

 

Impedance has been the most requested subject since I started the "Basics" series, so I thought it was about time to address it. The problem is, no one actually mentioned what area of impedance they were confused about and wanted clarified. Perhaps some are wondering why, when you measure across the center conductor and braid of a 75-ohm coax cable, you don't read 75 ohms. Or how in the world you would ever know whether a connector is a 75-ohm connector or one of the 50-ohm variety? And does all that mumbo jumbo have anything to do with audio amplifiers and the speakers connected to them?

Every signal input, and every output, has an impedance; this "impedance" represents the relationship between voltage and current which a device is capable of accepting or delivering. Electricity is all about the flow of electrons in wire. "Voltage" is a measure of how hard the electrons are pressing to get through; it's like water pressure in a pipe. "Current," measured in amps, is a measure of the rate at which the electrons are flowing; it's like the gallons-per-minute flow in a pipe. Total power delivery, in an electrical circuit, is measured in watts, which are simply the volts multiplied by the amps. A given number of watts may represent a very high voltage with relatively low current (such as we see in high-tension power lines) or a low voltage with very high current (such as we see when a 12-volt car battery delivers hundreds of amps into a starter).
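
To put rough numbers on that relationship (the figures below are purely illustrative, not drawn from any specific system):

# Power (watts) = volts x amps: the same power can be delivered as
# high voltage with low current or as low voltage with high current.
distribution_line_w = 12_000 * 0.1   # 12 kV at 0.1 A  -> 1200 W
car_starter_w       = 12 * 100       # 12 V at 100 A   -> 1200 W
print(distribution_line_w, car_starter_w)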

An output circuit can’t supply just any combination of voltage and current we want. Instead, it’s designed to deliver a signal into a specific kind of load ("load," here, simply meaning the device, such as the TV input that the signal is being delivered to). The "impedance" of the load represents the opposition to current flow which the load presents.

The impedance of the load is expressed in ohms, and the relationship between the current and the voltage in the circuit is controlled by the impedance in the circuit. When a signal source sees a very low-impedance circuit, it produces a larger current than intended; when it sees a very high-impedance circuit, it produces a smaller current than intended. These mismatched impedances redistribute the power in the circuit so that less of it is delivered to the load than the circuit was designed for, because the nature of the circuit is that it can't simply readjust the voltage to deliver the same power regardless of the rate of current flow. What happens in an impedance mismatch between a source and load is that power isn't being transferred properly, because the source circuit wasn't designed to drive the kind of load it's connected to. In some electronic applications, this will burn out equipment. A radio transmitter must be able to deliver its power into an antenna load that presents the proper impedance or it will self-destruct, and an audio amplifier can possibly be destroyed by attaching it to speakers of the wrong impedance.
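
A worked Ohm's-law example makes the point (the voltage and impedances below are illustrative only, and the source's own output impedance is ignored for simplicity):

# I = V / Z and P = V * I: a source driving the wrong load impedance
# delivers a different current and power than it was designed for.
V = 20.0                              # source drives 20 V, designed for an 8-ohm load
for load_ohms in (8.0, 4.0, 16.0):
    current = V / load_ohms
    power = V * current
    print("%5.1f ohms: %.2f A, %.1f W" % (load_ohms, current, power))
# 8 ohms gives the intended 2.5 A / 50 W; 4 ohms doubles the current demand,
# while 16 ohms halves the power actually delivered.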

Hopefully that is a rare occurrence. So why do we really care about impedance mismatches? The reason is that when impedances are mismatched, the mismatch causes portions of the signal to reflect. This can happen at the source, at the connectors, at any point along the cable, or at the load. When a portion of the signal bounces backward down the line, it combines with and interferes with the portions of the signal that follow it. This is why, in the case of an impedance mismatch, your audio quality suffers. With digital video these reflections can cause a "sparkle" effect in your picture or a complete loss of picture.
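
The size of such a reflection can be estimated with the standard transmission-line reflection coefficient (a general formula, shown here as a sketch rather than anything specific to this article):

# Reflection coefficient at a boundary between a line of impedance z0
# and a load of impedance zl: gamma = (zl - z0) / (zl + z0).
def reflection_coefficient(z0, zl):
    return (zl - z0) / (zl + z0)

print(reflection_coefficient(75, 75))    #  0.0  -> matched, no reflection
print(reflection_coefficient(75, 50))    # -0.2  -> 20% of the wave reflects back
print(reflection_coefficient(75, 300))   #  0.6  -> the severe twin-lead mismatch discussed below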

So, when I say that the input impedance of an HD-SDI input jack is 75 ohms, that's what I mean. But what does it mean to say that the impedance of the cable between the source and display is 75 ohms?

Well, first, it doesn’t mean that the cable itself presents a 75 ohm load. If it did, the total load would now be 150 ohms, and you’d have an impedance mismatch. Furthermore, if the cable itself constituted a 75 ohm load, that load would be dependent on length. So a cable twice as long would be 150 ohms, a cable half as long would be 37.5 ohms, and so on. In case it’s not obvious by now, another thing that it doesn’t mean is that the resistance of the cable will be 75 ohms. Resistance, which also confusingly happens to be measured in ohms, has nothing to do with characteristic impedance, which can’t be measured by using a VOM.

When I say that the characteristic impedance of a cable is 75 ohms, or 50, 110, 300, or what-have-you, what I mean is that if we attach a load of the specified impedance to the other end of the cable, it will look like a load of that impedance regardless of the length of the cable. The object of a 75 ohm cable is simply to "carry" that 75 ohm impedance from point A to point B, so that as far as the devices are concerned, they're right next to one another. If we take a hundred feet of 300-ohm television twin-lead cable, solder it to RCA connectors, and stick that in between the display and an analog device, the load, as "seen" by the analog device, will not be 75 ohms. How bad the mismatch is, and what the consequences of it are, will depend on a variety of factors, but it's fair to say that this sort of mismatch needs to be avoided.

Transmission line impedance is critical in some applications, and not so critical in others. In analog (line level) audio, impedance has become a non-factor because designers of these circuits dispensed with the idea of matched impedances completely and use what is called voltage matching instead.

The idea here is to engineer the equipment to have the lowest possible output impedance and a relatively high input impedance. The difference between them must be at least a factor of ten, and is often much more. Modern equipment typically employs output impedances of around 150 ohms or below, with input impedances of at least 10 kohms. With the minuscule output impedance and relatively high input impedance, the full output voltage is developed across the input impedance.

Relatively high-impedance inputs such as these are called bridging inputs. They have the advantage that several devices can be connected in parallel without decreasing the impedance to any significant degree. The voltage developed across each input remains high and the source does not need to supply a high current. As an example, suppose a mixing console output is feeding two tape machines, each with an input impedance of 30 kohms. Connecting the two in parallel will only reduce the combined input impedance to 15 kohms, which is still substantially higher than the 150 ohm output impedance of the console, so the input voltage will be virtually unaffected. I calculate a loss of 0.04 dB. Even connecting a third device to the output, the impedance would only fall to 10 kohms and the level would fall by a further 0.05 dB, which would not be audible. Because bridging inputs make studio work much easier, the idea of voltage matching is now employed universally in line-level audio equipment, irrespective of the actual reference signal levels used.
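
Those figures can be checked with a simple voltage-divider calculation (a sketch using the impedances quoted above; it treats the output and input impedances as purely resistive):

import math

def level_db(source_ohms, load_ohms):
    # Voltage developed across the load relative to the source's open-circuit
    # voltage, in dB, for a simple resistive voltage divider.
    return 20 * math.log10(load_ohms / (source_ohms + load_ohms))

source = 150.0                               # console output impedance
one_machine    = level_db(source, 30_000)    # a single 30 kohm bridging input
two_machines   = level_db(source, 15_000)    # two such inputs in parallel
three_machines = level_db(source, 10_000)    # three in parallel
print(round(two_machines - one_machine, 3))      # about -0.043 dB for adding the second device
print(round(three_machines - two_machines, 3))   # about -0.043 dB more for the third
# Both steps are in the region of the 0.04-0.05 dB losses quoted above: inaudible.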

Back on topic now, the behavior of cables changes as signal frequencies increase. This is so because as frequency increases, the electrical "wavelength" of a signal becomes shorter and shorter. As the length of a cable becomes closer to a large fraction of the electrical wavelength of the signal it carries, the likelihood of significant reflections from impedance mismatch increases. The whole cable can resonate at the wavelength of the signal, or of a portion of the signal, and the impact on signal quality will be anything but good. Many signals are complex, occupying not a single frequency, but a whole range of frequencies. This is why we so often speak of the "bandwidth" of a signal, and so a mismatch will affect different parts of the signal differently.
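
To make "a large fraction of the electrical wavelength" concrete, the wavelength in a cable can be estimated from the signal frequency and the cable's velocity factor (the 0.82 figure below is a typical value for foamed-dielectric coax, assumed for illustration):

# Electrical wavelength in a cable: wavelength = (velocity_factor * c) / frequency.
c = 3.0e8                  # free-space speed of light, m/s
velocity_factor = 0.82     # assumed; varies with the cable's dielectric

for f_hz in (1e6, 100e6, 1.485e9):     # 1 MHz, 100 MHz, roughly the HD-SDI clock rate
    wavelength_m = velocity_factor * c / f_hz
    print("%8.1f MHz -> %6.2f m" % (f_hz / 1e6, wavelength_m))
# At 1 MHz the wavelength is about 246 m, so short cables barely notice a mismatch;
# at 1.485 GHz it is about 0.17 m, so even short runs must be properly matched.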

Because the effects of impedance mismatch are dependent upon frequency, the issue has particular relevance for digital signals. Where analog audio or video signals consist of electrical waves which rise or fall continuously through a range, digital signals are very different. They switch rapidly between two states representing bits, 1 and 0. This switching creates something close to what we call a "square wave," a waveform which, instead of being sloped like a sine wave, has sharp, sudden transitions. Although a digital signal can be said to have a "frequency" at the rate at which it switches, electrically, a square wave of a given frequency is equivalent to a sine wave at that frequency accompanied by an infinite series of odd harmonics, that is, odd multiples of that frequency. If all of these harmonics aren't faithfully carried through the cable (and, in fact, it's physically impossible to carry all of them faithfully), then the "shoulders" of the digital square wave begin to round off. The more the wave becomes rounded, the higher the possibility of bit errors becomes. The device at the load end will, of course, reconstitute the digital information from this somewhat rounded wave, but as the rounding becomes worse and worse, eventually there comes a point where the errors are too severe to be corrected, and the signal can no longer be reconstituted. The best defense against the problem is, of course, a cable of the right impedance: for digital video or SPDIF digital audio, this means a 75 ohm cable like Belden 1694A; for AES/EBU balanced digital audio, this means a 110 ohm cable like Belden 1800F.
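
The "rounding" effect is easy to see numerically by rebuilding a square wave from a limited number of its odd harmonics; the fewer harmonics that survive the cable, the softer the edges become (an illustrative numpy sketch, not a cable simulation):

import numpy as np

t = np.linspace(0.0, 2.0, 2000, endpoint=False)   # two periods of a 1 Hz square wave

def bandlimited_square(t, f0, n_harmonics):
    # Fourier series of a square wave: odd harmonics only, each scaled by 1/n.
    wave = np.zeros_like(t)
    for k in range(n_harmonics):
        n = 2 * k + 1
        wave += np.sin(2 * np.pi * n * f0 * t) / n
    return 4.0 / np.pi * wave

crisp   = bandlimited_square(t, 1.0, 100)   # many harmonics: sharp shoulders
rounded = bandlimited_square(t, 1.0, 3)     # only the 1st, 3rd and 5th: visibly rounded edges
# Plotting the two waveforms shows the loss of the sharp transitions described above.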

Fortunately, for most applications, it's very easy to choose the right impedance cable. All common analog video standards and HD-SDI use 75 ohm cable, as do coaxial (unbalanced) digital audio connections. If you have balanced AES/EBU type digital audio lines, you'll want 110 ohm AES/EBU cable. There are a few others you may bump into, however, and it's good to be aware of them. RG-58 coax, such as is often used for CB or ham radio antenna lines, is 50 ohms and not suitable for video use. Twin-lead cable, the two wires separated by a band of insulation that used to be the most common way to hook up a TV antenna, is a 300 ohm balanced line.

Connectors have impedance, too, and should be matched to the cable and equipment. Many BNC connectors, especially on older cables, are 50ohm types, and so it’s important to be sure that you’re using 75 ohm BNCs when connecting video lines. RCA connectors can’t quite meet the 75 ohm impedance standard because their physical dimensions just aren’t fully compatible with it, but there are RCA plugs which are designed for the best possible impedance match with 75 ohm cable and equipment.

Coming in Impedance Part II: Speakers, Amplifiers and Nominal Impedance

 

Augmented vision system aids the sight impaired
 

While there is a common misconception that a blind person has no sight whatsoever, a person can be considered blind if they must use alternative methods to engage in any activity that persons with normal vision would perform using their eyes. Indeed, while approximately 30,000 people in the UK have no sight whatsoever, others suffering from such diseases as age related macular degeneration (AMD), glaucoma and retinopathy may also be considered legally blind.

Each of these diseases affects a person's ability to visualize the world around them in different ways. While AMD gradually destroys the macula, the part of the eye that provides sharp, central vision needed for seeing objects clearly, people with glaucoma slowly lose their peripheral vision. Those with damage to the retina of the eye (retinopathy) lose their sight across their field of view (Figure 1).

Figure 1: Age related macular degeneration gradually destroys the part of the eye that provides sharp, central vision while people with glaucoma slowly lose their peripheral vision. Those with damage to the retina (retinopathy) lose their sight across their field of view.

In the past, people suffering from these conditions relied on guide dogs and white canes to assist them. Today, however, researchers are aiming to develop novel methods such as retinal implants and augmented vision systems to increase the visual acuity of blind people.

Retinal implants

In diseases such as retinitis pigmentosa (RP), a large part of the retina remains functional even after a person loses their sight. Although the rods and cones that convert light into nerve signals are destroyed by this disease, most of the retinal nerve tissue remains intact. Because of this, sub-retinal implants can be used to convert light into electrical signals to stimulate the retinal nerve tissue.

This is the concept behind a number of retinal implants such as the one developed by Retina Implant AG (Reutlingen, Germany; http://retina-implant.de). The company's Alpha IMS is a sub-retinal implant consisting of an IC approximately 3 × 3 mm in size and 70 μm thick, with 1,500 individual pixels. Each of these pixels contains a light-sensitive photodiode, a logarithmic differential amplifier, and a 50 × 50-μm iridium electrode through which the electrical stimuli are delivered to the retina (Figure 2).

Figure 2: The Alpha IMS is a sub-retinal implant consisting of an IC approximately 3 × 3 mm in size and 70 μm thick, with 1,500 individual pixels.

The IC is positioned on a thin, flexible circuit board of polyimide that is connected to a thin, coiled cable that passes through the orbital cavity to the bone of the temple and from there to a point behind the ear, where it is connected to a power supply. Electrical energy is received inductively from the outside through a second coil located on the skin.

Because the electrical excitation invariably involves a number of cells, patients with these implants cannot visualize objects sharply, but are nevertheless able to locate light sources and localize physical objects.

Assisted vision

While such retinal prosthetics somewhat alleviate conditions such as retinitis pigmentosa, Dr. Stephen Hicks at the Nuffield Department of Clinical Neurosciences at the University of Oxford (Oxford, England; http://bit.ly/1624Mkc) developed a system that looks at ways of improving the image presented on such low-resolution implanted devices (http://bit.ly/151VEMv).

The system, known as a Retinal Prosthetic Simulator, first captures an image of a scene and then, by preprocessing the image, extracts salient features within it, which are then down-sampled and presented visually to the user through a pair of eyeglasses. In the initial prototype, these visual images were acquired using a 752 × 480 pixel Firefly MV FireWire camera from Point Grey (Richmond, BC, Canada; www.ptgrey.com) at a frame rate of 60 fps. Attached to a Z800 3Dvisor head-mounted display from eMagin (Bellevue, WA; www.emagin.com), the camera acquires the scene in front of the subject, and these images are transferred to a PC over the FireWire interface.

At the same time, horizontal and vertical eye positions were acquired at 250 Hz using a JAZZ-novo eye tracker from Ober Consulting (Poznan, Poland; www.ober-consulting.com) worn under the head-mounted display. Data from the eye tracker was then used in conjunction with the visual data to determine the image displayed on the head-mounted display. Before such data could be displayed, however, the captured image was first converted to greyscale and down-sampled to a 30 x 30 image using LabView Vision from National Instruments (Austin, TX, USA; www.ni.com). In this way, features in the image appear sharper and thus more apparent to the user.
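
The down-sampling step itself is straightforward. The original system used LabView Vision; the OpenCV sketch below is only a rough, assumed equivalent of that greyscale-and-reduce stage, not the researchers' code:

import cv2   # OpenCV, used here as a stand-in for the LabView Vision toolchain

def reduce_for_prosthesis(frame_bgr, size=(30, 30)):
    # Convert a camera frame to greyscale, then shrink it to the coarse
    # 30 x 30 grid that the simulator presents to the wearer.
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.resize(grey, size, interpolation=cv2.INTER_AREA)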

Despite this, Dr. Hicks and his colleagues realized that the system was rather cumbersome and difficult to use. Furthermore, the system was incapable of providing any depth perception, making it difficult for the user to judge how far away any specific object or obstacle might be. Because of this, it was decided that a system that provided the ability to locate nearby objects and allow obstacle avoidance at walking speed would be more useful.

Depth perception

To develop this system, an infra-red depth camera from Primesense (Tel Aviv, Israel; www.primesense.com) was mounted on the bridge of a head-mounted display, and data from the camera was transferred over a USB interface to a portable computer. By projecting an array of infrared points onto nearby surfaces and analyzing the returned displacement data, a depth map can then be created that indicates the position of nearby objects.

Using LabView Vision from NI, this depth map was then transformed into a viewable image by converting the distance information into brightness so that closer objects appear brighter. This information was then down-sampled and displayed on an array of 24 × 8 color RGB LEDs (Figure 3). To diffuse these individual points of light, a semi-transparent film was inserted in front of the LEDs.

Figure 3: Depth map is transformed into a viewable image by converting the distance information into brightness so that closer objects appear brighter.
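
The depth-to-brightness mapping can be sketched in a few lines of numpy. The clipping range and the averaging down to a 24 × 8 grid are assumptions about how "closer appears brighter" might be implemented, not the researchers' actual processing:

import numpy as np

def depth_to_led_frame(depth_mm, near=500.0, far=4000.0, grid=(8, 24)):
    # Clip the depth map to a working range, invert it so nearer surfaces map
    # to higher brightness, then average each region down to one LED cell.
    d = np.clip(depth_mm.astype(float), near, far)
    brightness = (far - d) / (far - near)          # 1.0 = nearest, 0.0 = farthest
    rows, cols = grid
    row_blocks = np.array_split(brightness, rows, axis=0)
    return np.array([[block.mean() for block in np.array_split(r, cols, axis=1)]
                     for r in row_blocks])         # shape (8, 24), values 0..1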

In evaluations of the design's effectiveness, sight-impaired individuals were shown to quickly and accurately detect people at distances of up to 4 m. After a short period of using the system, almost all could recognize nearby objects such as walls, chairs and their own limbs.

Residual information

While the system proved effective, Dr. Hicks and his colleagues recognized that one limitation of an opaque display is that it prevents the wearer from using any of their remaining sight to see the outside world. In age related macular degeneration, for example, a person may still perceive information from their peripheral visual field. Combining this information with stereo depth information would then allow a visually impaired person to more accurately determine information about their surroundings.

Because of this, it was decided to replace the LED display used in the previous design with a transparent organic light-emitting diode (OLED) display from 4D Systems (New South Wales, Australia; www.4dsystems.com.au). Once again, an infra-red depth camera from Primesense was used to capture depth information, and distance was transformed into brightness so that closer objects appear brighter. By displaying the generated image on two transparent 320 x 240 pixel displays mounted in a pair of glasses, however, the user is presented with both depth information and a visual image (Figure 4).

Figure 4: Dr. Stephen Hicks demonstrates his latest visual prosthetic eyeglasses that use transparent displays to present the user with depth information and a visual image.

Augmenting the capability of the display is just one area where such systems will be improved. Indeed, one of the leading desires of the visually impaired is the wish to read. With this in mind, Dr. Hicks and his colleagues have also demonstrated systems using Point Grey cameras and text-to-speech synthesis software from Ivona Software (Gdynia, Poland; www.ivona.com) that can translate on-line text into speech.

At present such visual aids for the sight impaired are still bulky, requiring host computer processing and 3D camera peripherals. As such systems evolve, however, it may soon be possible to integrate such peripherals into a single pair of glasses that can be produced at low cost. With such advances, the hundreds of thousands of people who now suffer from diseases such as age related macular degeneration (AMD), glaucoma and retinopathy will be given expanded visual capabilities.

 

Courtesy of Vision Systems Design

 

How to use dynamic range to compare cameras
 
 

The capability of a machine vision camera to capture the details of a scene is defined by several parameters, with dynamic range at the top of the list.  High contrast images require a high dynamic range.  One problem is that there are different ways to calculate dynamic range, which makes it difficult to compare cameras and sensors on paper.  Also, dynamic range and the signal to noise ratio (SNR) are sometimes considered interchangeable for CCD and CMOS image sensors and cameras, which creates further confusion.

Dynamic range is the ratio between the maximum output signal level and the noise floor at minimum signal amplification (the noise floor being the RMS, or root mean square, noise level in a black image).  The noise floor of the camera contains sensor readout noise, camera processing noise and dark current shot noise.  Dynamic range represents the camera's ability to display/reproduce the brightest and darkest portions of the image, and how many gradations lie in between. This is technically intra-scene dynamic range: within one image there may be a portion that is completely black and a portion that is completely saturated.


Dynamic range is expressed in decibels (dB).  The largest possible signal is directly proportional to the full well capacity of the pixel, where the full well capacity is the maximum number of electrons per pixel.  Therefore, dynamic range is the ratio of the full well capacity to the noise floor.

This should be clarified as the "useful full well capacity."  Camera designers cannot always use the full well capacity of the image sensor if they want to maintain linearity; in addition, not all pixels saturate at the same level.  If the full well capacity of the sensor is used instead of the usable full well value, this can produce an artificially high dynamic range specification, and it is something to be aware of.
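
In dB terms, the ratio works out as follows (the full well and noise figures below are illustrative, not taken from any particular sensor):

import math

def dynamic_range_db(useful_full_well_e, noise_floor_e):
    # Ratio of the largest usable signal to the dark-noise floor, expressed in dB.
    return 20 * math.log10(useful_full_well_e / noise_floor_e)

# Example: 20,000 e- usable full well and 10 e- RMS noise in a black image.
print(round(dynamic_range_db(20_000, 10), 1))   # about 66 dB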

Dynamic range is not the same as digitization level: a camera with a 12-bit A/D converter does not necessarily have 12 bits of dynamic range, because the bit depth says nothing about the noise. The causality is reversed; if a camera has 12 bits of dynamic range, then the A/D converter needs to be at least 12 bits as well, and preferably higher.
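
As a rule of thumb, each bit of A/D resolution spans roughly 6 dB, which makes that requirement easy to check (a sketch of the rule of thumb, not a formal specification):

import math

def min_adc_bits(dynamic_range_db):
    # Each ADC bit covers about 6.02 dB (20 * log10(2)), so at least this many
    # bits are needed to digitize the stated dynamic range without losing signal.
    return math.ceil(dynamic_range_db / (20 * math.log10(2)))

print(min_adc_bits(66))   # a 66 dB camera needs at least an 11-bit converter
print(min_adc_bits(72))   # 72 dB needs at least 12 bits, and preferably more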

Dynamic range is also not the same as signal to noise ratio, even though they are both expressed in dB.  The signal to noise ratio is simply the ratio of the signal level to the noise level.  Since the absolute noise level depends on the average charge (shot noise) and on the PRNU (photo response non-uniformity), both of which vary with the signal level, the SNR also depends on the signal level.  Therefore, SNR is the ratio of the average signal level to the RMS noise level.


The SNR for higher signal levels is dominated by shot noise.  The maximum SNR is obtained at full-scale output. Note that this SNR cannot be measured in practice since the noise becomes clipped near full-scale output.
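
Because shot noise grows with the square root of the signal, the best-case SNR near full well is roughly the square root of the full well capacity, which is why the maximum SNR is always well below the dynamic range figure (using the same illustrative numbers as above):

import math

full_well_e = 20_000          # illustrative usable full well
noise_floor_e = 10            # illustrative RMS noise floor

max_snr = math.sqrt(full_well_e)                                # shot-noise-limited SNR at full well
print(round(20 * math.log10(max_snr), 1))                       # about 43 dB maximum SNR
print(round(20 * math.log10(full_well_e / noise_floor_e), 1))   # about 66 dB dynamic range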

Dynamic range can also be measured and calculated from the photon transfer curve if desired.

Dynamic range provides a much more useful indication (compared to SNR) regarding the ability of the camera to provide the desired image details.  When comparing dynamic range values from different cameras, be sure to verify they were measured under the same conditions.

 

Courtesy of Adimec Blog

 


IVS Imaging is a distributor & manufacturer of machine vision cameras, lenses, cabling, monitors, filters, interface boards & more. IVS is your one stop shop for all your vision needs. IVS Imaging is known across the USA for carrying imaging products from leading manufacturers, including Sony Cameras and Accessories, Basler Industrial Cameras, Hitachi Surveillance Cameras, Toshiba Network-based IP Cameras, and Sentech Advanced Digital and OEM cameras. Contact IVS Imaging for all your imaging products, parts, and accessories needs.

Copyright 2017 by IVS imaging • 101 Wrangler Suite 201, Coppell, Texas 75019 - (888) 446-1301