Saturday, August 18, 2018

IVSimaging Blog


Keep up to date on new products, as well as product updates.

HD Serial digital interface (SDI) Signal

Serial digital interface (SDI) is a family of digital video interfaces first standardized by SMPTE (the Society of Motion Picture and Television Engineers) in 1989.[1][2] For example, ITU-R BT.656 and SMPTE 259M define digital video interfaces used for broadcast-grade video. A related standard, known as high-definition serial digital interface (HD-SDI), is standardized in SMPTE 292M; this provides a nominal data rate of 1.485 Gbit/s.[3]

Additional SDI standards have been introduced to support increasing video resolutions (HD, UHD and beyond), frame rates, stereoscopic (3D) video, and color depth. Dual link HD-SDI consists of a pair of SMPTE 292M links, standardized by SMPTE 372M in 1998;[2] this provides a nominal 2.970 Gbit/s interface used in applications (such as digital cinema or HDTV 1080P) that require greater fidelity and resolution than standard HDTV can provide. 3G-SDI (standardized in SMPTE 424M) consists of a single 2.970 Gbit/s serial link that allows replacing dual link HD-SDI. As of August 2014, 6G-SDI and 12G-SDI products are already on the market, although their corresponding standards are still in the proposal phase.[4]

These standards are used for transmission of uncompressed, unencrypted digital video signals (optionally including embedded audio and time code) within television facilities; they can also be used for packetized data. Coaxial variants of the specification range in length but are typically less than 300 meters. Fiber optic variants of the specification such as 297M allow for long-distance transmission limited only by maximum fiber length and/or repeaters. SDI and HD-SDI are usually only available in professional video equipment because various licensing agreements restrict the use of unencrypted digital interfaces, such as SDI, prohibiting their use in consumer equipment.

Several professional video and HD-video capable DSLR cameras and all uncompressed video capable consumer cameras use the HDMI interface, often called Clean HDMI. There are various mod kits for existing DVD players and other devices, which allow a user to add a serial digital interface to these devices.


Electrical interface


The various serial digital interface standards all use (one or more) coaxial cables with BNC connectors, with a nominal impedance of 75 ohms. This is the same type of cable used in analog video setups, which potentially makes for easier upgrades (though higher quality cables may be necessary for long runs at the higher bit rates). The specified signal amplitude at the source is 800 mV (±10%) peak-to-peak; far lower voltages may be measured at the receiver owing to attenuation. Using equalisation at the receiver, it is possible to send 270 Mbit/s SDI over 300 metres without use of repeaters, but shorter lengths are preferred. The HD bit rates have a shorter maximum run length, typically 100 meters.[5][6]

Uncompressed digital component signals are transmitted. Data is encoded in NRZI format, and a linear feedback shift register is used to scramble the data to reduce the likelihood that long strings of zeroes or ones will be present on the interface. The interface is self-synchronizing and self-clocking. Framing is done by detection of a special synchronization pattern, which appears on the (unscrambled) serial digital signal as a sequence of ten ones followed by twenty zeroes (twenty ones followed by forty zeroes in HD); this bit pattern is not legal anywhere else within the data payload.
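The scramble-then-NRZI chain described above can be sketched as follows. This is an illustrative model, not a validated SMPTE implementation: the feedback taps correspond to the commonly cited scrambler generator x^9 + x^4 + 1, and the matching feed-forward descrambler shows why the link is self-synchronizing.

```python
def scramble(bits):
    """Self-synchronizing scrambler; feedback taps model x^9 + x^4 + 1."""
    state, out = 0, []
    for b in bits:
        fb = ((state >> 8) ^ (state >> 3)) & 1  # taps for x^9 and x^4
        s = b ^ fb
        out.append(s)
        state = ((state << 1) | s) & 0x1FF      # keep the last 9 scrambled bits
    return out

def descramble(bits):
    """Feed-forward inverse: recovers the input using only received bits."""
    state, out = 0, []
    for s in bits:
        fb = ((state >> 8) ^ (state >> 3)) & 1
        out.append(s ^ fb)
        state = ((state << 1) | s) & 0x1FF
    return out

def nrzi(bits):
    """NRZI encoding: a 1 toggles the line level, a 0 holds it."""
    level, out = 0, []
    for b in bits:
        level ^= b
        out.append(level)
    return out
```

Because the descrambler's shift register is filled from the received bits themselves, it converges to the transmitter's state after at most nine bits, which is what makes the interface self-synchronizing.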















Standard                      | Bit rates                                          | Example video formats
SD-SDI (SMPTE 259M)           | 270 Mbit/s, 360 Mbit/s, 143 Mbit/s, and 177 Mbit/s | 480i, 576i
ED-SDI (SMPTE 344M)           | 540 Mbit/s                                         | 480p, 576p
HD-SDI (SMPTE 292M)           | 1.485 Gbit/s, and 1.485/1.001 Gbit/s               | 720p, 1080i
Dual Link HD-SDI (SMPTE 372M) | 2.970 Gbit/s, and 2.970/1.001 Gbit/s               | 1080p
3G-SDI (SMPTE 424M)           | 2.970 Gbit/s, and 2.970/1.001 Gbit/s               | 1080p
6G-SDI (SMPTE ST-2081*)       | 6 Gbit/s                                           | 2160p30
12G-SDI (SMPTE ST-2082*)      | 12 Gbit/s                                          | 2160p60

* Standard still in the proposal phase as of this writing.


Non-return-to-zero (NRZ) (from Wikipedia)

Figure: a binary signal encoded using rectangular pulse amplitude modulation with a polar non-return-to-zero code.


In telecommunication, a non-return-to-zero (NRZ) line code is a binary code in which ones are represented by one significant condition, usually a positive voltage, while zeros are represented by some other significant condition, usually a negative voltage, with no other neutral or rest condition. The pulses in NRZ have more energy than a return-to-zero (RZ) code, which also has an additional rest state beside the conditions for ones and zeros. NRZ is not inherently a self-clocking signal, so some additional synchronization technique must be used to avoid bit slips; examples of such techniques are a run-length-limited constraint and a parallel synchronization signal.
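The contrast between NRZ and RZ described above can be sketched in a few lines. The ±0.4 V level here is an arbitrary illustration, not a value taken from any particular standard:

```python
def nrz_polar(bits, v=0.4):
    # Polar NRZ: 1 -> +v, 0 -> -v; no neutral/rest level between symbols.
    return [v if b else -v for b in bits]

def rz_polar(bits, v=0.4):
    # Polar RZ for contrast: each bit returns to the 0 V rest level mid-symbol,
    # so every symbol occupies two samples here.
    out = []
    for b in bits:
        out += [v if b else -v, 0.0]
    return out
```

The RZ waveform's forced mid-symbol return to zero is what gives it a transition per bit (aiding clock recovery) at the cost of double the bandwidth, which is exactly the trade-off the text describes.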


For a given data signaling rate, i.e., bit rate, the NRZ code requires only half the baseband bandwidth required by the Manchester code (the passband bandwidth is the same). When used to represent data in an asynchronous communication scheme, the absence of a neutral state requires other mechanisms for bit synchronization when a separate clock signal is not available.

NRZ-Level itself is not a synchronous system but rather an encoding that can be used in either a synchronous or asynchronous transmission environment, that is, with or without an explicit clock signal involved. Because of this, it is not strictly necessary to discuss how the NRZ-Level encoding acts "on a clock edge" or "during a clock cycle" since all transitions happen in the given amount of time representing the actual or implied integral clock cycle. The real question is that of sampling—the high or low state will be received correctly provided the transmission line has stabilized for that bit when the physical line level is sampled at the receiving end.

However, it is helpful to view NRZ transitions as happening on the trailing (falling) clock edge in order to compare NRZ-Level with other encoding methods, such as the Manchester code mentioned above, which requires clock edge information (it is, in fact, the XOR of the clock and the NRZ signal); see also the difference between NRZ-Mark and NRZ-Inverted.


Bit rates

Several bit rates are used for serial digital video; the table of example video formats above lists the nominal rate for each SDI standard.



Data format


In SD and ED applications, the serial data format is defined as 10 bits wide, whereas in HD applications, it is 20 bits wide, divided into two parallel 10-bit datastreams (known as Y and C). The SD datastream is arranged like this:

Cb Y Cr Y' Cb Y Cr Y'


whereas the HD datastreams are arranged like this:



Y Y' Y Y' Y Y' Y Y'


Cb Cr Cb Cr Cb Cr Cb Cr


For all serial digital interfaces (excluding the obsolete composite encodings), the native color encoding is 4:2:2 YCbCr format. The luminance channel (Y) is encoded at full bandwidth (13.5 MHz in 270 Mbit/s SD, ~75 MHz in HD), and the two chrominance channels (Cb and Cr) are subsampled horizontally and encoded at half bandwidth (6.75 MHz or 37.5 MHz). The Y, Cr, and Cb samples are co-sited (acquired at the same instant in time), and the Y' sample is acquired at the instant halfway between two adjacent Y samples.
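The SD multiplex described above (Cb Y Cr Y' repeating, with chroma at half the luma rate) can be sketched as follows; the numeric sample values are arbitrary placeholders, and this is only a model of the word ordering, not of a full SDI line:

```python
def interleave_sd(y, cb, cr):
    """Multiplex 4:2:2 samples into the SD word order: Cb Y Cr Y' ...

    y holds luma samples at full rate; cb and cr hold chroma samples
    at half the luma rate, co-sited with the even-numbered Y samples.
    """
    assert len(y) == 2 * len(cb) == 2 * len(cr), "4:2:2 requires 2x luma rate"
    out = []
    for i in range(len(cb)):
        out += [cb[i], y[2 * i], cr[i], y[2 * i + 1]]
    return out
```

In HD the same samples would instead travel as two parallel streams (Y Y' Y Y' ... and Cb Cr Cb Cr ...), so no interleaving into a single word stream takes place.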

In the above, Y refers to luminance samples, and C to chrominance samples. Cr and Cb further refer to the red and blue "color difference" channels; see Component Video for more information. This section only discusses the native color encoding of SDI; other color encodings are possible by treating the interface as a generic 10-bit data channel. The use of other colorimetry encodings, and the conversion to and from RGB colorspace, is discussed below.

Video payload (as well as ancillary data payload) may use any 10-bit word in the range 4 to 1,019 (0x004 to 0x3FB) inclusive; the values 0–3 and 1,020–1,023 (0x3FC to 0x3FF) are reserved and may not appear anywhere in the payload. These reserved words have two purposes: they are used both for synchronization packets and for ancillary data headers.

Synchronization packets


A synchronization packet (commonly known as the timing reference signal or TRS) occurs immediately before the first active sample on every line, and immediately after the last active sample (and before the start of the horizontal blanking region). The synchronization packet consists of four 10-bit words; the first three words are always the same: 0x3FF, 0, 0. The fourth consists of 3 flag bits, along with an error correcting code. As a result, there are 8 different synchronization packets possible.
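The TRS structure lends itself to a short sketch. The XYZ bit layout below (MSB fixed to one, F/V/H flags in bits 8–6, four XOR-derived protection bits, two zero LSBs) follows the common description of the SMPTE 259M/292M-family interfaces; treat it as illustrative and verify against the standard before use.

```python
def xyz_word(f, v, h):
    """Build the fourth TRS word from the F, V, and H flag bits (each 0 or 1)."""
    # Protection bits form an error-correcting code over the three flags.
    p3, p2, p1, p0 = v ^ h, f ^ h, f ^ v, f ^ v ^ h
    return ((1 << 9) | (f << 8) | (v << 7) | (h << 6)
            | (p3 << 5) | (p2 << 4) | (p1 << 3) | (p0 << 2))

def trs(f, v, h):
    """Full synchronization packet: 0x3FF, 0x000, 0x000, then the XYZ word."""
    return [0x3FF, 0x000, 0x000, xyz_word(f, v, h)]
```

With three flag bits there are exactly eight distinct XYZ words, matching the eight possible synchronization packets mentioned above; for example, the familiar SAV value for an active line in field 1 is 0x200 and the corresponding EAV is 0x274.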

In the HD-SDI and dual link interfaces, synchronization packets must occur simultaneously in both the Y and C datastreams. (Some delay between the two cables in a dual link interface is permissible; equipment which supports dual link is expected to buffer the leading link in order to allow the other link to catch up). In SD-SDI and enhanced definition interfaces, there is only one datastream, and thus only one synchronization packet at a time. Other than the issue of how many packets appear, their format is the same in all versions of the serial-digital interface.

The flag bits found in the fourth word (commonly known as the XYZ word) are known as H, F, and V. The H bit indicates the start of horizontal blanking; synchronization packets immediately preceding the horizontal blanking region must have H set to one. Such packets are commonly referred to as End of Active Video, or EAV packets. Likewise, the packet appearing immediately before the start of the active video has H set to 0; this is the Start of Active Video or SAV packet.

Likewise, the V bit is used to indicate the start of the vertical blanking region; an EAV packet with V=1 indicates the following line (lines are deemed to start at EAV) is part of the vertical interval, an EAV packet with V=0 indicates the following line is part of the active picture.

The F bit is used in interlaced and segmented-frame formats to indicate whether the line comes from the first or second field (or segment). In progressive scan formats, the F bit is always set to zero.

Line counter and CRC


In the high definition serial digital interface (and in dual-link HD), additional check words are provided to increase the robustness of the interface. In these formats, the four samples immediately following the EAV packets (but not the SAV packets) contain a cyclic redundancy check field, and a line count indicator. The CRC field provides a CRC of the preceding line (CRCs are computed independently for the Y and C streams), and can be used to detect bit errors in the interface. The line count field indicates the line number of the current line.
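The per-line CRC can be sketched as below. The generator x^18 + x^5 + x^4 + 1 and the LSB-first bit order are taken from common descriptions of SMPTE 292M; this is a sketch under those assumptions, and real equipment should be checked against the standard's test patterns.

```python
def crc18(words):
    """CRC over a line's 10-bit words, one CRC each for the Y and C streams.

    Models the HD-SDI line CRC: generator x^18 + x^5 + x^4 + 1,
    initial value 0, data fed least-significant bit first.
    """
    crc = 0
    for w in words:
        for i in range(10):                 # each word is 10 bits, LSB first
            fb = ((w >> i) & 1) ^ (crc & 1)
            crc >>= 1
            if fb:
                crc ^= 0x23000              # reflected taps for x^18 + x^5 + x^4 + 1
    return crc
```

A receiver recomputes this over the received line and compares it with the transmitted CRC field; any mismatch flags bit errors on the interface.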

The CRC and line counts are not provided in the SD and ED interfaces. Instead, a special ancillary data packet known as an EDH packet may be optionally used to provide a CRC check on the data.

Line and sample numbering


Each sample within a given datastream is assigned a unique line and sample number. In all formats, the first sample immediately following the SAV packet is assigned sample number 0; the next sample is sample 1; all the way up to the XYZ word in the following SAV packet. In SD interfaces, where there is only one datastream, the 0th sample is a Cb sample; the 1st sample a Y sample, the 2nd sample a Cr sample, and the third sample is the Y' sample; the pattern repeats from there. In HD interfaces, each datastream has its own sample numbering—so the 0th sample of the Y datastream is the Y sample, the next sample the Y' sample, etc. Likewise, the first sample in the C datastream is Cb, followed by Cr, followed by Cb again.
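The repeating SD component pattern described above can be captured in a tiny helper (illustrative only):

```python
SD_PATTERN = ("Cb", "Y", "Cr", "Y'")

def sd_component(sample_number):
    # Which component a given SD sample number carries; the Cb Y Cr Y'
    # pattern repeats every four samples, starting at sample 0.
    return SD_PATTERN[sample_number % 4]
```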

Lines are numbered sequentially, starting from 1, up to the number of lines per frame of the indicated format (typically 525, 625, 750, or 1125 (Sony HDVS)). Determination of line 1 is somewhat arbitrary; however it is unambiguously specified by the relevant standards. In 525-line systems, the first line of vertical blank is line 1, whereas in other interlaced systems (625 and 1125-line), the first line after the F bit transitions to zero is line 1.

Note that lines are deemed to start at EAV, whereas sample zero is the sample following SAV. This produces the somewhat confusing result that the first sample in a given line of 1080i video is sample number 1920 (the first EAV sample in that format), and the line ends at the following sample 1919 (the last active sample in that format). Note that this behavior differs somewhat from analog video interfaces, where the line transition is deemed to occur at the sync pulse, which occurs roughly halfway through the horizontal blanking region.

Author: Thomas Stewart


Distortion

The term distortion is often applied interchangeably with reduced image quality. Distortion is an individual aberration that does not technically reduce the information in the image; while most aberrations actually mix information together to create image blur, distortion simply misplaces information geometrically. This means that distortion can actually be calculated or mapped out of an image, whereas information from other aberrations is essentially lost in the image and cannot easily be recreated. Please note that in extremely high distortion environments, some information and detail can be lost, either because resolution changes with magnification or because too much information is crowded onto a single pixel.

Distortion is a monochromatic optical aberration that describes how the magnification in an image changes across the field of view at a fixed working distance; this is critically important in precision machine vision and gauging applications. Distortion is distinct from parallax, which is a change in magnification (field of view) with working distance (more on parallax is provided in the section on telecentricity in The Advantages of Telecentricity). It is important to keep in mind that distortion varies with wavelength, as shown in Figure 1, and that when calibrating distortion out of a machine vision system the wavelength of the illumination needs to be taken into account. Curves like the one in Figure 1 are very helpful in determining how to calibrate out distortion.

As with other aberrations, distortion is determined by the optical design of the lens. Lenses with larger fields of view will generally exhibit greater amounts of distortion. Distortion is a third-order aberration that, for simple lenses, increases with the third power of the field height; this means that larger fields of view (a result of low magnification or short focal length) are more susceptible to distortion than smaller fields of view (high magnification or long focal length). The wide fields of view achieved by short focal length lenses should be weighed against the aberrations (such as distortion) they introduce into the system. On the other hand, telecentric lenses typically have very little distortion: a consequence of the way that they function. It is also important to note that designing a lens for minimal distortion can decrease the maximum achievable resolution. In order to minimize distortion while maintaining high resolution, the complexity of the system must be increased by adding elements to the design or by utilizing more complex optical glasses.

Distortion Plot

Figure 1: Distortion plot showing the variance of distortion with respect to wavelength.

How is Distortion Specified?

Distortion is typically specified as a percentage of the field height. Typically, ±2 to 3% distortion is unnoticed in a vision system if measurement algorithms are not in use. In simple lenses, there are two main types of distortion: positive, barrel distortion, where points in the field of view appear too close to the center; and negative, pincushion distortion, where the points are too far away. Barrel and pincushion refer to the shape a rectangular field will take when subjected to the two distortion types, as shown in Figure 2.

Distortion can be calculated simply by relating the Actual Distance (AD) to the Predicted Distance (PD) of the image using Equation 1. This is done by using a pattern such as the dot target shown in Figure 3.

Distortion (%) = (AD − PD) / PD × 100%    (1)

It is important to note that while distortion generally runs negative or positive in a lens, it is not necessarily linear in its manifestation across the image for a multi-element assembly. Additionally, as wavelength changes, so does the level of distortion. Finally, distortion can change with changes in working distance. Ultimately, it is important to individually consider each lens that will be used for a specific application in order to guarantee the highest level of accuracy when looking to remove distortion from a system.
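Equation 1 reduces to a one-liner; the dot distances in the example are made-up illustrative numbers, not measurements from any real target:

```python
def distortion_pct(ad, pd):
    # Equation 1: percent distortion = (AD - PD) / PD * 100, where AD is the
    # actual (imaged) distance of a dot from the image center and PD is its
    # predicted (undistorted) distance.
    return (ad - pd) / pd * 100.0
```

For instance, a dot predicted at 10.0 mm from center that actually images at 9.8 mm gives −2% distortion: the point sits too close to the center, i.e. barrel distortion.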

Positive and Negative Distortion

Figure 2: An illustration of positive and negative distortion.

Calibrated Target vs Distortion Pattern

Figure 3: Calibrated target (red circles) vs. imaged (black dots) dot distortion pattern.

Example of Distortion Curves

Figure 4 shows negative, or barrel, distortion in a 35mm lens system. In this specific example, all of the wavelengths analyzed carry almost identical levels of distortion, thus wavelength-related issues are not present. In Figure 5, an interesting set of distortion characteristics can be seen: first, there is separation in the level of distortion for the different wavelengths, and second, both negative and positive distortion is present in this lens. Distortion of this nature is referred to as wave, or moustache, distortion. This is often seen in lenses that are designed for very low levels of distortion, such as those designed for measurement and gauging applications. In this scenario, calibrating the system so that distortion is removed can require special consideration for applications where different wavelengths are used.

Negative Distortion in a Lens

Figure 4: Negative, or barrel, distortion in a lens.

Wave Distortion in a Lens

Figure 5: Wave, or moustache, distortion in a lens.

Geometric Distortion vs TV Distortion: An Important Difference

In lens datasheets, distortion is usually specified in one of two ways: radial geometric distortion or EIAJ TV distortion. Geometric distortion describes the distance between where points appear in the distorted image and where they would be in a perfect system. In practice, this can be measured using a distortion dot target. The difference between the distance from the center of the target to any dot in the field of view and the distance from the center of the image to the same, now misplaced dot (shown in Figure 3), provides the radial distortion percentage calculated with Equation 1.

The measurement of TV distortion is specified by an EIAJ imaging standard, and is determined by imaging a square target that fills the vertical field of view. The difference in height between the corners and the center edge of the square is used to calculate the TV distortion with Equation 2; this describes the apparent straightness of a line which appears at the edge of the image, which is essentially the geometric distortion at a single field point. By only specifying distortion at one point in the field, it is possible to misrepresent a non-zero distortion lens as having 0% distortion. In Figure 5, a 0% intercept can be found for any of the wavelengths shown. However, when the full image circle is considered, it is obvious that the lens has non-zero distortion. An example of how TV distortion can be found is shown in Figure 6.

TV Distortion (%) = (ΔH / H) × 100%    (2)

As shown in Figure 5, in real-world, compound imaging lens assemblies, distortion is not necessarily monotonic and can change sign across the field of view, which is why a radial distortion plot is preferable to the single TV distortion value. Due to the way it is specified, the TV distortion value can be much lower than the maximum geometric distortion value of the same lens, so it is important to be aware of which type of distortion is being specified when choosing the most appropriate lens for an application.
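Under the single-field-point definition above, the TV distortion calculation can be sketched as follows; this assumes Equation 2 takes the form ΔH/H × 100, so check the exact normalization in the standard before relying on it:

```python
def tv_distortion_pct(corner_height, center_height):
    # Assumed form of Equation 2: TV distortion = (delta H / H) * 100, where
    # delta H is the height difference between the corner and the center edge
    # of the imaged square, and H is the center-edge height.
    return (corner_height - center_height) / center_height * 100.0
```

A square whose imaged corners sit 2% higher than its center edge would report +2% (pincushion-like) TV distortion, even if the radial distortion elsewhere in the field is far larger.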

Barrel and Pincushion Distortion

Figure 6: TV distortion with both barrel and pincushion distortion.

Keystone Distortion

In addition to the previous distortion types mentioned, which are inherent to the optical design of a lens, improper system alignment can also result in keystone distortion, which is a manifestation of parallax (shown in Figure 7a and 7b).

When calibrating an imaging system against distortion, keystone distortion should be considered in addition to radial geometric distortion. Although distortion is often thought of as a cosmetic aberration, it should be carefully considered against other system specifications when choosing the right lens. In addition to the potential for a loss in image information, algorithmic distortion correction takes additional processing time and power, which may not be acceptable in high speed or embedded applications.
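Algorithmic correction of radial geometric distortion usually starts from a polynomial model fitted during calibration; the one-term model below is a common textbook form, and k1 is a hypothetical coefficient, not one from any specific lens:

```python
def undistort_point(xd, yd, k1, cx=0.0, cy=0.0):
    """Map a distorted image point back toward its undistorted position.

    One-term radial model (illustrative): the point is scaled away from the
    distortion center (cx, cy) by a factor 1 + k1 * r^2, where r is its
    distance from the center. A positive k1 pushes points outward,
    counteracting barrel distortion; a negative k1 counteracts pincushion.
    """
    dx, dy = xd - cx, yd - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2
    return cx + dx * scale, cy + dy * scale
```

Applying a mapping like this to every pixel is exactly the extra per-frame processing cost the paragraph above warns about for high-speed or embedded systems.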

Keystone Distortion.

Figure 7: Examples of keystone distortion.

Copyright 2014, Edmund Optics Inc.

What’s the future for USB 3 in machine vision?

Machine vision cameras with an Ethernet interface accounted for half of world revenues in 2014, according to a report recently published by IHS. The research for this new edition was carried out during the first half of 2015 and estimated that the interfaces with the next largest camera revenues were Camera Link and Camera Link HS combined, accounting for 18.9%; and USB (USB 2 and USB 3 combined) with 15.7%.

What is surprising is that, despite the promotion and anticipated market growth of USB 3 cameras, they are forecast to account for only a quarter of USB camera revenues by 2019. USB camera revenues are then projected to account for only 16.3% of the total.

IHS concludes that users will only be motivated to upgrade to USB 3 if a faster speed is necessary for their application. In theory, this is an increase in data rate from 480 Mbit/s to 5 Gbit/s; although this rate is not always achieved, there is invariably a substantial increase. However, factors that favour USB 2 rather than USB 3 include the longer permissible cable length, the lower power requirement, and the abundance of USB 2 devices and computers with only USB 2 connectivity. The switch from USB 2 to USB 3 will take time.

During the research, some manufacturers commented they were receiving more enquiries about cameras with interfaces for the more general industrial communication technologies: e.g. PROFINET, EtherNet/IP and EtherCAT. IHS believes this suggests a trend to more users considering machine vision as an integral part of the automation system rather than an add-on inspection tool.

Source: IHS Inc. - Market Insight

ace with Sony's IMX174 or IMX249 Sensor – Which Camera is Right for your Application?

Both the ace models with Sony's IMX174 and Sony's IMX249 offer the latest global shutter technology and outstanding image quality. Nevertheless, there are significant differences between the models.

The ace models with Sony's IMX174 CMOS sensor help you achieve not just excellent image quality, but also high speeds up to 155 fps at a resolution of 2.3 MP.

The ace models with Sony's IMX249 are especially well suited for applications that do not require high-speed cameras, but which nevertheless need the excellent image quality of the CMOS sensors from Sony's Pregius series. With frame rates of up to 40 fps and 2.3 megapixels resolution, you'll have the right camera for your application at a lower cost.

An overview of our ace models with Sony's IMX174 CMOS sensor

An overview of our ace models with Sony's IMX249 CMOS sensor

You can find an overview of all our ace models here. You can also filter through the large selection of models based on resolution, frame rate or sensor.

Talk with our Sales Team at IVS Imaging for more information.

Clearwater system at Infocomm 2015

Introducing the World's First PTZPC, and the World's First Truly Cordless Conference Room

With the launch of the Clearwater PTZPC, VDO360 has launched the first truly Cordless Conference Room experience for video conferencing and collaboration.

The Clearwater PTZPC turns the camera and computer into one single unit - no more worrying about USB cables.

Edgewater, Maryland (PRWEB) June 04, 2015

The Clearwater PTZPC is a completely new way to outfit your visual communications space.

By uniting the award-winning Compass camera with the latest Intel 5th generation i5 PC, VDO360 has solved one of the most vexing issues facing communications space design: Where and how to run the cabling.


This breakthrough device is not only smaller than most USB PTZ cameras, it’s priced less than most of them as well.

The Clearwater System consists of a custom designed Compass camera integrated directly to a fully configured Intel 5th generation i5 PC and mount. Included is VDO360’s new Bluetooth speakerphone, "The Crystal," one of our new Flare IR preset recall buttons, a wireless keyboard and mouse, and the IR remote for camera control.

With the use of WiDi (Wireless Display) capabilities, the Clearwater system can be configured to be completely wireless, with the exception of power to the PTZPC.

With VDO360’s Clearwater System, the days of Cordless Conference Rooms are finally here.

jfields |   Marketing Manager

101 Wrangler, Suite 201 p.  888.446.1301 - ext. 136
Coppell, Texas 75019 f.  469.635.6822 c.  214.679.6326

IVS Imaging is a distributor & manufacturer of machine vision cameras, lenses, cabling, monitors, filters, interface boards & more. IVS is your one stop shop for all your vision needs. IVS Imaging is known across the USA for carrying imaging products from leading manufacturers, including Sony Cameras and Accessories, Basler Industrial Cameras, Hitachi Surveillance Cameras, Toshiba Network-based IP Cameras, and Sentech Advanced Digital and OEM cameras. Contact IVS Imaging for all your imaging products, parts, and accessories needs.

Copyright 2018 by IVS imaging • 101 Wrangler Suite 201, Coppell, Texas 75019 - (888) 446-1301