Tuesday, January 27, 2009

Digital Television

Digital Television is a technology that grew out of research in the 1980s aimed at developing high-definition television pictures. While Digital Television offers improved images and CD-quality sound, its main attraction in the media market is its ability to compress multiple separate channels into the same space used to transmit a single analog channel. As a result, the cost of broadcasting is significantly lower.

In contrast to analog television systems, which use a continuously varying voltage to represent the picture, a digital picture is composed of thousands of pixels (picture elements). The digital signal relays numerical information about the location and timing of each pixel as well as its brightness, or luminance, and its color, or chrominance.
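To make the pixel idea concrete, here is a minimal Python sketch (the function name and the full-range rounding are illustrative choices, not part of any broadcast specification) that splits an RGB pixel into its luminance and chrominance parts using the common ITU-R BT.601 weightings:

    def rgb_to_ycbcr(r, g, b):
        """Split an 8-bit RGB pixel into luminance (Y) and two chrominance
        values (Cb, Cr) using the usual ITU-R BT.601 weightings."""
        y = 0.299 * r + 0.587 * g + 0.114 * b
        cb = 128 + 0.564 * (b - y)   # blue-difference chrominance
        cr = 128 + 0.713 * (r - y)   # red-difference chrominance
        return round(y), round(cb), round(cr)

    print(rgb_to_ycbcr(255, 0, 0))   # a pure red pixel: modest Y, Cr well above 128

Separating luminance from chrominance matters for compression, because the eye is far less sensitive to chrominance detail, so the color information can be carried with less data.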

Digital transmissions have two key advantages. If properly implemented, they can result in less noise: a digital receiver must simply determine whether or not a bit is present, as opposed to interpreting the small, continuously varying voltage differences required in analog systems. As a result, a satellite link using compression and a digital decoder/receiver can operate at almost 4 dB below the power level of an analog signal and still achieve a similar or better quality picture. In addition, digital signals are more amenable than analog to the use of very secure and reasonably priced encryption methods.
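The noise advantage can be illustrated with a small, hypothetical Python sketch (the signal levels, decision threshold and noise amplitude are arbitrary assumptions): as long as the noise stays below the decision threshold, every bit is recovered exactly, whereas an analog receiver would reproduce the noisy voltage itself.

    import random

    def detect_bits(tx_bits, noise_amplitude=0.3):
        """Hard-decision reception: each bit is sent as 0 V or 1 V, random noise
        is added, and the receiver only decides which side of 0.5 V it sees."""
        received = []
        for bit in tx_bits:
            sample = bit + random.uniform(-noise_amplitude, noise_amplitude)
            received.append(1 if sample > 0.5 else 0)
        return received

    tx = [1, 0, 1, 1, 0, 0, 1, 0]
    print(detect_bits(tx) == tx)   # True whenever the noise stays under 0.5 V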

Compression methods are used to reduce the data rate of a digital video signal to a fraction of its original value by removing redundancy. Digitally compressed video offers many advantages, including nearly error- and ghost-free transmission and a dramatic increase in the video capacity of existing satellite communication channels.

Since it takes a large amount of information, and hence a large bandwidth, just to send a television signal, one of the most useful manipulations of the signal is to use a low-data-rate “repeat” message for those portions of the picture that are repeated or redundant and are thus predictable. This is, in essence, how compression reduces the data required for television images.
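A toy Python sketch of this idea (the frame values and helper names are invented for illustration): only the pixels that differ from the previous frame are transmitted, and the receiver predicts the rest from what it already has.

    def frame_delta(previous, current):
        """Keep only the pixels that changed since the previous frame."""
        return {i: p for i, (old, p) in enumerate(zip(previous, current)) if p != old}

    def rebuild(previous, delta):
        """Receiver side: start from the previous frame and apply the changes."""
        frame = list(previous)
        for i, p in delta.items():
            frame[i] = p
        return frame

    prev_frame = [10, 10, 10, 10, 10, 10, 10, 10]      # static background
    curr_frame = [10, 10, 10, 99, 97, 10, 10, 10]      # a small object appears
    delta = frame_delta(prev_frame, curr_frame)
    print(len(delta), "of", len(curr_frame), "pixels sent")   # 2 of 8
    print(rebuild(prev_frame, delta) == curr_frame)           # True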

Redundancy in a television picture can be spatial, temporal or statistical. Spatial redundancy lies within a single frame, where, for example, the entire background consists of identical pixels. Temporal redundancy occurs where the same background is repeated frame after frame. Statistical redundancy refers to the regularly occurring components of the TV signal, such as line and frame sync signals. If this redundancy is eliminated, much less information must be transmitted to recreate the television picture at the receiver.
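Spatial redundancy in particular can be removed with something as simple as run-length coding, sketched below in Python (a generic illustration, not the scheme any particular broadcast standard uses): a scanline of mostly identical background pixels collapses into a handful of (value, count) pairs.

    def run_length_encode(pixels):
        """Collapse runs of identical pixel values into (value, count) pairs."""
        runs = []
        for p in pixels:
            if runs and runs[-1][0] == p:
                runs[-1][1] += 1        # extend the current run
            else:
                runs.append([p, 1])     # start a new run
        return runs

    scanline = [0] * 20 + [255, 255, 255] + [0] * 17    # mostly flat background
    print(run_length_encode(scanline))   # 40 pixels described by just 3 runs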

The excess information capacity created in this process can be used to lower the amount of power required to send the signal, to use less bandwidth, or to send more television programs or other information within the same bandwidth.

Whereas the picture quality of an analog TV/FM system directly depends on the carrier-to-noise ratio in reception, the picture quality of a digitally encoded TV signal is a function of:

•The data rate and the compression algorithm, which determine the intrinsic quality of the service

•The nature of the picture (critical or not)

•The quality of the transmission (C/N in reception), which determines the availability threshold of the service (related to the necessary downlink power from the satellite).

Since the intrinsic quality of a digital TV system is set only by the data rate and the compression algorithm, the transmission path is fully transparent when dimensioned correctly. On the other hand, the lower the data rate, the narrower the bandwidth. It is therefore possible to choose the best trade-off between the picture quality, which is directly related to the data rate and hence to the necessary power for satellite transmission, and the occupied bandwidth.

When compressed digital TV signals are associated with a digital modulation having a limited number of states, such as QPSK with an inner code (convolutional coding / Viterbi decoding) and an outer Reed-Solomon (RS) code, the performance under clear-sky conditions, both in terms of necessary power and occupied bandwidth, exceeds that of analog systems having an equivalent picture quality. This quality remains unchanged until the service is interrupted at some C/N threshold level.
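As a rough back-of-the-envelope sketch of how a compressed bit rate maps onto transponder bandwidth, here is a short Python calculation (the code rates and roll-off below are typical DVB-S style values, assumed purely for illustration):

    def qpsk_occupied_bandwidth(useful_bit_rate_mbps,
                                inner_code_rate=3 / 4,      # convolutional code (assumed)
                                outer_code_rate=188 / 204,  # Reed-Solomon (204,188) (assumed)
                                rolloff=0.35):              # filter roll-off (assumed)
        """Estimate the bandwidth occupied by a QPSK carrier (2 bits per symbol)
        after inner and outer coding overhead is added to the useful bit rate."""
        gross_bit_rate = useful_bit_rate_mbps / (inner_code_rate * outer_code_rate)
        symbol_rate = gross_bit_rate / 2.0              # QPSK: 2 bits per symbol
        return symbol_rate * (1.0 + rolloff)            # occupied bandwidth, MHz

    # e.g. ten programmes of roughly 3.8 Mbit/s multiplexed into ~38 Mbit/s:
    print(round(qpsk_occupied_bandwidth(38.0), 1), "MHz")   # about 37 MHz

The result is on the order of a single satellite transponder, roughly the same bandwidth that previously carried one analog channel, which is the capacity gain described above.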

There are many ways to compress video images. To retain compatibility between future digital consumer electronic devices and the electronic transmission industries, including cable television, the need arose for a single, worldwide video compression standard. In 1988, the Moving Picture Experts Group (MPEG) was formed as a subcommittee of the joint International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) to establish this standard. The activities of the MPEG group have been concentrated in two areas: MPEG-1 and MPEG-2.

The second phase of MPEG, called MPEG-2, deals with high-quality coding of possibly interlaced video, of either standard definition or High Definition Television (HDTV), primarily in the 4 to 15 Mbit/s range. However, it can function at rates up to 100 Mbit/s. A wide range of applications, bit rates, resolutions, signal qualities and services are addressed, including all forms of digital storage media, television (including HDTV), broadcasting and communications. Enhancements introduced in MPEG-2 codecs (a rough sketch of the quantization step appears after the list below) provide:

•New field and frame prediction modes for interlace scanning

•Improved quantization

•New intra-frame variable length codes (VLC)

•Scalable extensions of resolution for compatibility, hierarchical services and robustness, and

•A new two-layer system for multiplexing and transport that provides high- and low-priority video packets/cells.
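To give a feel for the quantization step named in the list, here is a simplified Python sketch of intra-frame block coding (it uses a flat quantization step rather than the weighted matrices real MPEG-2 encoders apply, and the block values are invented):

    import numpy as np
    from scipy.fft import dctn, idctn

    def quantize_block(block, step=16):
        """Transform an 8x8 block to the frequency domain and quantize the
        coefficients with a uniform step; most of them round to zero."""
        coeffs = dctn(block.astype(float), norm="ortho")
        return np.round(coeffs / step)

    def dequantize_block(quantized, step=16):
        """Decoder side: rescale and inverse-transform the coefficients."""
        return idctn(quantized * step, norm="ortho")

    block = np.full((8, 8), 120.0) + np.arange(8)       # a smooth, low-detail block
    q = quantize_block(block)
    print(np.count_nonzero(q), "non-zero coefficients out of 64")
    print(np.allclose(dequantize_block(q), block, atol=8))   # close to the original

The few surviving coefficients are what the variable length codes then pack into the bitstream.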

Digital technology for video broadcast has been proven over the years. One of the most common compression systems in use today is MPEG. This system allows a compression of around two or three to one, while MPEG-2 allows a compression of about ten to one, which permits ten channels of video to fit into the bandwidth previously occupied by a single analog video channel.

As of this writing, MPEG-4 is used primarily for Internet broadcasts, while MP3 (MPEG-1 Audio Layer III) has become the de facto standard for digital music compression.
