Distributed Multimedia Systems

James Maxlow

March 24th, 2003

Introduction

This paper covers the basic issues involved in the structure and implementation of distributed multimedia systems. As our computing world becomes ever more connected and our demands for audio, video, and interactive media experiences grow, it is essential to build systems capable of satisfying those demands, so that users can be presented with reliable, robust, high-quality media presentations. Distributed multimedia systems are the manifestation of the effort to meet those needs.

History

Distributed computing systems have been an active area of research since the earliest days of the ARPANET. As networks became widespread, robust, and reliable, the field gained recognition from major organizations such as the IEEE Computer Society, which launched the International Conference on Distributed Computing Systems (ICDCS) in the USA in 1979. Advances in the early to mid 1980s, such as the invention of the compact disc to hold massive amounts of data, the full integration of TCP/IP into the Internet structure, and powerful gains in computing power, brought multimedia processing and applications to the forefront of the computer research community. Although attempts at creating successful video phones had been going on since the 1950s, networked computing brought renewed interest in video conferencing in the 1980s. These areas of interest converged in the late 1980s, when work began in earnest on distributed multimedia system research. Another major conference series, the Interactive Distributed Multimedia Systems Workshop (now known as Multimedia Interactive Protocols and Systems), opened in Germany in 1992. The current focus on the quality-of-service manager architecture has been the mainstay of research and commercial products for well over a decade, as represented by the ten-year-old International Workshop on Quality of Service. The twenty-plus-year-old ICDCS is now associated with a specific multimedia conference series, the International Workshop on Multimedia Network Systems and Applications, at which the latest cutting-edge research is presented.

An important concept to note is the window of scarcity, theorized in the early 1990s. It refers to the availability of computing resources for a given task at any point in time. Applied to a distributed multimedia system, it observes that, while available computing resources increase over time, different computing tasks become affordable at different times. For example, on current computing systems, network file access is no longer a large processing task that we need to worry about. The processing of audio data requires far more processing power, but at this point in time desktop systems can handle it easily. The processing of video, however, is still a significant task on many computing systems. The window of scarcity suggests that maintaining high quality levels in a distributed multimedia system depends on how well our computing systems can handle these distinct processing tasks: the more resources a specific task requires, the fewer are available to the overall system. As processing power increases, more resources become available for these tasks, allowing for higher quality levels even for the most processor-intensive ones.

Goal

The motivation behind the creation of such systems is a simple one: users want multimedia experiences over networks. We have lived with television for more than half a century, and are accustomed to having information delivered to us in audio, video, and text form. We now demand the same from the Internet, in addition to the expectations of high levels of interactivity that the Internet has fostered. Distributed multimedia systems are designed to provide quality multimedia delivery across networks for users to consume. They are necessary because the general Internet structure is not appropriate for accomplishing that task in a reliable manner. It is only with a set of highly specialized protocols and transmission architectures that we can give the users what they want.

Definitions

In order to understand how the various components of a distributed multimedia system interact, we should have clear notions of what its associated terms mean.

Bandwidth refers to the rate at which data can pass through a component, which is generally manifested as a network transmission line or bus. This term can also refer to the data rate needed to support a given sampled-data type, such as CD-audio or HDTV. It is known that audio data generally needs orders of magnitude more bandwidth than text data, and video data generally needs orders of magnitude more bandwidth than audio data.
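
To make the orders-of-magnitude claim concrete, here is a quick back-of-the-envelope calculation in Python. The figures for CD audio and uncompressed standard-definition video are standard; the text rate is an illustrative assumption.

    # Raw (uncompressed) data rates for three media types.
    def bits_per_second(samples_per_sec, bits_per_sample, channels=1):
        """Raw data rate for a sampled-data type."""
        return samples_per_sec * bits_per_sample * channels

    text = 50 * 8                              # ~50 characters/second of text
    cd_audio = bits_per_second(44_100, 16, 2)  # CD audio: 44.1 kHz, 16-bit, stereo
    sd_video = 640 * 480 * 24 * 30             # 640x480, 24-bit color, 30 frames/sec

    print(f"text:     {text:>12,} bits/s")     #          400 bits/s
    print(f"CD audio: {cd_audio:>12,} bits/s") #    1,411,200 bits/s
    print(f"SD video: {sd_video:>12,} bits/s") #  221,184,000 bits/s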

Latency is the amount of time that it takes for one data packet to move through a transmission system from beginning to end. Obviously it is desirable for this to be as low as possible so that utilization of the stream of data packets at the destination can be smooth and regular. A high latency can cause problems in maintaining the time-based synchronicity of the data stream packets at the destination.

Jitter is the variation in latency from packet to packet; it represents the speeding up or slowing down of packet transit times. Rapidly changing latencies present problems for time-based data transmission that must be dealt with by quality of service managers using traffic shaping, scheduling, and other methods.
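
One common way to quantify jitter is the interarrival-jitter estimator from RTP (RFC 3550), sketched below in Python; the packet timestamps are hypothetical.

    def update_jitter(jitter, send_prev, recv_prev, send_curr, recv_curr):
        """Smoothed estimate of the variation in transit time (RFC 3550)."""
        # D is the difference in transit time between consecutive packets.
        d = (recv_curr - send_curr) - (recv_prev - send_prev)
        # Exponential smoothing with gain 1/16, per RFC 3550.
        return jitter + (abs(d) - jitter) / 16

    jitter = 0.0
    packets = [(0.000, 0.050), (0.020, 0.075), (0.040, 0.088)]  # (send, recv) secs
    for (s0, r0), (s1, r1) in zip(packets, packets[1:]):
        jitter = update_jitter(jitter, s0, r0, s1, r1)
    print(f"estimated jitter: {jitter * 1000:.3f} ms")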

Loss rate refers to the fraction of packets that may acceptably be lost per unit of time. It is impossible to guarantee that all packets will be delivered on time and in order, especially over the Internet, so we must be prepared to drop packets that violate those two conditions. The loss rate represents how much data we are willing to do without.

Quality of service (QOS) management is the explicit manipulation of resources from source to destination in order to provide a certain level of quality in the data stream delivered and processed at the destination. Because multimedia data is time-based, QOS management is essential to yielding an experience that meets users' needs and expectations.

Compression is the process by which data is transformed in such a way as to reduce its size for efficiency in transmission or storage. Compression is an integral part of multimedia delivery systems since bandwidth and other resources are finite.

Traffic shaping is the process by which an output buffer is utilized so that a source that is producing data packets at a fluctuating rate can still transmit them at a fixed rate. This helps avoid the flooding of the destination buffer with too many packets in too short a time period, and it also helps keep transmission resources such as the physical network lines from being underutilized.
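
The paper does not name a specific shaping algorithm; the leaky bucket is one standard realization. Below is a minimal Python sketch, with an illustrative rate and buffer size.

    from collections import deque

    class LeakyBucketShaper:
        """Absorbs bursts in an output buffer; drains at a fixed rate."""
        def __init__(self, rate_pps, capacity):
            self.interval = 1.0 / rate_pps  # fixed time between transmissions
            self.capacity = capacity        # output buffer size, in packets
            self.buffer = deque()

        def enqueue(self, packet):
            """Source side: absorb a burst; drop if the buffer is full."""
            if len(self.buffer) >= self.capacity:
                return False                # buffer overflow: packet dropped
            self.buffer.append(packet)
            return True

        def drain(self, start=0.0):
            """Network side: yield (time, packet) pairs at the fixed rate."""
            t = start
            while self.buffer:
                yield t, self.buffer.popleft()
                t += self.interval

    shaper = LeakyBucketShaper(rate_pps=100, capacity=64)
    for i in range(5):                      # a burst of 5 packets arrives at once
        shaper.enqueue(f"pkt-{i}")
    for t, pkt in shaper.drain():
        print(f"{t:.2f}s send {pkt}")       # departures are evenly spaced 10 ms apart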

Admission control is the process by which a QOS manager accepts or rejects new client connection requests based on the available resources versus the required resources. Admission control is essential in preventing the degradation of quality for currently connected clients.

A flow specification is the grouping of all relevant data regarding the transmission of multimedia data across a system. It spells out packet size, loss rate, acceptable latency, etc., for use by the quality of service manager(s). It is essential for creating a system with quality guarantees that rise above the free-for-all of an unstructured Ethernet/Internet implementation.
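
A flow specification might be represented as a simple record. The field set below follows the paper's list (packet size, loss rate, acceptable latency, plus bandwidth and jitter); the exact names, units, and values are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FlowSpec:
        bandwidth_bps: int      # required data rate
        packet_size_bytes: int  # size of each packet
        max_latency_ms: float   # acceptable end-to-end delay
        max_jitter_ms: float    # acceptable variation in latency
        max_loss_rate: float    # acceptable fraction of lost packets

    video_stream = FlowSpec(
        bandwidth_bps=1_500_000,  # e.g. a 1.5 Mbit/s compressed video stream
        packet_size_bytes=1_400,
        max_latency_ms=150.0,
        max_jitter_ms=30.0,
        max_loss_rate=0.01,
    )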

Features and Structure

A distributed multimedia system is a complex series of distinct interactions between clients, quality of service managers, and source components. At the top of the hierarchy sits the main QOS manager, and at each source component a local QOS manager exists. The local QOS managers gather flow specifications from the particular applications at the source components, such as video file servers. These servers indicate in the flow specification how much bandwidth they will need, what loss rate is acceptable, and so on. If the local QOS managers can guarantee that these resources will be available, they forward the details to the main QOS manager. Once the main QOS manager has received information from all of the local QOS managers, it responds to indicate whether it can guarantee all of the needed resources to the set of source components and local QOS managers. The main QOS manager is then ready to accept client requests.
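
A minimal sketch of this startup handshake, assuming bandwidth is the only resource being tracked; the class and variable names are illustrative.

    class LocalQOSManager:
        """Guards the resources of one source component."""
        def __init__(self, name, capacity_bps):
            self.name = name
            self.capacity_bps = capacity_bps
            self.guaranteed = []            # requirements this manager can honor

        def register_source(self, required_bps):
            """Forward a source's requirement only if local capacity allows it."""
            if sum(self.guaranteed) + required_bps > self.capacity_bps:
                return False
            self.guaranteed.append(required_bps)
            return True

    class MainQOSManager:
        """Sits at the top of the hierarchy; ready once all locals report in."""
        def confirm(self, local_managers):
            self.locals = local_managers
            return all(m.guaranteed for m in local_managers)

    video_server = LocalQOSManager("video-fileserver", capacity_bps=100_000_000)
    video_server.register_source(1_500_000)   # the video stream fits locally
    main = MainQOSManager()
    print("accepting clients:", main.confirm([video_server]))  # True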

When a client makes a request of the system, it goes to the main QOS manager. Notice is sent out to the needed local QOS managers to reserve the resources for their respective source components. If all of the resources cannot be reserved for a particular request, the main QOS manager negotiates with the client and with the local QOS managers to see whether lower resource requirements can be agreed upon. If they cannot, the client cannot be served. However, if the resources can eventually be reserved at acceptable levels, the components move into action and the client begins to be served. This acceptance/rejection process is referred to as admission control.
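
A sketch of admission control with negotiation, assuming bandwidth is the only negotiable resource and that the manager offers successively lower quality levels; the 20% step and all numbers are illustrative.

    class ResourcePool:
        """Stands in for a local QOS manager's reservable bandwidth."""
        def __init__(self, capacity_bps):
            self.capacity_bps = capacity_bps
            self.reserved_bps = 0

        def reserve(self, bps):
            if self.reserved_bps + bps > self.capacity_bps:
                return False
            self.reserved_bps += bps
            return True

    def admit(pool, requested_bps, minimum_bps):
        """Reserve the request, or negotiate down toward the client's minimum;
        refuse if even the minimum cannot be served."""
        level = requested_bps
        while level >= minimum_bps:
            if pool.reserve(level):
                return level            # admitted at this quality level
            level = int(level * 0.8)    # offer a lower quality level
        return None                     # refused: would degrade existing clients

    link = ResourcePool(capacity_bps=10_000_000)
    print(admit(link, 6_000_000, 2_000_000))  # 6000000: admitted at full quality
    print(admit(link, 8_000_000, 2_000_000))  # 3276800: negotiated down
    print(admit(link, 8_000_000, 4_000_000))  # None: refused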

If at any time during the client-server interaction the source components need more resources due to varying conditions, the main and local QOS managers negotiate until an agreement for more resources can be reached. If this agreement simply cannot be reached, the main QOS manager must terminate the client connection. In some systems the client may be automatically reconnected when resources become available again. If traffic shaping needs to be done, the main QOS manager will make regular requests of the local QOS managers to initiate changes in the output buffers of the source components. After a client has been successfully served, the main QOS manager frees its associated resources, and informs the local QOS managers to do the same.

Another consideration is that whenever multiple clients are connected to the system, there must be a protocol for determining how each will be served in relation to the others. This scheduling protocol must be set at both the main and local QOS manager levels. Since the clients need to believe that they are being served concurrently rather than consecutively, the service must involve some form of intermingled scheduling. A simple multiplexing algorithm can support this: a single server can alternately send x packets of a stream to client A, then x packets to client B, then x more to client A, and so on, as in the sketch below. If the transmission time is adequate in relation to the data-reconstruction time on the client end, this allows for the illusion that each client is the only client.
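
A sketch of that round-robin multiplexing scheme in Python, interleaving x packets per client per turn; the stream contents and x are illustrative.

    from itertools import cycle

    def multiplex(streams, x):
        """Interleave each client's packets, x at a time, round-robin."""
        queues = {client: list(packets) for client, packets in streams.items()}
        for client in cycle(list(queues)):
            if not queues:
                return                      # every stream fully transmitted
            if client not in queues:
                continue                    # this client's stream is finished
            burst, queues[client] = queues[client][:x], queues[client][x:]
            for packet in burst:
                yield client, packet
            if not queues[client]:
                del queues[client]

    streams = {"A": [f"a{i}" for i in range(5)], "B": [f"b{i}" for i in range(7)]}
    for client, packet in multiplex(streams, x=3):
        print(client, packet)  # a0 a1 a2, b0 b1 b2, a3 a4, b3 b4 b5, b6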

An alternative to this simple scheduling algorithm is a method that gives priority to certain clients. The priority could be based on a defined "importance" of the client's request, on the deadline by which the client needs the data, or on a combination of these factors. The system then assigns each client's share of the resources based on its priority level. As an example, the delivery of real-time video to a remote surgeon would take precedence over a video conferencing session between other hospital employees (given that both were implemented on the same system) and hence would receive more processor time and more network bandwidth.
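
A minimal sketch of priority-based scheduling, combining an importance value with a deadline as described above; the requests and the simple ordering rule are illustrative assumptions.

    import heapq

    def schedule(requests):
        """Serve requests by importance (highest first), then by deadline."""
        # heapq is a min-heap, so importance is negated to pop high values first.
        heap = [(-imp, deadline, name) for name, imp, deadline in requests]
        heapq.heapify(heap)
        while heap:
            _, _, name = heapq.heappop(heap)
            yield name

    requests = [                                # (name, importance, deadline secs)
        ("staff-videoconference", 1, 60.0),
        ("remote-surgery-feed",   9, 0.1),      # real-time video for a surgeon
        ("training-video",        1, 300.0),
    ]
    print(list(schedule(requests)))
    # ['remote-surgery-feed', 'staff-videoconference', 'training-video']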

We can see in distributed multimedia systems that there is a hard limit to the number of clients that can be served at any given time. If there are not enough resources available to be reserved for another client connection, the client is refused. This limit can vary over time due to the fact that each client’s request may require differing resource levels, but it is a limit nonetheless. This is in contrast to a general web server, for example, which may continue to attempt to serve more and more client requests until all client interactions slow to a standstill. The QOS management system guarantees that the addition of clients will not degrade the performance for existing clients. Client requests that would degrade the system are simply refused.

It should be noted that there are two main types of distributed multimedia systems: Internet-based and proprietary. Internet-based systems consist primarily of software components that function as quality of service managers. Proprietary systems, however, can and often do include custom hardware in addition to the software. By involving custom hardware at the source, the destination, and the transmission lines, one can completely avoid the inefficiencies of the Internet and guarantee a higher level of service quality. This is by far the more expensive solution, though, and because of the custom hardware it is applicable only in tightly controlled LAN environments or in environments with dedicated connections between the client and the main QOS manager. This makes such systems impractical for general use. Internet-based systems do not require specialized equipment; they can take advantage of special hardware to enhance service quality, but it is not required, and the quality of service managers simply take note of any specialized resources and adjust their service levels accordingly. Internet-based systems, then, are ideal for layering on top of the current Internet structure so that the millions of existing Internet-connected systems can be leveraged at no extra cost to the end user. This generality, however, often results in lower maximum quality levels.

How to Use

A distributed multimedia system is a combination of hardware and software resources. Save for expensive end-to-end proprietary systems, there is no single "product" to buy; instead, we can build a system by combining separate components in the correct manner. At the source end, we need multiple computers hosting the data. The data needs to be mirrored across these computers so that multiple clients can receive identical streams without degrading performance. One computer, not involved in data storage, should act as the controller of the system by hosting the main QOS manager. The source computers should have high-bandwidth connections to the outgoing transmission line of the system.

On the software side, we need main and local QOS managers. These can be developed internally or purchased as a separate product. The quality of the system will be directly related to the quality of the QOS managers. The main manager should, of course, be connected in such a way as to accept client requests and to negotiate with the local QOS managers on the source computers.

Applications

Just what would a distributed multimedia system be used for? In reference to the Internet, numerous examples are available. Visitors to news websites may now expect live or prerecorded video feeds to be available. Weblogs that cover personal stories from around the world may use webcams to enhance the experiences that their authors relate. Video-on-demand services for set-top boxes or multimedia computing systems are expected to generate enormous public interest. Engineers may need to exchange 3-D modeling data with their colleagues around the world while simultaneously using video conferencing to work together. Remote-controlled exploratory or manipulative robots that explore the ocean floor, effect repairs in nuclear reactors, or take environmental measurements in conditions hazardous to humans may need to transmit enormous amounts of multimedia data to their handlers. Weather stations may need to gather and broadcast real-time data to sites across the planet. Surgeons may need perfect video streams to operate on patients over great distances using remote-controlled tools.

In each of these cases, quality delivery of data is essential. This is true whether the concern is business-related, such as charging customers for access to video feeds, or related to the health and survival of human beings, as in the case of operating on patients remotely. We cannot rely exclusively on standard Internet procedures to satisfy these concerns.

Significant Points

To recap the main components in a distributed multimedia system:

A set of source components makes declarations (flow specifications) to quality of service managers that detail the resources they would need to perform a given application (such as video streaming). Clients contact the main QOS manager and request services. The QOS managers then evaluate the request (admission control) and negotiate to secure the resources needed by the sources to serve the clients at certain quality levels. The services then begin, based on a scheduling algorithm. Dynamic adjustment of reserved resources occurs at the QOS manager level to maintain quality levels under varying conditions. When the services are completed, the resources are freed.

Summary

We can see that if we are to expect certain quality levels from the multimedia streams we receive over the Internet, a specialized resource management system must be put in place. This management system, known as a distributed multimedia system, takes form as the interaction of source components that know what resources they require and quality of service managers that have the ability to negotiate for and reserve these resources. This resource reservation process is the core idea behind the guarantee of quality for any given client-server interaction. The reservation process simultaneously satisfies client needs and prevents degradation of service.