N10-007 Identify the basic elements of unified communication technologies


Long-distance calls are expensive, in part because it is costly to maintain phone lines and employ the technicians who keep those phones ringing. Voice over IP (VoIP) provides a cheaper alternative for phone service. VoIP carries ordinary voice conversations in IP packets across the Internet, avoiding the high cost of traditional phone calls by reusing existing Internet infrastructure; monthly phone bills and expensive long-distance charges can be reduced or eliminated. But how does it work?

Like every other type of network communication, VoIP requires protocols to make the magic happen. For VoIP, one such protocol is Session Initiation Protocol (SIP), an application-layer protocol designed to establish and maintain multimedia sessions, such as Internet telephony calls. SIP can create communication sessions for features such as audio/videoconferencing, online gaming, and person-to-person conversations over the Internet. SIP does not operate alone; it runs over TCP or UDP as a transport protocol. Remember, TCP provides guaranteed delivery of data packets, whereas UDP is a fire-and-forget transport protocol.
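
To make this concrete, the sketch below (in Python, with hypothetical addresses and a made-up Call-ID) assembles the minimal header set a SIP INVITE request carries when it initiates a session. It illustrates the shape of the message only; a real call would also negotiate media with an SDP body.

```python
def build_invite(caller: str, callee: str, call_id: str) -> str:
    """Assemble a minimal SIP INVITE request (request line + core headers)."""
    lines = [
        f"INVITE sip:{callee} SIP/2.0",        # request line: method, target, version
        f"From: <sip:{caller}>;tag=1928301774",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",                 # uniquely identifies this dialog
        "CSeq: 1 INVITE",
        "Content-Length: 0",                   # no SDP body in this sketch
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

# Hypothetical endpoints for illustration only.
msg = build_invite("alice@example.com", "bob@example.net", "a84b4c76e66710")
print(msg.splitlines()[0])
```

The response to an INVITE (for example, 200 OK) completes session setup; a later BYE request tears the session down.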


The role of the Internet is growing not only in business but also in the world of entertainment. It is used to watch television shows, news broadcasts, videos, and more. Delivering this content smoothly takes work: because streaming media is latency sensitive, protocols that manage timing must be in place on top of the transports TCP and UDP. Three protocols are commonly used: RTP (Real-time Transport Protocol), RTSP (Real Time Streaming Protocol), and RTCP (RTP Control Protocol).
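
The timing information that makes RTP useful for latency-sensitive media lives in its fixed 12-byte header, which carries a sequence number (to detect loss and reordering) and a timestamp (to reconstruct playback timing). A minimal sketch using Python's standard `struct` module, following the header layout defined in RFC 3550:

```python
import struct

def rtp_header(payload_type: int, seq: int, timestamp: int, ssrc: int) -> bytes:
    """Pack the fixed 12-byte RTP header (RFC 3550 layout), no CSRC list."""
    vpxcc = 0x80                 # version 2, no padding, no extension, 0 CSRCs
    m_pt = payload_type & 0x7F   # marker bit clear, 7-bit payload type
    # ! = network byte order; B, B, H, I, I = 1+1+2+4+4 = 12 bytes
    return struct.pack("!BBHII", vpxcc, m_pt, seq, timestamp, ssrc)

# Payload type 0 is PCMU audio; other values here are illustrative.
hdr = rtp_header(payload_type=0, seq=1, timestamp=160, ssrc=0x12345678)
print(len(hdr))
```

The receiver uses the sequence number and timestamp to reorder packets and smooth out network jitter before playback, while RTCP reports statistics about the stream out of band.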


For today's networks, speed is a crucial factor. Networks are congested with traffic and bandwidth is limited, so policies are needed to ensure that bandwidth is used optimally. These policies and strategies are collectively known as Quality of Service (QoS). The following are part of QoS:

  • Traffic Shaping
  • Load Balancing
  • Caching

Quality of Service (QoS) is the term used to describe the strategies involved in managing and prioritizing the flow of traffic. With QoS, administrators can predict and monitor bandwidth use and ensure its availability across the network for the applications that require it. These applications can be broken down as:

  • Latency sensitive: These applications are demanding when it comes to bandwidth, because lag time directly affects their effectiveness.
  • Latency insensitive: Managing latency-insensitive applications is also part of managing bandwidth. Bulk data transfers, such as large file copies, are latency-insensitive transfers.

Bandwidth is a limited resource, and network traffic increases day by day. Because latency-sensitive traffic is demanding when it comes to bandwidth, it is best to prioritize traffic so that time-critical data is delivered on time. QoS plays an important role by ensuring that applications such as videoconferencing do not degrade overall traffic throughput; traffic is queued according to how promptly it must be delivered.

Latency-Sensitive High-Bandwidth Applications

Many applications are highly demanding when it comes to bandwidth. The most common high-bandwidth, latency-sensitive applications are VoIP and video applications.

Traffic Shaping: A QoS strategy that prioritizes transmissions with the aim of reducing latency. It works by regulating the amount of data moving into and out of the network; network policy decides how data is categorized, queued, and directed. Several strategies are used to regulate the data. The common methods are:

  • By application: Traffic is shaped by categorizing it by type and assigning each category a bandwidth limit. For example, traffic can be categorized as FTP, with a rule specifying that no more than 4 Mbps will be dedicated to FTP traffic.
  • Network traffic per user: Traffic shaping can also be done on a per-user basis; that is, bandwidth is allocated among the users. This does not limit what content a user can reach, but it does limit the speed at which the content can be retrieved.
  • Priority queuing: Traffic is queued according to how important it is to the purposes for which the network was set up. For example, on an academic network, use of the network for recreational purposes can be limited.
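
One common way to enforce the per-application or per-user rate limits described above is a token bucket: tokens accumulate at the permitted rate, and a packet may be sent only when enough tokens are available. A minimal sketch in Python (the rate and burst figures are illustrative, not prescriptive):

```python
import time

class TokenBucket:
    """Token-bucket shaper: permit a packet only if enough tokens have accrued."""
    def __init__(self, rate_bps: float, burst: float):
        self.rate = rate_bps        # tokens (bytes) added per second
        self.capacity = burst       # maximum bucket size in bytes
        self.tokens = burst         # start with a full bucket
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        # Refill tokens for the time elapsed since the last check.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False    # caller should queue or drop the packet

# Illustrative numbers: 500 kB/s (~4 Mbps) with a one-packet burst allowance.
bucket = TokenBucket(rate_bps=500_000, burst=1500)
print(bucket.allow(1500))   # first full-size packet fits the initial burst
```

Traffic that exceeds the configured rate is held back (queued) or discarded rather than transmitted, which is exactly the behavior a per-application bandwidth cap requires.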

Load Balancing: As the demands on an organization's servers and systems increase, so does the load on those servers. Load balancing is a strategy for distributing that load among different networked systems; the group of servers that shares the work is known as a server farm. Demand is spread across multiple CPUs, network links, and hard disks, and the results are reduced response times, distributed processing, and optimal resource utilization. Server farms underpin the delivery of many Internet services, and high-performance websites rely on them for scalability, reliability, and low latency.
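
The simplest distribution policy a load balancer can apply is round-robin: each new request goes to the next server in the farm in rotation. A sketch in Python, with hypothetical backend names (a production balancer would also track server health and remove failed backends from the rotation):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute incoming requests across a pool of backend servers."""
    def __init__(self, servers):
        self.pool = cycle(servers)   # endless rotation over the farm

    def next_server(self) -> str:
        return next(self.pool)

# Hypothetical server names; clients would see only one virtual address.
farm = RoundRobinBalancer(["web1", "web2", "web3"])
for request_id in range(4):
    print(request_id, farm.next_server())   # web1, web2, web3, then back to web1
```

Other common policies weight servers by capacity or route each request to the backend with the fewest active connections; round-robin is shown here because it is the baseline the others refine.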

Caching Engines: Caching is important when optimizing network traffic. Proxy servers use it to limit the client requests sent out to the Internet: when a page is requested through the proxy, a copy is kept in the cache area, and when a subsequent request for the same page arrives from the same client or from another client on the same network, the cached copy is served instead of fetching the page again. This substantially reduces the traffic that flows out to the Internet and benefits the whole network. Administrators consider the following when deciding what to cache:

  • What sites are to be added to the cache;
  • How long will the information be stored in the cache;
  • How often is the cached information to be updated;
  • What will be the size of the cached information;
  • What type of content is to be put in the cache;
  • Who is authorized to access the cache?
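
The "how long will the information be stored" decision above is usually expressed as a time-to-live (TTL). A minimal sketch of a TTL-based cache in Python, with a stand-in fetch function in place of a real HTTP request:

```python
import time

class TTLCache:
    """Cache pages with a per-entry time-to-live, as a caching proxy might."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}   # url -> (fetched_at, content)

    def get(self, url, fetch):
        entry = self.store.get(url)
        now = time.monotonic()
        if entry and now - entry[0] < self.ttl:
            return entry[1]           # fresh copy: serve from the cache
        content = fetch(url)          # stale or missing: go to the origin
        self.store[url] = (now, content)
        return content

calls = []
def fake_fetch(url):                  # stands in for a real HTTP request
    calls.append(url)
    return f"<page {url}>"

cache = TTLCache(ttl_seconds=60)
cache.get("example.com/index", fake_fetch)
cache.get("example.com/index", fake_fetch)
print(len(calls))   # origin contacted once; second request served from cache
```

A shorter TTL keeps content fresher at the cost of more outbound requests; a longer TTL saves bandwidth but risks serving stale pages, which is exactly the trade-off behind the "how often to update" question above.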

The advantages of using caching are:

  • Increased performance: Cached information is stored on local systems, closer to the user's system, which ensures faster retrieval of the information.
  • Data availability: There can be situations where the data or applications being accessed are unavailable because of failures. In such situations, the information stored in the cache area remains accessible and proves useful.