LightBlog
Contact at mumbai.academics@gmail.com or 8097636691/9323040215

Friday, 23 February 2018

Latency Equalization as a New Network Service Primitive (2012)



Abstract
Multiparty interactive network applications such as teleconferencing, network gaming, and online trading are gaining popularity. In addition to end-to-end latency bounds, these applications require that the delay difference among multiple clients of the service is minimized for a good interactive experience. We propose a Latency EQualization (LEQ) service, which equalizes the perceived latency for all clients participating in an interactive network application. To effectively implement the proposed LEQ service, network support is essential. The LEQ architecture uses a few routers in the network as hubs to redirect packets of interactive applications along paths with similar end-to-end delay. We first formulate the hub selection problem, prove its NP-hardness, and provide a greedy algorithm to solve it. Through extensive simulations, we show that our LEQ architecture significantly reduces delay difference under different optimization criteria that allow or do not allow compromising the per-user end-to-end delay. Our LEQ service is incrementally deployable in today’s networks, requiring just software modifications to edge routers.
Introduction
The increased availability of broadband access has spawned a new generation of netizens. Today, consumers use the network as an interactive medium for multimedia communications and entertainment. This growing consumer space has led to several new network applications in the business and entertainment sectors. End-to-end delay requirements can be achieved by traffic engineering and other QoS techniques. However, these approaches are insufficient to address the needs of multiparty interactive network applications that require bounded delay difference across multiple clients to improve interactivity. Previous work on improving online interactive application experiences considered application-based solutions, either at the client or server side, to achieve equalized delay. Client-side solutions are hard to implement because they require that all clients exchange latency information with all other clients. They are also vulnerable to cheating. Server-side techniques rely on the server to estimate network delay, which is not sufficiently accurate in some scenarios. Moreover, this delay estimation places computational and memory overhead on the application servers, which limits the number of clients the server can support. Previous studies have investigated different interactive applications, and they show the need for network support to reduce delay difference, since the prime source of the delay difference is the network. The importance of reducing latency imbalances is further emphasized when scaling to wide geographical areas, as witnessed by a press release from AT&T.
We design and implement network-based Latency EQualization (LEQ), which is a service that Internet service providers (ISPs) can provide for various interactive network applications. Compared to application-based latency equalization solutions, ISPs have more detailed knowledge of current network traffic and congestion, and greater access to network resources and routing control. Therefore, ISPs can better support latency equalization routing for a large number of players with varying delays to the application servers. This support can significantly improve game experience, leading to longer play time and thus larger revenue streams.
Problem Description
Application-based latency equalization falls short on both sides. Client-side solutions are hard to implement because they require that all clients exchange latency information with all other clients, and they are vulnerable to cheating. Server-side techniques rely on the server to estimate network delay, which is not sufficiently accurate in some scenarios. Due to the problems of client-side solutions, several delay compensation schemes have been implemented at the server side. However, while introducing CPU and memory overhead on the server, they still do not completely meet the requirements of fairness and interactivity. For example, with the bucket synchronization mechanism, the received packets are buffered in a bucket, and the server calculations are delayed until the end of each bucket cycle. The performance of this method is highly sensitive to the bucket (time window) size used, and there is a tradeoff between interactivity and the memory and computation overhead on the server. In the time warp synchronization scheme, snapshots of the game state are taken before the execution of each event. When there are late events, the game state is rolled back to one of the previous snapshots, and the game is reexecuted with the new events.
This scheme does not scale well for fast-paced, high-action games because taking snapshots on every event requires both fast computation and large amounts of fast memory, which is expensive. In other work, a game-independent application was placed at the server to equalize delay differences by constantly measuring network delays and adjusting players' total delays by adding artificial lag. However, experiments in [12] suggest that using server-based round-trip-time measurements to design latency compensation across players fails in the presence of asymmetric latencies.
Existing System
Prior work on improving online interactive application experiences considered application-based solutions, either at the client or server side, to achieve equalized delay. Client-side solutions are hard to implement because they require that all clients exchange latency information with all other clients. They are also vulnerable to cheating. Server-side techniques rely on the server to estimate network delay, which is not sufficiently accurate in some scenarios. This delay estimation places computational and memory overhead on the application servers, which limits the number of clients the server can support. Previous studies have investigated different interactive applications, and they show the need for network support to reduce delay difference, since the prime source of the delay difference is the network. The importance of reducing latency imbalances is further emphasized when scaling to wide geographical areas, as witnessed by a press release from AT&T.
Proposed System
We design and implement network-based Latency EQualization (LEQ), which is a service that Internet service providers (ISPs) can provide for various interactive network applications. Compared to application-based latency equalization solutions, ISPs have more detailed knowledge of current network traffic and congestion, and greater access to network resources and routing control. Therefore, ISPs can better support latency equalization routing for a large number of players with varying delays to the application servers. This support can significantly improve game experience, leading to longer play time and thus larger revenue streams. Our network-based LEQ service provides equalized-latency paths between the clients and servers by redirecting interactive application traffic from different clients along paths that minimize their delay difference. We achieve equalized-latency paths by using a few routers in the network as hubs, and interactive application packets from different clients are redirected through these hubs to the servers. Hubs can also be used to steer packets away from congested links.
Our LEQ architecture provides a flexible routing framework that enables the network provider to implement different delay and delay difference optimization policies in order to meet the requirements of different types of interactive applications.
To achieve LEQ routing, we formulate the hub selection problem, which decides which routers in the network can be used as hubs and the assignment of hubs to different client edge routers to minimize delay difference. We prove that this hub selection problem is NP-hard and inapproximable. Our LEQ routing significantly reduces delay difference in different network settings.
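To make the objective concrete, the following sketch evaluates the delay difference of a candidate hub set on a tiny made-up instance, assuming each client edge router uses its lowest-delay assigned hub. All router names and delay values are illustrative, not taken from the paper, and the brute-force search is feasible only because the example is tiny; as noted above, the general problem is NP-hard.

```python
import itertools

# Hypothetical delays (ms): CLIENT_TO_HUB[c][h] from client edge router c to
# hub h, and HUB_TO_SERVER[h] from hub h to the server edge router.
CLIENT_TO_HUB = {"R1": {"R6": 4, "R7": 3, "R8": 9},
                 "R2": {"R6": 8, "R7": 6, "R8": 5}}
HUB_TO_SERVER = {"R6": 6, "R7": 7, "R8": 5}

def delay_difference(hub_set):
    """Max minus min end-to-end delay when each client uses its best hub in hub_set."""
    best = [min(CLIENT_TO_HUB[c][h] + HUB_TO_SERVER[h] for h in hub_set)
            for c in CLIENT_TO_HUB]
    return max(best) - min(best)

# Brute force over all hub sets of size 2 and keep the one with the
# smallest delay difference.
best_set = min(itertools.combinations(HUB_TO_SERVER, 2), key=delay_difference)
print(best_set, delay_difference(best_set))
```

With these illustrative numbers, choosing hubs R6 and R8 gives both clients a 10 ms path, i.e. a delay difference of zero.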
Advantages:
LEQ is achieved by optimized hub selection and assignment. Each client edge router is assigned to more than one hub, so it has the flexibility to select among its assigned hubs to avoid congestion. Our LEQ routing significantly reduces delay difference in different network settings.
Module List
- Login
- Client Node
- Hub Selection Process
- Server Node
- Latency EQualization (LEQ)
- LEQ Hub Routing
Module Description
Login
In this module, the user enters a username and password to get into the system. Users register themselves at a particular node, so we can easily identify a client node and where it resides.
Client Node
Client traffic from an interactive application enters the provider network through edge routers R1 and R2. The server for the interactive application is connected to the network through edge router R10. Using the LEQ routing architecture, R6 and R7 are chosen as hubs for R1, and R7 and R8 are chosen as hubs for R2. Using redirection through hubs, R1 has two paths to the server edge router R10, R1-R6-R10 and R1-R7-R10, both of which have a delay of 10 ms. R2 has two paths, R2-R7-R10 and R2-R8-R10, whose delay is also 10 ms. Each client edge router is assigned to more than one hub, so it has the flexibility to select among its assigned hubs to avoid congestion.
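The equal-delay property of the redirected paths can be checked with a few lines of code. The per-link delays below are hypothetical values chosen so that the path sums match the 10 ms figures quoted above; they are not taken from the paper's figure.

```python
# Hypothetical link delays (ms) for the example topology described above.
LINK_DELAY = {
    ("R1", "R6"): 4, ("R6", "R10"): 6,   # R1-R6-R10 = 10 ms
    ("R1", "R7"): 3, ("R7", "R10"): 7,   # R1-R7-R10 = 10 ms
    ("R2", "R7"): 3,                      # R2-R7-R10 = 10 ms
    ("R2", "R8"): 5, ("R8", "R10"): 5,   # R2-R8-R10 = 10 ms
}

def path_delay(path):
    """Sum the per-hop delays along a hub path such as R1-R6-R10."""
    return sum(LINK_DELAY[(a, b)] for a, b in zip(path, path[1:]))

paths = {
    "R1": [("R1", "R6", "R10"), ("R1", "R7", "R10")],
    "R2": [("R2", "R7", "R10"), ("R2", "R8", "R10")],
}

delays = {src: [path_delay(p) for p in ps] for src, ps in paths.items()}
print(delays)  # every redirected path has the same 10 ms end-to-end delay
```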
Server Node
R1, R2, … are routers in the network, and R10 is the server-side edge router: traffic for the server enters the provider network through R10. The numbers on the links represent the latency of each link; R6, R7, and R8 are the hubs between R1, R2, and R10. Client edge routers redirect the application packets corresponding to the LEQ service through the hubs to the destined servers. By redirecting through the hubs, application packets from different client edge routers with different delays to the servers are guaranteed to reach the servers within a bounded delay difference, minimizing the average delay difference for packets from different sites to the server through these hubs.
Hub Selection Process
To solve the hub selection problem, we design a simple greedy heuristic algorithm to pick the hubs. The algorithm first sorts, in increasing order, all the delays from each client edge router through each possible hub to its associated servers, and stores them in a sorted array.
One could use source routing to address the problem of latency equalization. Source routing can be used by the sender to choose the path taken by the packet. However, this requires that all clients are aware of the network topology and coordinate with each other to ensure that the delay differences are minimized.
LEQ
LEQ is a network-based service that ISPs can provide for various interactive network applications. Compared to application-based latency equalization solutions, ISPs have more detailed knowledge of current network traffic and congestion, and greater access to network resources and routing control, so they can better support latency equalization routing for a large number of players with varying delays to the application servers. This support can significantly improve game experience, leading to longer play time and thus larger revenue streams.
Network-based LEQ service provides equalized-latency paths between the clients and servers by redirecting interactive application traffic from different clients along paths that minimize their delay difference.
LEQ Hub Routing
The network-based LEQ architecture is implemented using a hub routing approach: using a small number of hubs in the network to redirect application packets, we equalize the delays for interactive applications. To explain the basic LEQ architecture, we consider a single administrative domain scenario and focus on equalizing application traffic delays between the different client edge routers and the server edge routers without considering access delay. Based on the application's LEQ requirements, the application traffic from each client edge router is assigned to a set of hubs. Client edge routers redirect the application packets corresponding to the LEQ service through the hubs to the destined servers. By redirecting through the hubs, application packets from different client edge routers with different delays to the servers are guaranteed to reach the servers within a bounded delay difference.
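The redirection decision at a client edge router can be sketched as follows. This is our own minimal illustration, not the paper's implementation: the port numbers, hub names, and congestion set are all hypothetical, and a real router would classify and tunnel packets in the data plane rather than in Python.

```python
# Minimal sketch of LEQ redirection at a client edge router: packets of
# registered interactive applications are tunneled via one of this router's
# assigned hubs, preferring an uncongested hub.
LEQ_PORTS = {5000, 6000}          # hypothetical interactive application ports
ASSIGNED_HUBS = ["R6", "R7"]      # hubs assigned to this edge router
congested = {"R6"}                # e.g. learned from link-load reports

def next_hop(dst_port):
    """Return the hub to tunnel through, or None for normal routing."""
    if dst_port not in LEQ_PORTS:
        return None               # non-LEQ traffic: default shortest path
    for hub in ASSIGNED_HUBS:     # pick the first uncongested assigned hub
        if hub not in congested:
            return hub
    return ASSIGNED_HUBS[0]       # all hubs congested: fall back to the first

print(next_hop(5000), next_hop(80))
```

Here LEQ traffic on port 5000 is steered to hub R7 (R6 being congested), while ordinary web traffic on port 80 takes the default path, matching the multi-hub flexibility described above.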
Software Requirements
Front End/GUI Tool                           : Microsoft Visual studio 2008
Operating System                               : Windows family
Language                                            : C#.NET
Backend                                              : SQL Server 2005
Hardware Specification
Processor                                            : Pentium dual core
RAM                                                  : 1 GB
Hard Disk Drive                                : 80 GB
