Contact at mumbai.academics@gmail.com or 8097636691

Wednesday, 31 January 2018

Cooperative Caching in Wireless P2P Networks: Design, Implementation, and Evaluation(2010)


Abstract
Some recent studies have shown that cooperative caching can improve system performance in wireless P2P networks such as ad hoc networks and mesh networks. However, all of these studies are at a very high level, leaving many design and implementation issues unanswered. In this paper, we present our design and implementation of cooperative caching in wireless P2P networks and propose solutions to find the best place to cache the data. We propose a novel asymmetric cooperative cache approach, where data requests are transmitted to the cache layer on every node, but data replies are only transmitted to the cache layer at the intermediate nodes that need to cache the data. This solution not only reduces the overhead of copying data between user space and kernel space, but also allows data pipelining to reduce the end-to-end delay. We also study the effects of different MAC layers, such as 802.11-based ad hoc networks and multi-interface, multichannel-based mesh networks, on the performance of cooperative caching. Our results show that the asymmetric approach outperforms the symmetric approach in traditional 802.11-based ad hoc networks by removing most of the processing overhead. In mesh networks, the asymmetric approach can significantly reduce the data access delay compared to the symmetric approach due to data pipelining.
Existing System
Although cooperative caching has been implemented by many researchers, these implementations are in the Web environment, and all of them are at the system level. As a result, none of them deals with multihop routing problems or addresses the on-demand nature of ad hoc routing protocols. To realize the benefit of cooperative caching, intermediate nodes along the routing path need to check every passing-by packet to see if the cached data match the data request. This requirement cannot be satisfied by existing ad hoc routing protocols.
Proposed System
In this project, we present our design and implementation of cooperative cache in wireless P2P networks. Through real implementations, we identify important design issues and propose an asymmetric approach to reduce the overhead of copying data between the user space and the kernel space, and hence to reduce the data processing delay.
The proposed algorithm takes the caching overhead into account and adapts the cache node selection strategy to maximize the caching benefit on different MAC layers. Our results show that the asymmetric approach outperforms the symmetric approach in traditional 802.11-based ad hoc networks by removing most of the processing overhead.
Algorithm Used
A greedy cache placement algorithm.
Algorithm Details
The proposed algorithm takes the caching overhead into account and adapts the cache node selection strategy to maximize the caching benefit on different MAC layers. Our results show that the asymmetric approach outperforms the symmetric approach in traditional 802.11-based ad hoc networks by removing most of the processing overhead. In mesh networks, the asymmetric approach can significantly reduce the data access delay compared to the symmetric approach due to data pipelining.
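The greedy placement idea above can be sketched in Java. This is a simplified illustration, not the paper's exact algorithm: the benefit model (expected request rate times hops saved) and the fixed per-node caching cost are assumptions made for the sketch.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of greedy cache placement: walk the forwarding path
// and select a node for caching whenever the estimated benefit (hops
// avoided for expected future requests) outweighs the caching overhead.
public class GreedyCachePlacement {

    // hopsToServer[i] = hops from path node i to the data server;
    // accessFreq[i]   = expected future request rate seen at node i;
    // cacheCost       = assumed fixed overhead of caching at one node.
    public static List<Integer> selectCachingNodes(int[] hopsToServer,
                                                   double[] accessFreq,
                                                   double cacheCost) {
        List<Integer> cacheList = new ArrayList<>();
        for (int i = 0; i < hopsToServer.length; i++) {
            // Hops saved per unit time if node i serves requests from cache.
            double benefit = accessFreq[i] * hopsToServer[i];
            if (benefit > cacheCost) {
                cacheList.add(i);
            }
        }
        return cacheList;
    }
}
```

On different MAC layers, the same structure applies but `cacheCost` would change: a node with per-hop processing overhead (802.11 ad hoc) is cheaper to cache at than one that would break a multichannel pipeline (mesh).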
Modules:-
1. Cooperative Caching Module:
Many routing algorithms, such as AODV and DSR (Dynamic Source Routing), provide the hop count information between the source and destination. Caching the data path for each data item reduces bandwidth and power consumption because nodes can obtain the data using fewer hops. However, mapping data items to caching nodes increases the routing overhead.
2. Cache and routing module:
There is currently no de facto routing protocol for wireless P2P networks. Implementing cooperative caching at the network layer requires the cache and routing modules to be tightly coupled, and the routing module has to be modified to add caching functionality. However, integrating cooperative caching with different routing protocols would involve a tremendous amount of work.
3. Asymmetric Approach Module:
Our asymmetric caching approach has three phases:
Phase 1: Forwarding the request message.
After a request message is generated by the application, it is passed down to the cache layer. To send the request message to the next hop, the cache layer wraps the original request message with a new destination address, which is the next hop toward the data server (the real destination). Here, we assume that the cache layer can access the routing table and find the next hop toward the data server. This can be easily accomplished if the routing protocol is based on DSR or AODV. In this way, the packet is received and processed hop by hop by all nodes on the path from the requester to the data server.
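The hop-by-hop re-wrapping in Phase 1 can be sketched as follows. This is an illustration only: the `Map`-based routing tables stand in for the tables maintained by AODV or DSR, and all names are assumptions rather than the real kernel-level interfaces.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch of Phase 1: the request is re-wrapped at every hop,
// so the cache layer on each intermediate node sees and processes it.
public class RequestPath {

    // nextHop.get(node).get(server) = next hop from `node` toward `server`,
    // a stand-in for the routing table filled in by AODV or DSR.
    public static List<String> forwardRequest(String requester, String server,
                                              Map<String, Map<String, String>> nextHop) {
        List<String> path = new ArrayList<>();
        String current = requester;
        while (!current.equals(server)) {
            path.add(current);
            // The cache layer wraps the request with a new destination:
            // the next hop toward the data server.
            current = nextHop.get(current).get(server);
        }
        path.add(server);
        return path;
    }
}
```

Because every node on the returned path unwraps and re-wraps the request, each one gets a chance to answer it from its own cache before forwarding.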
Phase 2: Determining the caching nodes.
When a request message reaches the data server (the real data server or an intermediate node that has cached the requested data), the cache manager decides the caching nodes on the forwarding path. Then, the ids of these caching nodes are added to a list called the Cache List, which is encapsulated in the cache layer header.
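A minimal sketch of the cache layer header carrying the Cache List might look like the following. The class and field names are assumptions for illustration; the real header is built inside the kernel-level implementation.

```java
import java.util.Collections;
import java.util.List;

// Illustrative cache-layer header: the data server fills in the Cache List,
// and each node on the return path checks whether it should cache the data.
public class CacheLayerHeader {
    private final String dataId;
    private final List<String> cacheList; // ids of nodes selected to cache

    public CacheLayerHeader(String dataId, List<String> cacheList) {
        this.dataId = dataId;
        this.cacheList = Collections.unmodifiableList(cacheList);
    }

    public String getDataId() { return dataId; }

    public boolean shouldCacheAt(String nodeId) {
        return cacheList.contains(nodeId);
    }
}
```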
Phase 3: Forwarding the data reply.
Unlike the data request, the data reply only needs to be processed by the nodes that will cache the data. To deliver the data only to those nodes, tunneling techniques are used: the data reply is encapsulated by the cache manager and tunneled only to the nodes appearing in the Cache List.
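The tunneled delivery in Phase 3 can be sketched as selecting the sequence of tunnel endpoints from the return path. This is a schematic illustration (names assumed); in the real system the reply is encapsulated and forwarded by the kernel without touching the cache layer on non-caching nodes.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the asymmetric reply delivery: the reply surfaces to the cache
// layer only at Cache-List nodes and at the final requester; all other
// nodes simply forward the encapsulated packet.
public class ReplyTunnel {

    public static List<String> tunnelEndpoints(List<String> pathToRequester,
                                               List<String> cacheList) {
        List<String> endpoints = new ArrayList<>();
        for (String node : pathToRequester) {
            if (cacheList.contains(node)) {
                endpoints.add(node); // reply is unwrapped and cached here
            }
        }
        String requester = pathToRequester.get(pathToRequester.size() - 1);
        if (!endpoints.contains(requester)) {
            endpoints.add(requester); // the requester always receives the data
        }
        return endpoints;
    }
}
```

Skipping the cache layer at non-caching nodes is what removes most of the per-hop copying between user space and kernel space in the asymmetric approach.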

4. Cache routing simulation module:

There are two routing protocols used:

·        Ad hoc On-demand Distance Vector (AODV) routing protocol

·        Dynamic Source Routing (DSR)

The data server needs to measure the benefit of caching a data item on an intermediate node and use it to decide whether to cache the data. After an intermediate node Ni caches a data item, Ni can serve later requests using the cached data instead of forwarding them to the data server, saving the communication overhead between Ni and the data server. However, caching data at Ni increases the delay of returning the data to the current requester, because it adds extra processing delay at Ni, and the data reassembly at Ni may break possible pipelines.
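The trade-off above can be captured by a simple decision rule. The formula here is an assumption made for illustration, not the paper's exact metric: the saving is the hops avoided for expected future requests, and the cost is the extra processing and reassembly delay at Ni expressed in the same units.

```java
// Sketch of the data server's caching decision for an intermediate node Ni:
// cache only if the expected communication saving exceeds the extra delay
// caching imposes on the current reply.
public class CachingBenefit {

    // expectedRequests:    future requests Ni is expected to serve from cache;
    // hopsSavedPerRequest: hops between Ni and the data server;
    // extraDelayCost:      processing/reassembly delay added at Ni,
    //                      in the same cost units as the hop savings.
    public static boolean shouldCache(double expectedRequests,
                                      int hopsSavedPerRequest,
                                      double extraDelayCost) {
        double saving = expectedRequests * hopsSavedPerRequest;
        return saving > extraDelayCost;
    }
}
```

In a mesh network, `extraDelayCost` would also have to account for a broken data pipeline, which is why the cache node selection must adapt to the MAC layer.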
System Requirements:
Hardware Requirements:
PROCESSOR        :  PENTIUM IV 2.6 GHz
RAM              :  512 MB DDR RAM
MONITOR          :  15" COLOR
HARD DISK        :  20 GB
FLOPPY DRIVE     :  1.44 MB
CD DRIVE         :  LG 52X
KEYBOARD         :  STANDARD 102 KEYS
MOUSE            :  3 BUTTONS
Software Requirements:
Front End              :  Java, JFC (Swing)
Backend                :  MS-Access (Data Base)
Tools Used            :  Eclipse 3.3
Operating System:  Windows XP/7
