Abstract
One of the important problems in content distribution networks is how to maintain consistency between the content at replicas and the origin server, especially for documents that change dynamically. In this paper, we propose a new hybrid consistency algorithm that generates less traffic than either the traditional propagation approach or the invalidation approach. The basic scheme is extended to the case in which requests are not evenly distributed over all replicas. We propose a hierarchical framework that allows replicas at different levels to make decisions based on the statistics they collect. Extensive simulations examine how the traffic generated and the freshness ratio at replicas are affected by various parameters; we experiment with a wide range of request rates, update frequencies, and numbers of replicas. The results show that our approach can take advantage of content distribution networks and significantly reduce the traffic generated.
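To make the propagation-versus-invalidation trade-off concrete, the sketch below shows one way a hybrid policy could pick the cheaper option at each update, based on estimated traffic. This is a minimal sketch under assumed parameters: the function `choose_policy`, its cost model, and the notion of an "expected number of requesting replicas" are illustrative assumptions, not the paper's actual formulation.

```python
# A minimal, hypothetical sketch of a hybrid consistency decision.
# All names and the cost model are illustrative assumptions; the
# paper's actual algorithm and collected statistics are not shown.

def choose_policy(num_replicas: int,
                  object_size: int,
                  control_msg_size: int,
                  expected_requesting_replicas: float) -> str:
    """Pick the update-dissemination policy that generates less traffic.

    expected_requesting_replicas: estimated number of replicas that
    will receive at least one request before the next update (assumed
    to be derived from request statistics collected at replicas).
    """
    # Propagation: push the new version to every replica immediately.
    propagation_traffic = num_replicas * object_size

    # Invalidation: send a small invalidation message to every replica;
    # only replicas that are actually asked for the document before the
    # next update fetch the new version from the origin server.
    invalidation_traffic = (num_replicas * control_msg_size
                            + expected_requesting_replicas * object_size)

    return "propagate" if propagation_traffic < invalidation_traffic else "invalidate"


# Example: 100 replicas, a 10 KB document, 50-byte control messages.
# With few requesting replicas, invalidation wins; when nearly every
# replica expects a request before the next update, propagation wins.
print(choose_policy(100, 10_000, 50, 20))   # -> "invalidate"
print(choose_policy(100, 10_000, 50, 100))  # -> "propagate"
```

The example illustrates why neither pure policy dominates: invalidation wastes round trips when documents are requested everywhere between updates, while propagation wastes bandwidth when most replicas never serve the new version.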
| Original language | English |
|---|---|
| Pages (from-to) | 916-926 |
| Number of pages | 11 |
| Journal | Journal of Parallel and Distributed Computing |
| Volume | 63 |
| Issue number | 10 |
| DOIs | |
| State | Published - Oct 2003 |
Bibliographical note
Funding Information: This work was supported in part by the National Science Foundation under Grant CCR-0204304. The author would like to thank Dr. Jim Griffioen for his discussions of the subject and the anonymous reviewers for their comments on the paper.
Funding
| Funders | Funder number |
|---|---|
| National Science Foundation (NSF) | CCR-0204304 |
Keywords
- Data replication
- Invalidation
- Propagation
- Web caching
ASJC Scopus subject areas
- Software
- Theoretical Computer Science
- Hardware and Architecture
- Computer Networks and Communications
- Artificial Intelligence