
Scopus ID: 2-s2.0-85041183351


Performance evaluation of progressive caching policy on NDN

Yanuar M.R., Manaf A.

School of Electrical Engineering and Informatics, Bandung Institute of Technology, Bandung, Indonesia

Abstract

© 2017 IEEE. Named Data Networking (NDN) is a promising solution to the explosion of multimedia traffic on the Internet. NDN shifts the paradigm from a host-centric approach to a name-based approach, allowing content to be retrieved solely by its name. To support this, NDN includes caching as one of its main features. Seen as critical to maximizing NDN's potential, caching has attracted a great deal of research. Progressive Caching Policy (PCP) is one of the caching policies proposed specifically for NDN. Its low complexity makes it one of the policies that can potentially cope with the scalability of the Internet. However, the performance of PCP itself has not yet been evaluated thoroughly. PCP also has undefined variables, which makes a performance comparison with other caching strategies difficult. In this paper, we propose a modification to the PCP algorithm to make the performance comparison fair. We then evaluate the performance of the modified PCP against other caching strategies. We show that our proposed modification of PCP outperforms the other caching strategies in most scenarios, especially those that increase interest aggregation, such as cache size expansion, an increase in λ, or a decrease in α. On the k-ary tree topology, the hit rate gain also grows with the arity and depth of the tree. However, compared to Leave Copy Down (LCD), the increase in cache hit rate is fairly small, at most 1.34% in the scenario with arity 5.
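The abstract describes PCP as a low-complexity, progressive in-network caching policy. As an illustration only, the minimal sketch below shows one plausible hop-count-based progressive caching rule, in which the probability that a router caches a passing Data packet grows as the packet travels farther from the content source. This is a generic interpretation for readers new to the idea, not the authors' exact PCP specification; the function name, the base_prob parameter, and the linear scaling are assumptions.

import random

def progressive_cache_decision(hops_from_source: int, path_length: int,
                               base_prob: float = 0.2) -> bool:
    """Hypothetical progressive caching rule: the caching probability grows
    linearly with the fraction of the delivery path already traversed, so
    routers closer to the consumer are more likely to store a copy."""
    if path_length <= 0:
        return False
    fraction = min(1.0, hops_from_source / path_length)
    prob = base_prob + (1.0 - base_prob) * fraction
    return random.random() < prob

# Example: caching decisions at each of the 5 routers on a delivery path.
if __name__ == "__main__":
    random.seed(1)
    path_length = 5
    for hop in range(1, path_length + 1):
        print(hop, progressive_cache_decision(hop, path_length))

Under this kind of rule, edge routers cache aggressively while routers near the producer rarely do, which is one way a policy can keep per-node complexity low while still spreading copies along the path.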
Author keywords

Cache hit rates, Caching policy, Caching strategy, Multimedia traffic, Name based, Named data networkings, Performance comparison, performance evaluation

Indexed keywords

Caching Strategy, Named Data Networking, performance evaluation, Progressive Caching Policy

DOI

https://doi.org/10.1109/ICAICTA.2017.8090996
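As a note on the scenario parameters mentioned in the abstract: in NDN caching evaluations, λ typically denotes the consumer request rate and α the exponent of a Zipf content-popularity distribution, although the paper may define them differently. The sketch below, with assumed names and default values, generates such a synthetic request workload (Poisson arrivals at rate λ, Zipf(α) popularity) of the kind commonly used to drive this sort of hit-rate comparison.

import random

def zipf_weights(num_contents, alpha):
    """Unnormalized Zipf(alpha) popularity weights for ranks 1..num_contents."""
    return [1.0 / (rank ** alpha) for rank in range(1, num_contents + 1)]

def generate_requests(num_requests, lam, alpha, num_contents=1000, seed=0):
    """Yield (timestamp, content_id) pairs: Poisson(lam) arrivals, Zipf(alpha) popularity."""
    rng = random.Random(seed)
    weights = zipf_weights(num_contents, alpha)
    contents = list(range(num_contents))
    t = 0.0
    for _ in range(num_requests):
        t += rng.expovariate(lam)  # exponential inter-arrival times form a Poisson process
        cid = rng.choices(contents, weights=weights, k=1)[0]
        yield t, cid

# Example: raising lam packs requests closer in time; raising alpha skews them
# toward a few popular contents.
if __name__ == "__main__":
    for t, cid in generate_requests(5, lam=2.0, alpha=0.8):
        print(f"{t:.3f}s -> content {cid}")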