The exponentially growing volume of information available today has made it difficult to find relevant information quickly and efficiently. A good extractive text summarizer not only provides the most significant information from a document but also helps the user judge the relevance of that information. The proposed method is a knowledge-based, generic, extractive text summarization technique. Our approach is based on the centrality of sentences in a graphical representation of the document. The graph is constructed using pairwise soft-cosine similarity between sentences, derived from the semantic relations presented in the WordNet lexical database. The eigenvector centrality measure outperforms the weighted degree, betweenness, and closeness centrality measures. The resulting summaries are compared against the gold-standard summaries of BBC news articles from 2004 to 2005 and the DUC 2007 dataset. The ROUGE-1, ROUGE-2, and ROUGE-L metrics are used to evaluate the results, which show that our approach performs better than the LexRank, TextRank, Luhn, and LSA baseline text summarizers.
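Since the paper's WordNet-derived term-similarity matrix is not reproduced in this record, the following is a minimal sketch of the described pipeline, assuming term-frequency sentence vectors and a hypothetical term-similarity matrix S (standing in for the WordNet-based one); the function names are illustrative, not the authors' code. Soft-cosine similarity is soft_cos(a, b) = (a^T S b) / (sqrt(a^T S a) * sqrt(b^T S b)); with S = I it reduces to ordinary cosine similarity.

```python
import numpy as np

def soft_cosine(a, b, S):
    """Soft-cosine similarity: (a^T S b) / (sqrt(a^T S a) * sqrt(b^T S b))."""
    num = a @ S @ b
    den = np.sqrt(a @ S @ a) * np.sqrt(b @ S @ b)
    return num / den if den else 0.0

def summarize(sentence_vectors, S, k=3, iters=100):
    """Rank sentences by eigenvector centrality of the soft-cosine graph
    and return the indices of the top-k sentences in document order."""
    n = len(sentence_vectors)
    # Weighted adjacency matrix: pairwise soft-cosine between sentences.
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            w = soft_cosine(sentence_vectors[i], sentence_vectors[j], S)
            A[i, j] = A[j, i] = w  # the similarity graph is symmetric
    # Eigenvector centrality via power iteration on the weighted graph.
    c = np.ones(n) / n
    for _ in range(iters):
        c = A @ c
        norm = np.linalg.norm(c)
        if norm == 0:
            break
        c /= norm
    # Top-k most central sentences, restored to document order.
    return sorted(np.argsort(c)[::-1][:k])
```

Power iteration is used here only as a simple stand-in for computing the principal eigenvector; any eigensolver over the weighted adjacency matrix would serve.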
Automatic Text Summarization using Soft-Cosine Similarity and Centrality Measures
05.11.2020
452403 bytes
Conference Paper
Electronic Resource
English
Zonal centrality measures and the neighborhood effect
Online Contents | 2010
Sequence-based centrality measures in maritime transportation networks
Wiley | 2020
An evaluation of centrality measures used in cluster analysis
American Institute of Physics | 2014