
A Comparative Study of Text Summarization Techniques
This project evaluates six abstractive summarization models (Seq2Seq with Attention, BERTSUMABS, BART, T5, PEGASUS, XLNet) on benchmark datasets (CNN/DailyMail, XSum, Gigaword). Model outputs were scored with ROUGE and BLEU metrics as proxies for fluency, coherence, and content accuracy.
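ROUGE-N, the main metric used in this evaluation, measures the n-gram recall of a candidate summary against a reference summary. As a rough illustration (not the project's actual evaluation code, which would typically use a library such as `rouge-score`), a minimal ROUGE-N recall sketch looks like this:

```python
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    """ROUGE-N recall: overlapping n-grams / total n-grams in the reference."""
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())  # clipped n-gram overlap
    total = sum(ref.values())
    return overlap / total if total else 0.0

# Hypothetical example summaries for illustration
reference = "the cat sat on the mat"
candidate = "the cat lay on the mat"
print(rouge_n(candidate, reference, n=1))  # 5 of 6 reference unigrams match
```

Published results typically report ROUGE-1, ROUGE-2, and ROUGE-L (longest common subsequence); the sketch above covers only the n-gram variants.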