Authors: Yildirim, Savas; Malik, Garima; Cevik, Mucahit; Basar, Ayse
Date available: 2026-04-04
Date issued: 2024
ISBN: 979-833150483-0
DOI: https://doi.org/10.1109/CASCON62161.2024.10837905
Handle: https://hdl.handle.net/11411/10238
Conference: 34th Annual International Conference on Collaborative Advances in Software and Computing, CASCON 2024, 11-13 November 2024, Toronto (conference code: 206187)

Abstract: In the field of requirements engineering (RE), anaphoric ambiguity can degrade the quality of requirements and even threaten the success of a project. If different stakeholders, such as testers or customers, interpret software requirements differently, the system might fail to pass the customer validation stage. Conversely, a robust anaphora resolution model supports the requirements-writing process by accurately indicating pronoun references. In this study, we exploited the power of generative NLP pipelines and compared their performance with the extractive question answering (sequence labeling) technique. We conducted extensive numerical experiments with text-to-text pipelines and compared them with encoder-based models on two public requirements datasets. Our experiments revealed that a sufficiently large T5 model can yield better results than encoder-based models. We utilized methods such as LoRA to effectively address the complexity of training large language models. Our study indicates that the generative approach outperforms classification-based models for anaphora resolution tasks in software requirements texts. © 2024 IEEE.

Language: en
Access rights: info:eu-repo/semantics/closedAccess
Keywords: Ambiguity; Anaphora Resolution; Requirements Engineering; Transformers
Title: Anaphora Resolution in Software Requirements Engineering: A Comparison of Generative NLP Pipelines and Encoder-Based Models
Type: Conference Paper
Scopus ID: 2-s2.0-85217709897