Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs

Published in AAAI Responsible Language Model (ReLM) Workshop, 2024

Recommended citation: @misc{tytarenko2024breaking, title={Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs}, author={Stepan Tytarenko and Mohammad Ruhul Amin}, year={2024}, eprint={2401.16638}, archivePrefix={arXiv}, primaryClass={cs.CL}}

In this paper, we present a framework that maintains the generalizability of pre-trained LLMs and improves downstream-task performance by utilizing task-specific context attribution. A minimal sketch of the general idea is shown below.
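The sketch below illustrates the general recipe only: a frozen pre-trained encoder whose hidden states are projected onto a small, trainable task-specific space, with only that head being trained. All names, dimensions, and pooling choices here are illustrative assumptions, not the paper's actual implementation; see the paper for the exact formulation of task-specific context attribution.

```python
import torch
import torch.nn as nn

class ContextAttributionHead(nn.Module):
    """Illustrative task-specific head placed on top of a frozen pre-trained encoder.

    The pre-trained weights are never updated (no fine-tuning); only this small
    task-specific projection and classifier are trained.
    """

    def __init__(self, hidden_size: int, num_concepts: int, num_labels: int):
        super().__init__()
        # Projects frozen encoder states onto a task-specific concept space.
        self.concept_space = nn.Linear(hidden_size, num_concepts, bias=False)
        self.classifier = nn.Linear(num_concepts, num_labels)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size) produced by the frozen LLM.
        concept_scores = self.concept_space(hidden_states)  # (batch, seq_len, num_concepts)
        pooled = concept_scores.mean(dim=1)                 # simple mean pooling over tokens
        return self.classifier(pooled)                      # task logits

# Usage sketch (hypothetical setup): freeze the backbone, train only the head.
# encoder = AutoModel.from_pretrained("bert-base-uncased")  # any pre-trained encoder
# for p in encoder.parameters():
#     p.requires_grad = False
# head = ContextAttributionHead(hidden_size=768, num_concepts=16, num_labels=2)
```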

  • 🎉 Best Paper Award
  • 🌟 Spotlight Presentation
  • 🌟 AGI Leap Summit 2024

Download paper here: http://stepantita.github.io/files/SpaceModel.pdf
