Posts by Collection
portfolio
publications
Ukrainian News Corpus As Text Classification Benchmark
Published in ICTERI 2021 Workshops (Springer CCIS), 2022
In this paper, we describe a framework for creating text classification datasets with minimal labeling effort. We build a dataset for Ukrainian news classification and compare several pretrained models for the Ukrainian language across different training settings.
Recommended citation: Panchenko, D., Maksymenko, D., Turuta, O., Luzan, M., Tytarenko, S., Turuta, O. (2022). Ukrainian News Corpus as Text Classification Benchmark. In: Ignatenko, O., et al. ICTERI 2021 Workshops. ICTERI 2021. Communications in Computer and Information Science, vol 1635. Springer, Cham. https://doi.org/10.1007/978-3-031-14841-5_37 http://stepantita.github.io/files/NewsClassificationBenchmark.pdf
Forecasting Bitcoin Price Trends: Leveraging Natural Language Processing and Bing GPT Data Augmentation for Enhanced Predictive Insights
Unpublished manuscript, 2023
This study explores the application of Natural Language Processing (NLP) techniques, specifically transformer models, to predict the impact of news articles on Bitcoin prices.
Recommended citation: Tytarenko, S., & Lefebo, K. T. (2023). "Forecasting Bitcoin Price Trends: Leveraging Natural Language Processing and Bing GPT Data Augmentation for Enhanced Predictive Insights." Fordham Graduate School of Arts and Sciences, New York, USA. Email: [stepantita@fordham.edu, klefebo@fordham.edu]. http://stepantita.github.io/files/CryptoBERT.pdf
Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs
Published in AAAI Responsible Language Model (ReLM) Workshop, 2024
In this paper, we present a framework that maintains generalizability and enhances downstream-task performance by utilizing task-specific context attribution.
Recommended citation: @misc{tytarenko2024breaking, title={Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs}, author={Stepan Tytarenko and Mohammad Ruhul Amin}, year={2024}, eprint={2401.16638}, archivePrefix={arXiv}, primaryClass={cs.CL} } http://stepantita.github.io/files/SpaceModel.pdf
Is context attribution all you need to attain generalizability in non-fine-tuned transformer? A Framework for Fake News Detection in Cross-dataset Evaluation Settings
Published in ICLR Reliable and Responsible Foundation Models Workshop, 2024
We propose a novel context attribution method for transformer models that proves more efficient and generalizable. We demonstrate this on a fake news detection task, utilizing three distinct datasets and outperforming the baseline model in both same-dataset and cross-dataset zero-shot evaluation.
Recommended citation: TBD http://stepantita.github.io/files/CAM_Framework.pdf
talks
Talk 1 on Relevant Topic in Your Field
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Conference Proceeding talk 3 on Relevant Topic in Your Field
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
teaching
Teaching experience 1
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Teaching experience 2
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.