REFERENCES
[1] G. Gousios, M. Pinzger, and A. van Deursen, “An exploratory study of the pull-based software development model,” in Proceedings of the 36th International Conference on Software Engineering, 2014, pp. 345–355.
[2] Y. Yu, H. Wang, V. Filkov, P. Devanbu, and B. Vasilescu, “Wait for it: Determinants of pull request evaluation latency on GitHub,” in 2015 IEEE/ACM 12th Working Conference on Mining Software Repositories. IEEE, 2015, pp. 367–371.
[3] G. Gousios, A. Zaidman, M.-A. Storey, and A. van Deursen, “Work practices and challenges in pull-based development: The integrator’s perspective,” in 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, vol. 1. IEEE, 2015, pp. 358–368.
[4] G. Gousios, M.-A. Storey, and A. Bacchelli, “Work practices and challenges in pull-based development: The contributor’s perspective,” in 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE). IEEE, 2016, pp. 285–296.
[5] J. Zhu, M. Zhou, and A. Mockus, “Effectiveness of code contribution:
From patch-based to pull-request-based tools,” in Proceedings of the
2016 24th ACM SIGSOFT International Symposium on Foundations of
Software Engineering, 2016, pp. 871–882.
[6] P. J. Guo, T. Zimmermann, N. Nagappan, and B. Murphy, “Characterizing and predicting which bugs get fixed: An empirical study of Microsoft Windows,” in Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering, Volume 1, 2010, pp. 495–504.
[7] Y. Yu, H. Wang, V. Filkov, P. Devanbu, and B. Vasilescu, “Wait for it: Determinants of pull request evaluation latency on GitHub,” in 2015 IEEE/ACM 12th Working Conference on Mining Software Repositories. IEEE, 2015, pp. 367–371.
[8] Y. Liu and M. Lapata, “Text summarization with pretrained encoders,”
arXiv preprint arXiv:1908.08345, 2019.
[9] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, “BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension,” arXiv preprint arXiv:1910.13461, 2019.
[10] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, “Exploring the limits of transfer learning with a unified text-to-text transformer,” arXiv preprint arXiv:1910.10683, 2019.
[11] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018.
[12] Z. Liu, X. Xia, C. Treude, D. Lo, and S. Li, “Automatic generation
of pull request descriptions,” in 2019 34th IEEE/ACM International
Conference on Automated Software Engineering (ASE). IEEE, 2019,
pp. 176–188.
[13] S. Chen, X. Xie, B. Yin, Y. Ji, L. Chen, and B. Xu, “Stay professional and efficient: Automatically generate titles for your bug reports,” in 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2020, pp. 385–397.
[14] K. Liu, G. Yang, X. Chen, and C. Yu, “SOTitle: A transformer-based post title generation approach for Stack Overflow,” arXiv preprint arXiv:2202.09789, 2022.
[15] C.-Y. Lin, “ROUGE: A package for automatic evaluation of summaries,” in Text Summarization Branches Out, 2004, pp. 74–81.
[16] M. Allahyari, S. Pouriyeh, M. Assefi, S. Safaei, E. D. Trippe, J. B.
Gutierrez, and K. Kochut, “Text summarization techniques: a brief
survey,” arXiv preprint arXiv:1707.02268, 2017.
[17] W. S. El-Kassas, C. R. Salama, A. A. Rafea, and H. K. Mohamed, “Automatic text summarization: A comprehensive survey,” Expert Systems with Applications, vol. 165, p. 113679, 2021.
[18] I. Cachola, K. Lo, A. Cohan, and D. S. Weld, “TLDR: Extreme summarization of scientific documents,” arXiv preprint arXiv:2004.15011, 2020.
[19] W. Yuan, P. Liu, and G. Neubig, “Can we automate scientific reviewing?” arXiv preprint arXiv:2102.00176, 2021.
[20] J. Gu, Z. Lu, H. Li, and V. O. Li, “Incorporating copying mechanism in
sequence-to-sequence learning,” arXiv preprint arXiv:1603.06393, 2016.
[21] R. Paulus, C. Xiong, and R. Socher, “A deep reinforced model for
abstractive summarization,” arXiv preprint arXiv:1705.04304, 2017.
[22] A. Celikyilmaz, A. Bosselut, X. He, and Y. Choi, “Deep communicating
agents for abstractive summarization,” in Proceedings of the 2018
Conference of the North American Chapter of the Association for
Computational Linguistics: Human Language Technologies, Volume 1
(Long Papers). New Orleans, Louisiana: Association for Computational
Linguistics, Jun. 2018, pp. 1662–1675. [Online]. Available: https:
//aclanthology.org/N18-1150
[23] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by
jointly learning to align and translate,” arXiv preprint arXiv:1409.0473,
2014.
[24] A. See, P. J. Liu, and C. D. Manning, “Get to the point: Summarization with pointer-generator networks,” arXiv preprint arXiv:1704.04368, 2017.
[25] S. J. Rennie, E. Marcheret, Y. Mroueh, J. Ross, and V. Goel, “Self-
critical sequence training for image captioning,” in Proceedings of the
IEEE conference on computer vision and pattern recognition, 2017, pp.
7008–7024.
[26] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez,
Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in
neural information processing systems, vol. 30, 2017.
[27] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, “RoBERTa: A robustly optimized BERT pretraining approach,” arXiv preprint arXiv:1907.11692, 2019.
[28] Q. Lhoest, A. Villanova del Moral, Y. Jernite, A. Thakur, P. von Platen,
S. Patil, J. Chaumond, M. Drame, J. Plu, L. Tunstall, J. Davison,
M. Šaško, G. Chhablani, B. Malik, S. Brandeis, T. Le Scao, V. Sanh,
C. Xu, N. Patry, A. McMillan-Major, P. Schmid, S. Gugger, C. Delangue,
T. Matussière, L. Debut, S. Bekman, P. Cistac, T. Goehringer, V. Mustar,
F. Lagunas, A. Rush, and T. Wolf, “Datasets: A community library for
natural language processing,” in Proceedings of the 2021 Conference
on Empirical Methods in Natural Language Processing: System
Demonstrations. Online and Punta Cana, Dominican Republic:
Association for Computational Linguistics, Nov. 2021, pp. 175–184.
[Online]. Available: https://aclanthology.org/2021.emnlp-demo.21
[29] C. van der Lee, A. Gatt, E. van Miltenburg, and E. Krahmer,
“Human evaluation of automatically generated text: Current trends
and best practice guidelines,” Computer Speech & Language, vol. 67,
p. 101151, 2021. [Online]. Available: https://www.sciencedirect.com/
science/article/pii/S088523082030084X
[30] Y. Guo, W. Qiu, Y. Wang, and T. Cohen, “Automated lay language summarization of biomedical scientific reviews,” arXiv preprint arXiv:2012.12573, 2020.
[31] A. Bhattacharjee, S. S. Nath, S. Zhou, D. Chakroborti, B. Roy, C. K.
Roy, and K. Schneider, “An exploratory study to find motives behind
cross-platform forks from software heritage dataset,” in Proceedings
of the 17th International Conference on Mining Software Repositories,
2020, pp. 11–15.
[32] Y. Yu, H. Wang, G. Yin, and T. Wang, “Reviewer recommendation for pull-requests in GitHub: What can we learn from code review and bug assignment?” Information and Software Technology, vol. 74, pp. 204–218, 2016.
[33] J. Jiang, Q. Wu, J. Cao, X. Xia, and L. Zhang, “Recommending tags for pull requests in GitHub,” Information and Software Technology, vol. 129, p. 106394, 2021.
[34] L. Moreno, G. Bavota, M. Di Penta, R. Oliveto, A. Marcus, and G. Canfora, “Automatic generation of release notes,” in Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, 2014, pp. 484–495.
[35] X. Li, H. Jiang, D. Liu, Z. Ren, and G. Li, “Unsupervised deep bug report summarization,” in 2018 IEEE/ACM 26th International Conference on Program Comprehension (ICPC). IEEE, 2018, pp. 144–14411.
[36] S. Xu, Y. Yao, F. Xu, T. Gu, H. Tong, and J. Lu, “Commit message generation for source code changes,” in Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI), 2019.
[37] D. Q. Nguyen, T. Vu, and A. T. Nguyen, “BERTweet: A pre-trained language model for English tweets,” arXiv preprint arXiv:2005.10200, 2020.