{"department":[{"_id":"76"}],"publication_identifier":{"isbn":["9798400705021"]},"citation":{"short":"A.P. Shivarpatna Venkatesh, S. Sabu, J. Wang, A.M. Mir, L. Li, E. Bodden, in: Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings, Association for Computing Machinery, New York, NY, USA, 2024, pp. 49–53.","chicago":"Shivarpatna Venkatesh, Ashwin Prasad, Samkutty Sabu, Jiawei Wang, Amir M. Mir, Li Li, and Eric Bodden. “TypeEvalPy: A Micro-Benchmarking Framework for Python Type Inference Tools.” In Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings, 49–53. ICSE-Companion 24. New York, NY, USA: Association for Computing Machinery, 2024. https://doi.org/10.1145/3639478.3640033.","apa":"Shivarpatna Venkatesh, A. P., Sabu, S., Wang, J., Mir, A. M., Li, L., & Bodden, E. (2024). TypeEvalPy: A Micro-benchmarking Framework for Python Type Inference Tools. Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings, 49–53. https://doi.org/10.1145/3639478.3640033","ieee":"A. P. Shivarpatna Venkatesh, S. Sabu, J. Wang, A. M. Mir, L. Li, and E. Bodden, “TypeEvalPy: A Micro-benchmarking Framework for Python Type Inference Tools,” in Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings, Lisbon, Portugal, 2024, pp. 49–53, doi: 10.1145/3639478.3640033.","bibtex":"@inproceedings{Shivarpatna Venkatesh_Sabu_Wang_Mir_Li_Bodden_2024, place={New York, NY, USA}, series={ICSE-Companion 24}, title={TypeEvalPy: A Micro-benchmarking Framework for Python Type Inference Tools}, DOI={10.1145/3639478.3640033}, booktitle={Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings}, publisher={Association for Computing Machinery}, author={Shivarpatna Venkatesh, Ashwin Prasad and Sabu, Samkutty and Wang, Jiawei and Mir, Amir M. and Li, Li and Bodden, Eric}, year={2024}, pages={49–53}, collection={ICSE-Companion 24} }","mla":"Shivarpatna Venkatesh, Ashwin Prasad, et al. “TypeEvalPy: A Micro-Benchmarking Framework for Python Type Inference Tools.” Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings, Association for Computing Machinery, 2024, pp. 49–53, doi:10.1145/3639478.3640033.","ama":"Shivarpatna Venkatesh AP, Sabu S, Wang J, Mir AM, Li L, Bodden E. TypeEvalPy: A Micro-benchmarking Framework for Python Type Inference Tools. In: Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings. ICSE-Companion 24. Association for Computing Machinery; 2024:49-53. 
doi:10.1145/3639478.3640033"},"publication":"Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings","author":[{"id":"66637","first_name":"Ashwin Prasad","last_name":"Shivarpatna Venkatesh","full_name":"Shivarpatna Venkatesh, Ashwin Prasad"},{"first_name":"Samkutty","full_name":"Sabu, Samkutty","last_name":"Sabu"},{"first_name":"Jiawei","last_name":"Wang","full_name":"Wang, Jiawei"},{"last_name":"Mir","full_name":"Mir, Amir M.","first_name":"Amir M."},{"last_name":"Li","full_name":"Li, Li","first_name":"Li"},{"first_name":"Eric","id":"59256","orcid":"0000-0003-3470-3647","full_name":"Bodden, Eric","last_name":"Bodden"}],"user_id":"15249","year":"2024","conference":{"location":"Lisbon, Portugal"},"publisher":"Association for Computing Machinery","_id":"53959","title":"TypeEvalPy: A Micro-benchmarking Framework for Python Type Inference Tools","language":[{"iso":"eng"}],"type":"conference","page":"49-53","date_updated":"2024-08-05T07:49:33Z","doi":"10.1145/3639478.3640033","place":"New York, NY, USA","external_id":{"arxiv":["2312.16882"]},"series_title":"ICSE-Companion 24","date_created":"2024-05-06T11:49:22Z","status":"public","abstract":[{"lang":"eng","text":"In light of the growing interest in type inference research for Python, both researchers and practitioners require a standardized process to assess the performance of various type inference techniques. This paper introduces TypeEvalPy, a comprehensive micro-benchmarking framework for evaluating type inference tools. TypeEvalPy contains 154 code snippets with 845 type annotations across 18 categories that target various Python features. The framework manages the execution of containerized tools, transforms inferred types into a standardized format, and produces meaningful metrics for assessment. Through our analysis, we compare the performance of six type inference tools, highlighting their strengths and limitations. Our findings provide a foundation for further research and optimization in the domain of Python type inference."}]}