ジャーナル論文 / Journal
  1. 有山知希, 鈴木潤, 鈴木正敏, 田中涼太, 赤間怜奈, 西田京介, "クイズコンペティションの結果分析から見た日本語質問応答の到達点と課題", 自然言語処理 31(1), 47–78, (2024).
  2. 成田風香, 佐藤志貴, 徳久良子, 乾健太郎, "感想付きニュース雑談コーパスの構築と評価", 自然言語処理 31(3), 1015–1048, (2024).
  3. Zhang, Y., Kamigaito, H., and Okumura, M., "Bidirectional Transformer Reranker for Grammatical Error Correction", 自然言語処理 31(1), 3–46, (2024).
  4. Shing, M., Misaki, K., Bao, H., Yokoi, S., and Akiba, T., "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models", 38th Conference on Neural Information Processing Systems (NeurIPS 2024), (2024).
  5. Ishizuki, Y., Kuribayashi, T., Matsubayashi, Y., Sasano, R., and Inui, K., "To Drop or Not to Drop? Predicting Argument Ellipsis Judgments: A Case Study in Japanese", In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), (2024).
  6. 浅妻佑弥, 塙一晃, 乾健太郎, "特徴量帰属法による説明の忠実性評価に関する実証的分析", 人工知能学会論文誌 38(6), C-N22-1–9, (2023).
  7. Sasaki, S., Heinzerling, B., Suzuki, J., and Inui, K., "Examining the effect of whitening on static and contextualized word embeddings", Information Processing & Management 60(3), (2023).
  8. Mulia, I. E., Ueda, N., Miyoshi, T., Iwamoto, T., and Heidarzadeh, M., "A novel deep learning approach for typhoon-induced storm surge modeling through efficient emulation of wind and pressure fields", Scientific Reports 13(1), 7918, (2023).
  9. Moriya, S., Shiono, D., Fujihara, R., Kishinami, Y., Kimura, S., Sone, S., Akama, R., Matsumoto, Y., Suzuki, J., and Inui, K., "Aoba_v3 bot: A Multimodal Chatbot System Combining Rules and Various Response Generation Models", Advanced Robotics 37(21), (2023).
  10. Inui, K., Ishii, Y., Matsubayashi, Y., Inoue, N., Naito, S., Isobe, Y., Funayama, H., and Kikuchi, S., "自然言語処理×教育における説明能力 ―説明できるライティング評価技術への新しい展開―", IEICE ESS Fundamentals Review 16(4), 289–300, (2023).
  11. Funayama, H., Asazuma, Y., Matsubayashi, Y., Mizumoto, T., and Inui, K., "Reducing the Cost: Cross-Prompt Pre-finetuning for Short Answer Scoring", The 24th International Conference on Artificial Intelligence in Education (AIED2023), (2023).
  12. Choi, J., Honda, U., Watanabe, T., and Inui, K., "Explainable Natural Language Inference in the Legal Domain via Text Generation", Transactions of the Japanese Society for Artificial Intelligence 38(3), c-mb6_1-11, (2023).
  13. 佐藤志貴, 赤間怜奈, 大内啓樹, 鈴木潤, 乾健太郎, "負例を厳選した対話応答選択による対話応答生成システムの評価", 自然言語処理 29(1), 53–83, (2022).
  14. 藤井諒, 三田雅人, 阿部香央莉, 塙一晃, 森下睦, 鈴木潤, 乾健太郎, "機械翻訳モデルの頑健性評価に向けた言語現象毎データセットの構築と分析", 自然言語処理 28(2), 450–478, (2021).
  15. Sasaki, S., Suzuki, J., and Inui, K., "Subword-Based Compact Reconstruction for Open-Vocabulary Neural Word Embeddings", IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 3551–3564, (2021).
  16. Mim, F. S., Inoue, N., Reisert, P., Ouchi, H., and Inui, K., "Corruption Is Not All Bad: Incorporating Discourse Structure Into Pre-Training via Corruption for Essay Scoring", IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 2202–2215, (2021).
国際会議 / Proceedings
  1. Yokoi, S., Bao, H., Kurita, H., and Shimodaira, H., "Zipfian Whitening", 38th Conference on Neural Information Processing Systems (NeurIPS 2024), (2024).
  2. Yano, K., Ito, T., and Suzuki, J., "STEP: Staged Parameter-Efficient Pre-training for Large Language Models", Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), 607–614, (2024).
  3. Sato, S., Akama, R., Suzuki, J., and Inui, K., "A Large Collection of Model-generated Contradictory Responses for Consistency-aware Dialogue Systems", Findings of the Association for Computational Linguistics ACL 2024, 16047–16062, (2024).
  4. Nozue, S., Nakano, Y., Moriya, S., Ariyama, T., Kokuta, K., Xie, S., Sato, K., Sone, S., Kamei, R., Akama, R., Matsubayashi, Y., and Sakaguchi, K., "A Multimodal Dialogue System to Lead Consensus Building with Emotion-Displaying", Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 669–673, (2024).
  5. Miura, N., Funayama, H., Kikuchi, S., Matsubayashi, Y., Iwase, Y., and Inui, K., "Japanese-English Sentence Translation Exercises Dataset for Automatic Grading", Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop (EACL SRW 2024), 266–278, (2024).
  6. Mita, M., Sakaguchi, K., Hagiwara, M., Mizumoto, T., Suzuki, J., and Inui, K., "Towards Automated Document Revision: Grammatical Error Correction, Fluency Edits, and Beyond", Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024), 251–265, (2024).
  7. Makino, M., Asazuma, Y., Sasaki, S., and Suzuki, J., "The Impact of Integration Step on Integrated Gradients", Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop (EACL SRW 2024), 279–289, (2024).
  8. Kobayashi, G., Kuribayashi, T., Yokoi, S., and Inui, K., "Analyzing Feed-Forward Blocks in Transformers through the Lens of Attention Map", In Proceedings of the 12th International Conference on Learning Representations (ICLR 2024), (2024).
  9. Kamei, R., Shiono, D., Akama, R., and Suzuki, J., "Detecting Response Generation Not Requiring Factual Judgment", Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop), 116–123, (2024).
  10. Heinzerling, B., and Inui, K., "Monotonic Representation of Numeric Attributes in Language Models", Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), (2024).
  11. Dai, Q., Heinzerling, B., and Inui, K., "Representational Analysis of Binding in Language Models", Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, (2024).
  12. 邊土名朝飛, 友松祐太, 佐々木翔大, 阿部香央莉, 乾健太郎, "多様なタスク指向対話データの収集を目的としたクラウドソーシングにおけるインストラクションの設計：クリニック予約対話を例に", 第37回(2023)人工知能学会全国大会論文集, 4xin102, (2023).
  13. Zhang, Y., Kamigaito, H., and Okumura, M., "Bidirectional Transformer Reranker for Grammatical Error Correction", Findings of the Association for Computational Linguistics: ACL 2023, 3801–3825, (2023).
  14. Ye, M., Kuribayashi, T., Suzuki, J., Kobayashi, G., and Funayama, H., "Assessing Step-by-Step Reasoning against Lexical Negation: A Case Study on Syllogism", Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), (2023).
  15. Tanaka, Y., Inuzuka, M., Arai, H., Takahashi, Y., Kukita, M., and Inui, K., "Who Does Not Benefit from Fact-checking Websites?", Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–17, (2023).
  16. Okano, Y., Funakoshi, K., Nagata, R., and Okumura, M., "Generating Dialog Responses with Specified Grammatical Items for Second Language Learning", Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), 184–194, (2023).
  17. Nagata, R., Hagiwara, M., Hanawa, K., and Mita, M., "A Report on FCG GenChal 2022: Shared Task on Feedback Comment Generation for Language Learners", Proceedings of the 16th International Natural Language Generation Conference: Generation Challenges, 45–52, (2023).
  18. Nagasawa, H., Heinzerling, B., Kokuta, K., and Inui, K., "Can LMs Store and Retrieve 1-to-N Relational Knowledge?", Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), 130–138, (2023).
  19. Murakami, S., Fujita, K., Ichimura, T., Hori, T., Hori, M., Lalith, M., and Ueda, N., "Development of 3D Viscoelastic Crustal Deformation Analysis Solver with Data-Driven Method on GPU", Lecture Notes in Computer Science, (2023).
  20. Li, Y., Suzuki, J., Morishita, M., Abe, K., Tokuhisa, R., Brassard, A., and Inui, K., "Chat Translation Error Detection for Assisting Cross-lingual Communications", Proceedings of IJCNLP-AACL 2023 Student Research Workshop (IJCNLP-AACL 2023 SRW), (2023).
  21. Kurita, H., Kobayashi, G., Yokoi, S., and Inui, K., "Contrastive Learning-based Sentence Encoders Implicitly Weight Informative Words", Findings of the Association for Computational Linguistics: EMNLP 2023, (2023).
  22. Kudo, K., Aoki, Y., Kuribayashi, T., Brassard, A., Yoshikawa, M., Sakaguchi, K., and Inui, K., "Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?", Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, (2023).
  23. Kobayashi, G., Kuribayashi, T., Yokoi, S., and Inui, K., "Transformer Language Models Handle Word Frequency in Prediction Head", Findings of the Association for Computational Linguistics: ACL 2023, 4523–4535, (2023).
  24. Kavumba, P., Brassard, A., Heinzerling, B., and Inui, K., "Prompting explanations improves Adversarial NLI. Is this true? Yes it is {True} because {It reduces the association between superficial cues and answers}", Findings of the Association for Computational Linguistics: EACL 2023, (2023).
  25. Kamoda, G., Heinzerling, B., Sakaguchi, K., and Inui, K., "Test-time Augmentation for Factual Probing", Findings of the Association for Computational Linguistics: EMNLP 2023, (2023).
  26. Jimichi, K., Funakoshi, K., and Okumura, M., "Feedback comment generation using predicted grammatical terms", Proceedings of the 16th International Natural Language Generation Conference: Generation Challenges, 79–83, (2023).
  27. Ito, T., Yamashita, N., Kuribayashi, T., Hidaka, M., Suzuki, J., Gao, G., Jamieson, J., and Inui, K., "Use of an AI-powered Rewriting Support Software in Context with Other Tools: A Study of Non-Native English Speakers", In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST 2023), No. 45, 1–13, (2023).
  28. Ito, I., Ito, T., Suzuki, J., and Inui, K., "Investigating the Effectiveness of Multiple Expert Models Collaboration", Findings of the Association for Computational Linguistics: EMNLP 2023, (2023).
  29. Coyne, S. C., "Template-guided Grammatical Error Feedback Comment Generation", Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, 94–104, (2023).
  30. Aoki, Y., Kudo, K., Kuribayashi, T., Brassard, A., Yoshikawa, M., Sakaguchi, K., and Inui, K., "Empirical Investigation of Neural Symbolic Reasoning Strategies", Findings of the Association for Computational Linguistics: EACL 2023, (2023).
  31. 芝原隆善, 大内啓樹, 山田育矢, 西田典起, 寺西裕紀, 古崎晃司, 渡辺太郎, 松本裕治, "ユーザの興味があるカテゴリに応じたNERシステム構築フレームワーク", 言語処理学会第28回年次大会(NLP2022), (2022).
  32. Takahashi, Y., Kaneko, M., Mita, M., and Komachi, M., "ProQE: Proficiency-wise Quality Estimation dataset for Grammatical Error Correction", In Proceedings of the 13th Language Resources and Evaluation Conference (LREC 2022), 5994–6000, (2022).
  33. Suzuki, D., Takahashi, Y., Yamashita, I., Aida, T., Hirasawa, T., Nakatsuji, M., Mita, M., and Komachi, M., "Construction of a Quality Estimation Dataset for Automatic Evaluation of Japanese Grammatical Error Correction", In Proceedings of the 13th Language Resources and Evaluation Conference (LREC 2022), 5565–5572, (2022).
  34. Singh, K., Inoue, N., Farjana, M. S., Naitoh, S., and Inui, K., "IRAC: A Domain-specific Annotated Corpus of Implicit Reasoning in Arguments", In Proceedings of the 13th Language Resources and Evaluation Conference (LREC 2022), 4674–4683, (2022).
  35. Sato, T., Funayama, H., Hanawa, K., and Inui, K., "Plausibility and Faithfulness of Feature Attribution-based Explanations in Automated Short Answer Scoring", The 23rd International Conference on Artificial Intelligence in Education (AIED2022), (2022).
  36. Sato, S., Kishinami, Y., Sugiyama, H., Akama, R., Tokuhisa, R., and Suzuki, J., "Bipartite-play Dialogue Collection for Practical Automatic Evaluation of Dialogue Systems", Proceedings of the 2022 AACL-IJCNLP Student Research Workshop, (2022).
  37. Sato, S., Akama, R., Ouchi, H., Tokuhisa, R., Suzuki, J., and Inui, K., "N-best Response-based Analysis of Contradiction-awareness in Neural Response Generation Models", In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2022), 637–644, (2022).
  38. Naito, S., Sawada, S., Nakagawa, C., Inoue, N., Yamaguchi, K., Shimizu, I., Farjana, M. S., Singh, K., and Inui, K., "TYPIC: A Corpus of Template-Based Diagnostic Comments on Argumentation", In Proceedings of the 13th Language Resources and Evaluation Conference (LREC 2022), (2022).
  39. Nagata, R., Kimura, M., and Hanawa, K., "Exploring the Capacity of a Large-scale Masked Language Model to Recognize Grammatical Errors", Findings of the Association for Computational Linguistics: ACL 2022, 4107–4118, (2022).
  40. Matsumoto, Y., Heinzerling, B., Yoshikawa, M., and Inui, K., "Tracing and Manipulating Intermediate Results in Neural Math Problem Solvers", Proceedings of the 6th BlackboxNLP Workshop (co-located with EMNLP 2022), (2022).
  41. Li, Y., Suzuki, J., Morishita, M., Abe, K., Tokuhisa, R., Brassard, A., and Inui, K., "Chat Translation Error Detection for Assisting Cross-lingual Communications", Proceedings of the 3rd Workshop on Evaluation and Comparison of NLP Systems, 88–95, (2022).
  42. Kishinami, Y., Akama, R., Sato, S., Tokuhisa, R., Suzuki, J., and Inui, K., "Target-Guided Open-Domain Conversation Planning", In Proceedings of the 29th International Conference on Computational Linguistics (COLING 2022), 660–668, (2022).
  43. Kavumba, P., Takahashi, R., and Oda, Y., "Are Prompt-based Models Clueless?", Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, (2022).
  44. Funayama, H., Sato, T., Matsubayashi, Y., Mizumoto, T., Suzuki, J., and Inui, K., "Balancing Cost and Quality: An Exploration of Human-in-the-loop Frameworks for Automated Short Answer Scoring", The 23rd International Conference on Artificial Intelligence in Education (AIED2022), (2022).
  45. Fujihara, R., Kuribayashi, T., Abe, K., Tokuhisa, R., and Inui, K., "Topicalization in Language Models: A Case Study on Japanese", In Proceedings of the 29th International Conference on Computational Linguistics (COLING 2022), 851–862, (2022).
  46. Farjana, M. S., Inoue, N., Naitoh, S., Singh, K., and Inui, K., "LPAttack: A Feasible Annotation Scheme for Capturing Logic Pattern of Attacks in Arguments", In Proceedings of the 13th Language Resources and Evaluation Conference (LREC 2022), (2022).
  47. Dai, Q., Heinzerling, B., and Inui, K., "Cross-stitching Text and Knowledge Graph Encoders for Distantly Supervised Relation Extraction", Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022), (2022).
  48. Abe, K., Yokoi, S., Kajiwara, T., and Inui, K., "Why is sentence similarity benchmark not predictive of application-oriented task performance?", Proceedings of the 3rd Workshop on Evaluation and Comparison of NLP Systems, 70–78, (2022).
  49. Yanaka, H., Mineshima, K., and Inui, K., "SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics", Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 103–119, (2021).
  50. Yanaka, H., Mineshima, K., and Inui, K., "Exploring Transitivity in Neural NLI Models through Veridicality", In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL), 920–934, (2021).
  51. Singh, K., Farjana, M. S., Inoue, N., Naito, S., and Inui, K., "Exploring Methodologies for Collecting High-Quality Implicit Reasoning in Arguments", Proceedings of the 8th Workshop on Argument Mining, 57–66, (2021).
  52. Sekine, S., Nakayama, K., Matsuda, K., Sumida, A., Ando, M., Usami, Y., and Nomoto, M., "SHINRA2020-ML: Categorizing 30-language Wikipedia into fine-grained NE based on “Resource by Collaborative Contribution” scheme", Proceedings of the 3rd Conference on Automated Knowledge Base Construction, (2021).
  53. Mita, M., and Yanaka, H., "Do Grammatical Error Correction Models Realize Grammatical Generalization?", In Findings of the Association for Computational Linguistics: ACL 2021, 4554–4561, (2021).
  54. Konno, R., Kiyono, S., Matsubayashi, Y., Ouchi, H., and Inui, K., "Pseudo Zero Pronoun Resolution Improves Zero Anaphora Resolution", Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 3790–3806, (2021).
  55. Kobayashi, G., Kuribayashi, T., Yokoi, S., and Inui, K., "Incorporating Residual and Normalization Layers into Analysis of Masked Language Models", In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 4547–4568, (2021).
  56. Kiyono, S., Kobayashi, S., Suzuki, J., and Inui, K., "SHAPE : Shifted Absolute Position Embedding for Transformers", Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 3309–3321, (2021).
  57. Kavumba, P., Heinzerling, B., Brassard, A., and Inui, K., "Learning to Learn to be Right for the Right Reasons", Proceedings of the 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), 3890–3898, (2021).
  58. Inoue, N., Trivedi, H., Sinha, S., Balasubramanian, N., and Inui, K., "Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension", In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 6064–6080, (2021).
  59. Heinzerling, B., and Inui, K., "Language Models as Knowledge Bases: On Entity Representations, Storage Capacity, and Paraphrased Queries", Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, (2021).
  60. Dai, Q., Inoue, N., Takahashi, R., and Inui, K., "Two Training Strategies for Improving Relation Extraction over Universal Graph", In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL), 3673–3684, (2021).