ジャーナル論文 / Journal
  1. Zhao, T., Li, G., Zhao, T., Chen, Y., Xie, N., Niu, G., and Sugiyama, M., "Learning explainable task-relevant state representation for model-free deep reinforcement learning", Neural Netw. 180, 106741, (2024).
  2. Zhang, J., Song, B., Wang, H., Han, B., Liu, T., Liu, L., and Sugiyama, M., "BadLabel: A Robust Perspective on Evaluating and Enhancing Label-Noise Learning", IEEE Trans. Pattern Anal. Machine Intell. PP(99), 1–12, (2024).
  3. Riou, C., Honda, J., and Sugiyama, M., "The Survival Bandit Problem", Transactions on Machine Learning Research, 1–66, (2024).
  4. Luo, W., Chen, S., Liu, T., Han, B., Niu, G., Sugiyama, M., Tao, D., and Gong, C., "Estimating Per-Class Statistics for Label Noise Learning", IEEE Trans. Pattern Anal. Machine Intell. PP(99), 1–17, (2024).
  5. Hasegawa, N., Sugiyama, M., and Igarashi, K., "Random forest machine-learning algorithm classifies white- and brown-rot fungi according to the number of the genes encoding Carbohydrate-Active enZyme families", Appl. Environ. Microbiol. 90(7), e00482–24, (2024).
  6. Hasegawa, N., Sugiyama, M., and Igarashi, K., "Acetylxylan esterase is the key to the host specialization of wood-decay fungi predicted by random forest machine-learning algorithm", Journal of Wood Science 70(44), (2024).
  7. Gao, Y., Wu, D., Zhang, J., Gan, G., Xia, S., Niu, G., and Sugiyama, M., "On the effectiveness of adversarial training against backdoor attacks", IEEE Trans. Neural Netw. Learn. Syst. 35(10), 14878–14888, (2024).
  8. Zhao, T., Wu, S., Li, G., Chen, Y., Niu, G., and Sugiyama, M., "Learning Intention-Aware Policies in Deep Reinforcement Learning", Neural Comput. 35(10), 1657–1677, (2023).
  9. Zhao, T., Wang, Y., Sun, W., Chen, Y., Niu, G., and Sugiyama, M., "Representation learning for continuous action spaces is beneficial for efficient policy learning", Neural Netw. 159, 137–152, (2023).
  10. Yang, S., Wu, S., Yang, E., Han, B., Liu, Y., Xu, M., Niu, G., and Liu, T., "A Parametrical Model for Instance-Dependent Label Noise", IEEE Trans. Pattern Anal. Machine Intell. 45(12), 14055–14068, (2023).
  11. Wu, Z., Lyu, J., and Sugiyama, M., "Learning With Proper Partial Labels", Neural Comput. 35(1), 58–81, (2023).
  12. Sugiyama, M., "The Current Status and Future of the IBISML Technical Committee" (in Japanese), IEICE Information and Systems Society Journal 27(4), 8–9, (2023).
  13. Otsubo, Y., Otani, N., Chikasue, M., Nishino, M., and Sugiyama, M., "Root cause estimation of faults in production processes: a novel approach inspired by approximate Bayesian computation", Int. J. Prod. Res. 61(5), 1556–1574, (2023).
  14. Osa, T., Osajima, N., Aizawa, M., and Harada, T., "Learning Adaptive Policies for Autonomous Excavation Under Various Soil Conditions by Adversarial Domain Sampling", IEEE Robot. Autom. Lett. 8(9), 5536–5543, (2023).
  15. Nakajima, S., and Sugiyama, M., "Positive-unlabeled classification under class-prior shift: a prior-invariant approach based on density ratio estimation", Mach. Learn. 112, 889–919, (2023).
  16. Lv, J., Liu, B., Feng, L., Xu, N., Xu, M., An, B., Niu, G., Geng, X., and Sugiyama, M., "On the Robustness of Average Losses for Partial-Label Learning", IEEE Trans. Pattern Anal. Machine Intell. PP(99), 1–15, (2023).
  17. Gong, C., Ding, Y., Han, B., Niu, G., Yang, J., You, J. J., Tao, D., and Sugiyama, M., "Class-Wise Denoising for Robust Learning under Label Noise", IEEE Trans. Pattern Anal. Machine Intell. 45(3), 2835–2848, (2023).
  18. Gao, Y., Wu, D., Zhang, J., Gan, G., Xia, S., Niu, G., and Sugiyama, M., "On the Effectiveness of Adversarial Training Against Backdoor Attacks", IEEE Trans. Neural Netw. Learn. Syst. PP(99), 1–11, (2023).
  19. Chen, S., Gong, C., Li, X., Yang, J., Niu, G., and Sugiyama, M., "Boundary-restricted metric learning", Mach. Learn. 112(12), 4723–4762, (2023).
  20. Zhang, J., Xu, X., Han, B., Liu, T., Cui, L., Niu, G., and Sugiyama, M., "NoiLin: Improving adversarial training and correcting stereotype of noisy labels", Transactions on Machine Learning Research, 1–25, (2022).
  21. Wu, S., Liu, T., Han, B., Yu, J., Niu, G., and Sugiyama, M., "Learning from noisy pairwise similarity and unlabeled data", J. Mach. Learn. Res. 23(307), 1–34, (2022).
  22. Wang, Z., Jiang, J., Han, B., Feng, L., An, B., Niu, G., and Long, G., "SemiNLL: A Framework of Noisy-Label Learning by Semi-Supervised Learning", Transactions on Machine Learning Research, (2022).
  23. Tanimoto, A., Yamada, S., Takenouchi, T., Sugiyama, M., and Kashima, H., "Improving imbalanced classification using near-miss instances", Expert Syst. Appl. 201(117130), 1–15, (2022).
  24. Pan, Y., Tsang, I. W., Chen, W., Niu, G., and Sugiyama, M., "Fast and Robust Rank Aggregation against Model Misspecification", J. Mach. Learn. Res. 23(23), 1–35, (2022).
  25. Osa, T., and Aizawa, M., "Deep Reinforcement Learning with Adversarial Training for Automated Excavation Using Depth Images", IEEE Access 10, 4523–4535, (2022).
  26. Osa, T., Tangkaratt, V., and Sugiyama, M., "Discovering diverse solutions in deep reinforcement learning by maximizing state–action-based mutual information", Neural Netw. 152, 90–104, (2022).
  27. Ohnishi, M., Ishikawa, I., Kuroki, Y., and Ikeda, M., "Dynamic Structure Estimation from Bandit Feedback", CoRR abs/2206.00861, (2022).
  28. Matsuo, Y., LeCun, Y., Sahani, M., Precup, D., Silver, D., Sugiyama, M., Uchibe, E., and Morimoto, J., "Deep learning, reinforcement learning, and world models", Neural Netw. 152, 267–275, (2022).
  29. Lu, Z., Xu, C., Du, B., Ishida, T., Zhang, L., and Sugiyama, M., "LocalDrop: A hybrid regularization for deep neural networks", IEEE Trans. Pattern Anal. Machine Intell. 44(7), 3590–3601, (2022).
  30. Ishiguro, H., Ishida, T., and Sugiyama, M., "Learning from Noisy Complementary Labels with Robust Loss Functions", IEICE Trans. Inf. Syst. E105-D(2), 364–376, (2022).
  31. Gong, C., Yang, J., You, J., and Sugiyama, M., "Centroid Estimation With Guaranteed Efficiency: A General Framework for Weakly Supervised Learning", IEEE Trans. Pattern Anal. Machine Intell. 44(6), 2841–2855, (2022).
  32. Zhang, T., Yamane, I., Lu, N., and Sugiyama, M., "A one-step approach to covariate shift adaptation", SN Computer Science 2(4), (2021).
  33. Xu, W., Niu, G., Hyvarinen, A., and Sugiyama, M., "Direction Matters: On Influence-Preserving Graph Summarization and Max-Cut Principle for Directed Graphs", Neural Comput. 33(8), 2128–2162, (2021).
  34. Xie, Z., He, F., Fu, S., Sato, I., Tao, D., and Sugiyama, M., "Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting", Neural Comput. 33(8), 2163–2192, (2021).
  35. Ugawa, M., Kawamura, Y., Toda, K., Teranishi, K., Morita, H., Adachi, H., Tamoto, R., Nomaru, H., Nakagawa, K., Sugimoto, K., Borisova, E., An, Y., Konishi, Y., Tabata, S., Morishita, S., Imai, M., Takaku, T., Araki, M., Komatsu, N., Hayashi, Y., Sato, I., Horisaki, R., Noji, H., and Ota, S., "In silico-labeled ghost cytometry", eLife 10, (2021).
  36. Tsuchiya, T., Charoenphakdee, N., Sato, I., and Sugiyama, M., "Semisupervised Ordinal Regression Based on Empirical Risk Minimization", Neural Comput. 33(12), 3361–3412, (2021).
  37. Shimada, T., Bao, H., Sato, I., and Sugiyama, M., "Classification From Pairwise Similarities/Dissimilarities and Unlabeled Data via Empirical Risk Minimization", Neural Comput. 33(5), 1234–1268, (2021).
  38. Ohnishi, M., Notomista, G., Sugiyama, M., and Egerstedt, M., "Constraint learning for control tasks with limited duration barrier functions", Automatica 127, (2021).
  39. Fujisawa, M., and Sato, I., "Multilevel Monte Carlo Variational Inference", J. Mach. Learn. Res. 22(278), 1–44, (2021).
国際会議 / Proceedings
  1. Zhu, H., Soen, A., Cheung, Y. K., and Xie, L., "Online Learning in Betting Markets: Profit versus Prediction", Proceedings of the 41st International Conference on Machine Learning (ICML2024), (2024).
  2. Zhang, Z., Han, S., Yao, H., Niu, G., and Sugiyama, M., "Generating Chain-of-Thoughts with a Pairwise-Comparison Approach to Searching for the Most Promising Intermediate Thought", Proceedings of 41st International Conference on Machine Learning (ICML2024) 235, 58967–58983, (2024).
  3. Yan, K., Cui, S., Wuerkaixi, A., Zhang, J., Han, B., Niu, G., Sugiyama, M., and Zhang, C., "Balancing similarity and complementarity for unimodal and multimodal federated learning", Proceedings of 41st International Conference on Machine Learning (ICML2024) 235, 55739–55758, (2024).
  4. Xie, M., Xiao, J., Peng, P., Niu, G., Sugiyama, M., and Huang, S., "Counterfactual Reasoning for Multi-Label Image Classification via Patching-Based Training", Proceedings of 41st International Conference on Machine Learning (ICML2024) 235, 54576–54589, (2024).
  5. Wuerkaixi, A., Cui, S., Zhang, J., Yan, K., Han, B., Niu, G., Fang, L., Zhang, C., and Sugiyama, M., "Accurate Forgetting for Heterogeneous Federated Continual Learning", Proceedings of Twelfth International Conference on Learning Representations (ICLR2024), 1–19, (2024).
  6. Wang, W., Ishida, T., Zhan, Y., Niu, G., and Sugiyama, M., "Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical", Proceedings of 41st International Conference on Machine Learning (ICML2024) 235, 50683–50710, (2024).
  7. Tanaka, Y., Yoshida, S. M., Shibata, T., Terao, M., Okatani, T., and Sugiyama, M., "Appearance-based curriculum for semi-supervised learning with multi-angle unlabeled data", Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV2024), 2780–2789, (2024).
  8. Qian, Y., Zhao, P., Zhang, Y., Sugiyama, M., and Zhou, Z., "Efficient Non-stationary Online Learning by Wavelets with Applications to Online Distribution Shift Adaptation", Proceedings of 41st International Conference on Machine Learning (ICML2024) 235, 41383–41415, (2024).
  9. Omura, M., Osa, T., Mukuta, Y., and Harada, T., "Symmetric Q-Learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning", Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI2024), (2024).
  10. Nakamura, S., and Sugiyama, M., "Thompson sampling for real-valued combinatorial pure exploration of multi-armed bandit", Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI2024), 14414–14421, (2024).
  11. Nakamura, S., and Sugiyama, M., "Fixed-budget real-valued combinatorial pure exploration of multi-armed bandit", Proceedings of 27th International Conference on Artificial Intelligence and Statistics (AISTATS2024) 238, 1225–1233, (2024).
  12. Lee, J., Chiang, C., and Sugiyama, M., "The choice of noninformative priors for Thompson sampling in multiparameter bandit models", Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI2024), 13383–13390, (2024).
  13. Ackermann, J., Osa, T., and Sugiyama, M., "Offline reinforcement learning from datasets with structured non-stationarity", Reinforcement Learning Journal 5, 2140–2161, (2024).
  14. Fan, Z., Hu, S., Yao, J., Niu, G., Zhang, Y., Sugiyama, M., and Wang, Y., "Locally estimated global perturbations is better than local perturbations for federated sharpness-aware minimization", Proceedings of the 41st International Conference on Machine Learning 235, 12858–12881, (2024).
  15. Dong, Q., Kaneko, T., and Sugiyama, M., "An offline learning of behavior correction policy for vision-based robotic manipulation", Proceedings of 2024 IEEE International Conference on Robotics and Automation (ICRA2024), 5448–5454, (2024).
  16. Chen, S., Niu, G., Gong, C., Koc, O., Yang, J., and Sugiyama, M., "Robust similarity learning with difference alignment regularization", Proceedings of Twelfth International Conference on Learning Representations (ICLR2024), 1–22, (2024).
  17. Chen, H., Wang, J., Shah, A., Tao, R., Wei, H., Xie, X., Sugiyama, M., and Raj, B., "Understanding and mitigating the label noise in pre-training on downstream tasks", Proceedings of Twelfth International Conference on Learning Representations (ICLR2024), 1–31, (2024).
  18. Chen, H., Wang, J., Feng, L., Li, X., Wang, Y., Xie, X., Sugiyama, M., Singh, R., and Raj, B., "A general framework for learning from weak supervision", Proceedings of Machine Learning Research 235, 7462–7485, (2024).
  19. Braun, G., and Sugiyama, M., "VEC-SBM: Optimal community detection with vectorial edges covariates", Proceedings of 27th International Conference on Artificial Intelligence and Statistics (AISTATS2024) 238, 532–540, (2024).
  20. Zhu, J., Yu, G., Yao, J., Liu, T., Niu, G., Sugiyama, M., and Han, B., "Diversified Outlier Exposure for Out-of-Distribution Detection via Informative Extrapolation", Advances in Neural Information Processing Systems 36, 22702–22734, (2023).
  21. Zhang, Y., and Sugiyama, M., "Online (multinomial) logistic bandit: Improved regret and constant computation cost", Advances in Neural Information Processing Systems 36, 29741–29782, (2023).
  22. Zhang, Y., and Sugiyama, M., "A Category-theoretical Meta-analysis of Definitions of Disentanglement", Proceedings of Machine Learning Research 202, 41596–41612, (2023).
  23. Zhang, Y., Zhang, Z., Zhao, P., and Sugiyama, M., "Adapting to continuous covariate shift via online density ratio estimation", Advances in Neural Information Processing Systems 36, 29074–29113, (2023).
  24. Yang, P., Xie, M., Zong, C., Feng, L., Niu, G., Sugiyama, M., and Huang, S., "Multi-label knowledge distillation", Proceedings of IEEE/CVF International Conference on Computer Vision (ICCV2023), 17271–17280, (2023).
  25. Xu, X., Zhang, J., Liu, F., Sugiyama, M., and Kankanhalli, M., "Enhancing adversarial contrastive learning via adversarial invariant regularization", Advances in Neural Information Processing Systems 36, 16783–16803, (2023).
  26. Xu, X., Zhang, J., Liu, F., Sugiyama, M., and Kankanhalli, M., "Efficient adversarial contrastive learning via robustness-aware coreset selection", Advances in Neural Information Processing Systems 36, 75798–75825, (2023).
  27. Xu, J., Chen, S., Ren, Y., Shi, X., Shen, H., Niu, G., and Zhu, X., "Self-Weighted Contrastive Learning among Multiple Views for Mitigating Representation Degeneration", Advances in Neural Information Processing Systems 36, (2023).
  28. Xie, Z., Xu, Z., Zhang, J., Sato, I., and Sugiyama, M., "On the overlooked pitfalls of weight decay and how to mitigate them: A gradient-norm perspective", Advances in Neural Information Processing Systems 36, 1208–1228, (2023).
  29. Xie, M., Xiao, J., Liu, H., Niu, G., Sugiyama, M., and Huang, S., "Class-distribution-aware pseudo-labeling for semi-supervised multi-label learning", Advances in Neural Information Processing Systems 36, 25731–25747, (2023).
  30. Xia, S., Lv, J., Xu, N., Niu, G., and Geng, X., "Towards Effective Visual Representations for Partial-Label Learning", Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2023), 15589–15598, (2023).
  31. Wei, Z., Feng, L., Han, B., Liu, T., Niu, G., Zhu, X., and Shen, H. T., "A Universal Unbiased Method for Classification from Aggregate Observations", Proceedings of 40th International Conference on Machine Learning (ICML2023), Proceedings of Machine Learning Research 202, 36804–36820, (2023).
  32. Wei, H., Zhuang, H., Xie, R., Feng, L., Niu, G., An, B., and Li, Y., "Mitigating Memorization of Noisy Labels by Clipping the Model Prediction", Proceedings of 40th International Conference on Machine Learning (ICML2023), Proceedings of Machine Learning Research 202, 36868–36886, (2023).
  33. Wang, W., Feng, L., Jiang, Y., Niu, G., Zhang, M., and Sugiyama, M., "Binary Classification with Confidence Difference", Advances in Neural Information Processing Systems 36, 5936–5960, (2023).
  34. Tang, J., Chen, S., Niu, G., Sugiyama, M., and Gong, C., "Distribution Shift Matters for Knowledge Distillation with Webly Collected Images", Proceedings of IEEE/CVF International Conference on Computer Vision (ICCV2023), 17470–17480, (2023).
  35. Lee, J., Honda, J., and Sugiyama, M., "Thompson exploration with best challenger rule in best arm identification", Proceedings of the 15th Asian Conference on Machine Learning (ACML2023), 646–661, (2023).
  36. Lee, J., Honda, J., Chiang, C. K., and Sugiyama, M., "Optimality of Thompson sampling with noninformative priors for Pareto bandits", Proceedings of 40th International Conference on Machine Learning (ICML2023), Proceedings of Machine Learning Research 202, 18810–18851, (2023).
  37. Ishida, T., Yamane, I., Charoenphakdee, N., Niu, G., and Sugiyama, M., "Is the performance of my deep network too good to be true? A direct approach to estimating the Bayes error in binary classification", In Proceedings of Eleventh International Conference on Learning Representations (ICLR2023), (2023).
  38. Ghamizi, S., Zhang, J., Cordy, M., Papadakis, M., Sugiyama, M., and Traon, L. Y., "GAT: Guided adversarial training with Pareto-optimal auxiliary tasks", Proceedings of 40th International Conference on Machine Learning (ICML2023), Proceedings of Machine Learning Research 202, 11255–11282, (2023).
  39. Futami, F., and Fujisawa, M., "Time-Independent Information-Theoretic Generalization Bounds for SGLD", Advances in Neural Information Processing Systems 36 (NeurIPS 2023) 36, 8173–8185, (2023).
  40. Fang, T., Lu, N., Niu, G., and Sugiyama, M., "Generalizing Importance Weighting to A Universal Solver for Distribution Shift Problems", Advances in Neural Information Processing Systems 36, 24171–24190, (2023).
  41. Dong, R., Liu, F., Chi, H., Liu, T., Gong, M., Niu, G., Sugiyama, M., and Han, B., "Diversity-enhancing generative network for few-shot hypothesis adaptation", Proceedings of 40th International Conference on Machine Learning (ICML2023), Proceedings of Machine Learning Research 202, 8260–8275, (2023).
  42. Cai, X., Zhang, Y., Chiang, C., and Sugiyama, M., "Imitation learning from vague feedback", Advances in Neural Information Processing Systems 36, 48275–48292, (2023).
  43. Cai, X., Zhang, P., Zhao, L., Bian, J., Sugiyama, M., and Llorens, A., "Distributional Pareto-Optimal Multi-Objective Reinforcement Learning", Advances in Neural Information Processing Systems 36, 15593–15613, (2023).
  44. Cai, X. Q., Ding, Y. X., Chen, Z. X., Jiang, Y., Sugiyama, M., and Zhou, Z. H., "Seeing differently, acting similarly: Heterogeneously observable imitation learning", Proceedings of Eleventh International Conference on Learning Representations (ICLR2023), (2023).
  45. Zhu, J., Yao, J., Han, B., Zhang, J., Liu, T., Niu, G., Zhou, J., Xu, J., and Yang, H., "Reliable Adversarial Distillation with Unreliable Teachers", Proceedings of 10th International Conference on Learning Representations (ICLR 2022), (2022).
  46. Zhou, J., Zhou, J., Zhang, J., Liu, T., Niu, G., Han, B., and Sugiyama, M., "Adversarial training with complementary labels: On the benefit of gradually informative attacks", Advances in Neural Information Processing Systems, 23621–23633, (2022).
  47. Zhang, Y., Gong, M., Liu, T., Niu, G., Tian, X., Han, B., Schölkopf, B., and Zhang, K., "CausalAdv: Adversarial Robustness Through the Lens of Causality", Proceedings of 10th International Conference on Learning Representations (ICLR 2022), (2022).
  48. Zhang, F., Feng, L., Han, B., Liu, T., Niu, G., Qin, T., and Sugiyama, M., "Exploiting Class Activation Value for Partial-Label Learning", Proceedings of Tenth International Conference on Learning Representations (ICLR2022), 1–17, (2022).
  49. Yao, Y., Liu, T., Han, B., Gong, M., Niu, G., Sugiyama, M., and Tao, D., "Rethinking class-prior estimation for positive-unlabeled learning", Proceedings of Tenth International Conference on Learning Representations (ICLR2022), 1–12, (2022).
  50. Yang, S., Yang, E., Han, B., Liu, Y., Xu, M., Niu, G., and Liu, T., "Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network", Proceedings of 39th International Conference on Machine Learning (ICML 2022), (2022).
  51. Yan, H., Zhang, J., Feng, J., Sugiyama, M., and Tan, V. Y., "Towards Adversarially Robust Deep Image Denoising", Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22), 1516–1522, (2022).
  52. Xu, X., Zhang, J., Liu, F., Sugiyama, M., and Kankanhalli, M., "Adversarial attacks and defenses for non-parametric two-sample tests", Proceedings of Machine Learning Research 162, 24743–24769, (2022).
  53. Xu, N., Qiao, C., Lyu, J., Geng, X., and Zhang, M., "One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement", Advances in Neural Information Processing Systems 35 (NeurIPS 2022), (2022).
  54. Xie, Z., Wang, X., Zhang, H., Sato, I., and Sugiyama, M., "Adaptive inertia: Disentangling the effects of adaptive learning rate and momentum", Proceedings of Machine Learning Research, 24430–24459, (2022).
  55. Xia, X., Liu, T., Han, B., Gong, M., Yu, J., Niu, G., and Sugiyama, M., "Sample selection with uncertainty of losses for learning with noisy labels", Proceedings of Tenth International Conference on Learning Representations (ICLR2022), (2022).
  56. Xia, S., Lv, J., Xu, N., and Geng, X., "Ambiguity-Induced Contrastive Learning for Instance-Dependent Partial Label Learning", Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, 3615–3621, (2022).
  57. Wei, J., Zhu, Z., Cheng, H., Liu, T., Niu, G., and Liu, Y., "Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations", Proceedings of 10th International Conference on Learning Representations (ICLR 2022), (2022).
  58. Wei, J., Liu, H., Liu, T., Niu, G., Sugiyama, M., and Liu, Y., "To smooth or not? When label smoothing meets noisy labels", Proceedings of Machine Learning Research 162, 23589–23614, (2022).
  59. Wang, H., Xiao, R., Li, Y., Feng, L., Niu, G., Chen, G., and Zhao, J., "PiCO: Contrastive Label Disambiguation for Partial Label Learning", Proceedings of 10th International Conference on Learning Representations (ICLR 2022), (2022).
  60. Tang, Y., Lu, N., Zhang, T., and Sugiyama, M., "Multi-class classification from multiple unlabeled datasets with partial risk regularization", Proceedings of Machine Learning Research, 1–16, (2022).
  61. Sugiyama, M., Liu, T., Han, B., Liu, Y., and Niu, G., "Learning and mining with noisy labels", Proceedings of the 31st ACM International Conference on Information & Knowledge Management (CIKM2022), 5152–5155, (2022).
  62. Nakamura, S., Bao, H., and Sugiyama, M., "Robust computation of optimal transport by β-potential regularization", Proceedings of Machine Learning Research, 1–26, (2022).
  63. Lu, N., Wang, Z., Li, X., Niu, G., Dou, Q., and Sugiyama, M., "Federated learning from only unlabeled data with class-conditional-sharing clients", Proceedings of Tenth International Conference on Learning Representations (ICLR2022), 1–22, (2022).
  64. Gao, R., Wang, J., Zhou, K., Liu, F., Xie, B., Niu, G., Han, B., and Cheng, J., "Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack", Proceedings of 39th International Conference on Machine Learning (ICML 2022), (2022).
  65. Cui, S., Zhang, J., Liang, J., Han, B., Sugiyama, M., and Zhang, C., "Synergy-of-experts: Collaborate to improve adversarial robustness", Advances in Neural Information Processing Systems, 32552–32567, (2022).
  66. Chi, H., Liu, F., Yang, W., Lan, L., Liu, T., Niu, G., and Han, B., "Meta discovery: Learning to discover novel classes given very limited data", Proceedings of Tenth International Conference on Learning Representations (ICLR2022), 25–29, (2022).
  67. Cheng, D., Liu, T., Ning, Y., Wang, N., Han, B., Niu, G., Gao, X., and Sugiyama, M., "Instance-Dependent Label-Noise Learning with Manifold-Regularized Transition Matrix Estimation", Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2022), 16630–16639, (2022).
  68. Chen, S., Gong, C., Li, J., Yang, J., Niu, G., and Sugiyama, M., "Learning contrastive embedding in low-dimensional space", Advances in Neural Information Processing Systems, 6345–6357, (2022).
  69. Cao, Y., Cai, T., Feng, L., Gu, L., Gu, J., An, B., Niu, G., and Sugiyama, M., "Generalizing consistent multi-class classification with rejection to be compatible with arbitrary losses", Advances in Neural Information Processing Systems, 521–534, (2022).
  70. Bao, H., Shimada, T., Xu, L., Sato, I., and Sugiyama, M., "Pairwise Supervision Can Provably Elicit a Decision Boundary", Proceedings of 25th International Conference on Artificial Intelligence and Statistics (AISTATS2022), 2618–2640, (2022).
  71. Bai, Y., Zhang, Y., Zhao, P., Sugiyama, M., and Zhou, Z., "Adapting to online label shift with provable guarantees", Advances in Neural Information Processing Systems, 29960–29974, (2022).
  72. Zhang, Y., Niu, G., and Sugiyama, M., "Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization", Proceedings of Machine Learning Research 139, 12501–12512, (2021).
  73. Zhang, J., Zhu, J., Niu, G., Han, B., Sugiyama, M., and Kankanhalli, M., "Geometry-aware instance-reweighted adversarial training", Proceedings of Ninth International Conference on Learning Representations (ICLR2021), (2021).
  74. Zhang, J., Xu, C., Li, J., Chen, W., Wang, Y., Tai, Y., Chen, S., Wang, C., Huang, F., and Liu, Y., "Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model", Advances in Neural Information Processing Systems 34 (NeurIPS 2021), (2021).
  75. Yoshida, S. M., Takenouchi, T., and Sugiyama, M., "Lower-Bounded Proper Losses for Weakly Supervised Classification", Proceedings of Machine Learning Research 139, 12110–12120, (2021).
  76. Yao, Y., Liu, T., Gong, M., Han, B., Niu, G., and Zhang, K., "Instance-dependent Label-noise Learning under a Structural Causal Model", Advances in Neural Information Processing Systems 34 (NeurIPS 2021), (2021).
  77. Yan, H., Zhang, J., Niu, G., Feng, J., Tan, V. Y., and Sugiyama, M., "CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection", Proceedings of Machine Learning Research 139, 11693–11703, (2021).
  78. Yamane, I., Honda, J., Yger, F., and Sugiyama, M., "Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences", Proceedings of Machine Learning Research 139, 11637–11647, (2021).
  79. Xie, Z., Yuan, L., Zhu, Z., and Sugiyama, M., "Positive-negative momentum: Manipulating stochastic gradient noise to improve generalization", Proceedings of Machine Learning Research 139, 11448–11458, (2021).
  80. Xie, Z., Sato, I., and Sugiyama, M., "A diffusion theory for deep learning dynamics: Stochastic gradient descent exponentially favors flat minima", Proceedings of Ninth International Conference on Learning Representations (ICLR2021), (2021).
  81. Wu, S., Xia, X., Liu, T., Han, B., Gong, M., Wang, N., Liu, H., and Niu, G., "Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels", Proceedings of Machine Learning Research 139, 11285–11295, (2021).
  82. Wang, Q., Liu, F., Han, B., Liu, T., Gong, C., Niu, G., Zhou, M., and Sugiyama, M., "Probabilistic margins for instance reweighting in adversarial training", Advances in Neural Information Processing Systems 34 (NeurIPS 2021), (2021).
  83. Teshima, T., and Sugiyama, M., "Incorporating causal graphical prior knowledge into predictive modeling via simple data augmentation", Proceedings of Machine Learning Research 161, 86–89, (2021).
  84. Tao, G., Ji, X., Wang, W., Chen, S., Lin, C., Cao, Y., Lu, T., Luo, D., and Tai, Y., "Spectrum-to-Kernel Translation for Accurate Blind Image Super-Resolution", Advances in Neural Information Processing Systems 34 (NeurIPS 2021), (2021).
  85. Tangkaratt, V., Charoenphakdee, N., and Sugiyama, M., "Robust imitation learning from noisy demonstrations", Proceedings of Machine Learning Research 130, 298–306, (2021).
  86. Parmas, P., and Sugiyama, M., "A unified view of likelihood ratio and reparameterization gradients and an optimal importance sampling scheme", Proceedings of Machine Learning Research 130, 4078–4086, (2021).
  87. Nozawa, K., and Sato, I., "Understanding Negative Samples in Instance Discriminative Self-supervised Representation Learning", Advances in Neural Information Processing Systems 34 (NeurIPS 2021), (2021).
  88. Lu, N., Lei, S., Niu, G., Sato, I., and Sugiyama, M., "Binary Classification from multiple unlabeled datasets via surrogate set classification", Proceedings of Machine Learning Research 139, 7134–7144, (2021).
  89. Li, X., Liu, T., Han, B., Niu, G., and Sugiyama, M., "Provably end-to-end label-noise learning without anchor points", Proceedings of Machine Learning Research 139, 6403–6413, (2021).
  90. Li, D., Qiu, T., Chen, S., Li, Q., and Xu, S., "Can We Leverage Predictive Uncertainty to Detect Dataset Shift and Adversarial Examples in Android Malware Detection?", Annual Computer Security Applications Conference (ACSAC2021), (2021).
  91. Jacovi, A., Niu, G., Goldberg, Y., and Sugiyama, M., "Scalable Evaluation and Improvement of Document Set Expansion via Neural Positive-Unlabeled Learning", Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL2021), 581–592, (2021).
  92. Han, Z., Fu, Z., Chen, S., and Yang, J., "Contrastive Embedding for Generalized Zero-shot Learning", Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2021), (2021).
  93. Gao, R., Liu, F., Zhang, J., Han, B., Liu, T., Niu, G., and Sugiyama, M., "Maximum mean discrepancy is aware of adversarial attacks", Proceedings of Machine Learning Research 139, 3564–3575, (2021).
  94. Futami, F., Iwata, T., Ueda, N., Sato, I., and Sugiyama, M., "Loss function based second-order Jensen inequality and its application to particle variational inference", Advances in Neural Information Processing Systems 34, 6803–6815, (2021).
  95. Fujisawa, M., Teshima, T., Sato, I., and Sugiyama, M., "γ-ABC: Outlier-robust approximate Bayesian computation based on a robust divergence estimator", Proceedings of Machine Learning Research 130, 1783–1791, (2021).
  96. Feng, L., Shu, S., Lu, N., Han, B., Xu, M., Niu, G., An, B., and Sugiyama, M., "Pointwise binary classification with pairwise confidence comparisons", Proceedings of Machine Learning Research 139, 3252–3262, (2021).
  97. Feng, L., Shu, S., Cao, Y., Tao, L., Wei, H., Xiang, T., An, B., and Niu, G., "Multiple-Instance Learning from Similar and Dissimilar Bags", Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD2021), 374–382, (2021).
  98. Du, X., Zhang, J., Han, B., Liu, T., Rong, Y., Niu, G., Huang, J., and Sugiyama, M., "Learning diverse-structured networks for adversarial robustness", Proceedings of Machine Learning Research 139, 2880–2891, (2021).
  99. Dan, S., Bao, H., and Sugiyama, M., "Learning from noisy similar and dissimilar data", Lecture Notes in Comput. Sci. 12976, 233–249, (2021).
  100. Chen, S., Niu, G., Gong, C., Li, J., Yang, J., and Sugiyama, M., "Large-margin contrastive learning with distance polarization regularizer", Proceedings of Machine Learning Research 139, 1673–1683, (2021).
  101. Charoenphakdee, N., Vongkulbhisal, J., Chairatanakul, N., and Sugiyama, M., "On Focal Loss for Class-Posterior Probability Estimation: A Theoretical Perspective", Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2021), 5202–5211, (2021).
  102. Charoenphakdee, N., Cui, Z., Zhang, Y., and Sugiyama, M., "Classification with rejection based on cost-sensitive classification", Proceedings of Machine Learning Research 139, 1507–1517, (2021).
  103. Cao, Y., Feng, L., Xu, Y., An, B., Niu, G., and Sugiyama, M., "Learning from Similarity-Confidence Data", Proceedings of Machine Learning Research 139, 1272–1282, (2021).
  104. Berthon, A., Han, B., Liu, T., Niu, G., and Sugiyama, M., "Confidence scores make instance-dependent label-noise learning possible", Proceedings of Machine Learning Research 139, 825–836, (2021).
  105. Bao, H., and Sugiyama, M., "Fenchel-Young losses with skewed entropies for class-posterior probability estimation", Proceedings of Machine Learning Research 130, 1648–1656, (2021).
  106. Bai, Y., Yang, E., Han, B., Yang, Y., Li, J., Mao, Y., Niu, G., and Liu, T., "Understanding and Improving Early Stopping for Learning with Noisy Labels", Advances in Neural Information Processing Systems 34 (NeurIPS 2021), (2021).
レビュー / Review
  1. Kuroki, Y., Honda, J., and Sugiyama, M., "Combinatorial pure exploration with full-bandit feedback and beyond: Solving combinatorial optimization under uncertainty with limited observation", The Fields Institute Communications Series on Data Science and Optimization, (2023).
  2. Charoenphakdee, N., Lee, J., and Sugiyama, M., "A symmetric loss perspective of reliable machine learning", The Fields Institute Communications Series on Data Science and Optimization, (2023).
  3. Lu, N., Zhang, T., Fang, T., Teshima, T., and Sugiyama, M., "Rethinking importance weighting for transfer learning", Federated and Transfer Learning 27, 185–231, (2022).
その他 / Other
  1. Koshizuka, T., Fujisawa, M., Tanaka, Y., and Sato, I., "Initialization Bias of Fourier Neural Operator: Revisiting the Edge of Chaos", arXiv, (2023).