Logic and AI

Time: 9:50 am-12:15 pm, every Friday, Spring 2026

Location: Room 3104, Teaching Building 3, Tsinghua Campus

Description: This seminar explores the current interfaces between logic and artificial intelligence. It will include guest lectures, presentations by invited authors on their recent work, and student presentations on the latest research in the field.

Contact: Fenrong Liu (fenrongATtsinghua.edu.cn); TA (zuomj25ATmails.tsinghua.edu.cn)

Plan

References

I. Foundations of Causal Learning

For Hanti Lin's three guest lectures, please read Urns and Trees: A Minimalist Guide to Probability for Frequentist Statistics & Machine Learning PDF (approximately 7,000 words) before our first class meeting. The text includes exercises, but these are for self-study only and need not be completed.

II. Logical Reasoning of LLMs (Thanks to Fengxiang Cheng for preparing this)

  1. Fengxiang Cheng, Haoxuan Li, Fenrong Liu, Robert van Rooij, Kun Zhang, and Zhouchen Lin, Empowering LLMs with Logical Reasoning: A Comprehensive Survey, IJCAI 2025 Proceedings. PDF
  2. Xiangyu Wang, Haocheng Yang, Fengxiang Cheng, and Fenrong Liu, Adaptive Selection of Symbolic Languages for Improving LLM Logical Reasoning, AAAI 2026 Workshop on Post-AI Formal Methods. PDF (arXiv)
  3. Theo Olausson, Alex Gu, et al., LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers, EMNLP 2023. PDF
  4. Liangming Pan, Alon Albalak, et al., Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning, Findings of EMNLP 2023. PDF (arXiv)
  5. Hyun Ryu, Gyeongman Kim, et al., Divide and Translate: Compositional First-order Logic Translation and Verification for Complex Logical Reasoning, ICLR 2025. PDF
  6. A. Brunello, L. Geatti, M. Mignani, A. Montanari, and N. Saccomanno, Do LLMs Really Struggle at NL-FOL Translation? Revealing their Strengths via a Novel Benchmarking Strategy, AAAI 2026. PDF (arXiv)
  7. Jundong Xu, et al., Faithful Logical Reasoning via Symbolic Chain-of-Thought, ACL 2024. PDF
  8. Jundong Xu, et al., Aristotle: Mastering Logical Reasoning with a Logic-Complete Decompose-Search-Resolve Framework, ACL 2025. PDF
  9. Jundong Xu, et al., MuSLR: Multimodal Symbolic Logical Reasoning, NeurIPS 2025. PDF
  10. Jundong Xu, et al., LogicReward: Incentivizing LLM Reasoning via Step-Wise Logical Supervision, ICLR 2026. PDF (openreview)
  11. Terufumi Morishita, Gaku Morio, et al., Enhancing Reasoning Capabilities of LLMs via Principled Synthetic Logic Corpus, NeurIPS 2024. PDF
  12. Yuxuan Wan, Wenxuan Wang, et al., LogicAsker: Evaluating and Improving the Logical Reasoning Ability of Large Language Models, EMNLP 2024. PDF
  13. Haocheng Yang, Fengxiang Cheng, Tianjun Yao, Jiajun Chai, Xiaohan Wang, Guojun Yin, Wei Lin, Mengyue Yang, Yisen Wang, Fenrong Liu, Haoxuan Li, and Soummya Kar, Enhancing Complex Symbolic Logical Reasoning of Large Language Models via Sparse Multi-Agent Debate, ICLR 2026.
  14. Zheng Chen, Chuan Zhou, Fengxiang Cheng, Yip Tin Po, Fenrong Liu, Yisen Wang, Jiajun Chai, Xiaohan Wang, Guojun Yin, Wei Lin, Haoxuan Li, Bo Li, and Zhouchen Lin, LogiConBench: Benchmarking Logical Consistencies of LLMs, ICLR 2026. PDF (openreview)
  15. Zhaozuo Liu, Zhengnan Li, Fengxiang Cheng, and Fenrong Liu, Enhancing LLMs in Legal Judgment Prediction via Neuro-Symbolic Reasoning, AAAI 2026 Workshop Language Models for Underserved Communities.
  16. Manuj Kant, et al., Equitable Access to Justice: Logical LLMs Show Promise, arXiv preprint (2024). PDF (openreview)

III. Causal Reasoning (Many thanks to Chunyuan Zheng and Haoxuan Li for preparing this list, as well as the references on causality in Section IV.)

  1. Liu, Xiaoyu, et al. Large Language Models and Causal Inference in Collaboration: A Survey. Findings of NAACL 2025. (Survey paper) PDF
  2. Tan, Juanhe TJ. Causal Abstraction for Chain-of-Thought Reasoning in Arithmetic Word Problems. ACL 2023 Workshop. (Causal understanding of chain-of-thought reasoning in large language models) PDF
  3. Stolfo, et al. A Causal Framework to Quantify the Robustness of Mathematical Reasoning with Language Models. ACL 2023. (Using causal frameworks to evaluate mathematical reasoning in large language models) PDF
  4. Kıcıman, Emre, et al. Causal Reasoning and Large Language Models: Opening a New Frontier for Causality. TMLR 2024. PDF (arXiv)
  5. Jin, Zhijing, et al. Can Large Language Models Infer Causation from Correlation? ICLR 2024. PDF (arXiv)
  6. Liu, Chenxi, et al. Discovery of the Hidden World with Large Language Models. NeurIPS 2024. (Paper by Kun Zhang's research group) PDF (arXiv)
  7. Jin, Zhijing, et al. CLadder: Assessing Causal Reasoning in Language Models. NeurIPS 2023. PDF (arXiv)
  8. Liu, Xiao, et al. Are LLMs Capable of Data-Based Statistical and Causal Reasoning? Benchmarking Advanced Quantitative Reasoning with Data. Findings of ACL 2024. PDF (arXiv)

IV. Other Useful References

  1. Xiaoyu Liu et al. Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey. March 2024.
  2. Linying Yang et al. A Critical Review of Causal Inference Benchmarks for Large Language Models. AAAI 2024 Workshop.
  3. Usman Anwar et al. Foundational Challenges in Assuring Alignment and Safety of Large Language Models. arXiv, 2024. PDF
  4. Zhijing Jin et al. CLadder: A Benchmark to Assess Causal Reasoning Capabilities of Language Models. NeurIPS 2023.
  5. Tejas Kasetty et al. Evaluating Interventional Reasoning Capabilities of Large Language Models. arXiv, 2024. PDF
  6. Kiho Park et al. The Linear Representation Hypothesis and the Geometry of Large Language Models. arXiv, 2023. PDF
  7. Atticus Geiger et al. Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations. Causal Learning and Reasoning, 2024.
  8. Allen Nie et al. MoCa: Measuring Human-Language Model Alignment on Causal and Moral Judgment Tasks. NeurIPS 2024.
  9. Matej Zečević et al. Causal Parrots: Large Language Models May Talk Causality but Are Not Causal. arXiv, 2023. PDF
  10. Aniket Vashishtha et al. Causal Inference Using LLM-Guided Discovery. arXiv, 2023. PDF
  11. Patrik Reizinger et al. Understanding LLMs Requires More Than Statistical Generalization. arXiv, 2024. PDF
  12. Moritz Willig et al. Can Foundation Models Talk Causality?. arXiv, 2022. PDF
  13. Kevin Xia et al. The Causal-Neural Connection: Expressiveness, Learnability, and Inference. NeurIPS 2021.
  14. Alexander D’Amour et al. Underspecification Presents Challenges for Credibility in Modern Machine Learning. JMLR, 2022.
  15. Stephanie Long et al. Causal Discovery with Language Models as Imperfect Experts. arXiv, 2023. PDF
  16. Goutham Rajendran et al. Learning Interpretable Concepts: Unifying Causal Representation Learning and Foundation Models. arXiv, 2024. PDF
  17. Yibo Jiang et al. On the Origins of Linear Representations in Large Language Models. arXiv, 2024. PDF
  18. Zihao Wang et al. Concept Algebra for Score-Based Text-Controlled Generative Models. NeurIPS 2024.
  19. Sharut Gupta et al. Context Is Environment. ICLR 2024. PDF
  20. Andrew Lampinen et al. Passive Learning of Active Causal Strategies in Agents and Language Models. NeurIPS 2023. PDF
  21. Zhengxuan Wu et al. Interpretability at Scale: Identifying Causal Mechanisms in Alpaca. NeurIPS 2024.
  22. Emre Kıcıman et al. Causal Reasoning and Large Language Models: Opening a New Frontier for Causality. arXiv, 2023. PDF
  23. Francesco Montagna et al. Demystifying Amortized Causal Discovery with Transformers. arXiv, 2024. PDF
  24. Imant Daunhawer et al. Identifiability Results for Multimodal Contrastive Learning. ICLR 2023. PDF
  25. Pedro Sanchez and Sotirios Tsaftaris. Diffusion Causal Models for Counterfactual Estimation. CLeaR 2022. PDF
  26. Yushu Pan and Elias Bareinboim. Counterfactual Image Editing. arXiv, 2024. PDF
  27. Jingling Li et al. Steering LLMs Towards Unbiased Responses: A Causality-Guided Debiasing Framework. ICLR 2024 Workshop.
  28. Ahmed Abdulaal et al. Causal Modelling Agents: Causal Graph Discovery Through Synergising Metadata- and Data-Driven Reasoning. ICLR 2024. PDF
  29. Jonathan Richens and Tom Everitt. Robust Agents Learn Causal World Models. ICLR 2024. PDF
  30. Amir Feder et al. Causal-Structure Driven Augmentations for Text OOD Generalization. NeurIPS 2024.
  31. Fengxiang Cheng, Chuan Zhou, Xiang Li, Alina Leidinger, Haoxuan Li, Mingming Gong, Fenrong Liu, and Robert van Rooij, Mitigating Spurious Correlations via Counterfactual Contrastive Learning, EMNLP 2025. PDF
  32. Fenrong Liu (ed.), 《人工智能逻辑》 (Logic for Artificial Intelligence), New-Generation Information Technology (Artificial Intelligence) Book Series, "14th Five-Year Plan" Higher Education Textbook Series for Strategic Emerging Fields, Tsinghua University Press, 1st edition, December 2025.