Monday, March 13 through Friday, March 17, 10:00 AM to 12:00 PM (noon) each day.
Locations vary; see the schedule below.
Live stream link: https://gatech.zoom.us/j/99381428980
Speaker: Sanjay Shakkottai
Causal Inference Course
Moving away from decision-making based on observed correlations in data, causal inference develops the mathematical foundations for reasoning about the direction of implication, i.e., cause and effect, underlying observed dependencies in data. These foundations lead to tools and techniques for building better models and making better decisions in emerging data-driven systems. This short course covers the motivation, mathematical foundations, and machine learning algorithms for causal reasoning.
Schedule
- Mon, Mar 13: Lecture 1, 10 am – noon EST, Skiles 006 (Coffee and snacks provided)
- Tue, Mar 14: Lecture 2, 10 am – noon EST, Groseclose 119 (Lunch provided)
- Wed, Mar 15: Lecture 3, 10 am – noon EST, Love Manufacturing Building 184 (Coffee and snacks provided)
- Thu, Mar 16: Lecture 4, 10 am – noon EST, Groseclose 119 (Lunch provided)
- Fri, Mar 17: Lecture 5, 10 am – noon EST, Love Manufacturing Building 184 (Coffee and snacks provided)
Topics
- Overview
  - Motivation, Examples, Interventions
- Independence, Conditional Independence, and D-Separation
  - Conditional Independence (CI)
  - Directed Acyclic Graphs (DAGs)
  - D-Separation Properties
  - Global Markov Property
- Mathematical Formalism
  - Structural Causal Model (SCM)
  - Graphical Representation
- Interventions Overview
  - Observational vs. Interventional SCM
  - ‘Do’ Operation with SCM
  - Types of Interventions
  - Alternate Representations of ‘Do’
  - Total Causal Effect
- Interventions Calculus
  - Computing the Intervention Distribution Using the Observational Distribution
  - Truncated Factorization Theorem
  - Average Causal Effect (ACE)
  - Kidney Stone Example (Simpson’s Paradox)
- Adjustment
  - Definition of Confounding
  - Valid Adjustment Set
  - Invariant Conditionals
  - Adjustment Theorem (Parental Adjustment, Backdoor Criterion)
- Do-Calculus
  - General Rules for Deriving the Intervention Distribution from the Observational Distribution (Generalizing the Adjustment Theorem)
  - Front Door Theorem
- Learning Causal Models
  - Learning with Infinite Samples
  - Learning up to Markov Equivalence (CPDAG)
  - Faithfulness
  - Algorithms for Structure Learning
    - PC Algorithm for CPDAG
    - ICA Algorithm for LiNGAM
  - Hidden Variables (Latent Confounders)
  - Instrumental Variables and the 2SLS Method
- Conditional Independence (CI) Testing
  - Hardness of CI Testing
  - Partial Correlation Coefficient
  - Kernel-Based Methods
  - Conditional Randomization
  - Classifier-Based Testing
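As a taste of the "observational vs. interventional SCM" and ‘do’ operation topics above, here is a minimal sketch using a made-up three-variable linear SCM (the model and its coefficients are illustrative assumptions, not taken from the course material). Because Z is a common cause of X and Y, conditioning on X and intervening on X give different answers for the mean of Y:

```python
import random

random.seed(1)

# A toy structural causal model (illustrative only):
#   Z := N_z,  X := Z + N_x,  Y := 2*X + 3*Z + N_y
# Conditioning on X also carries information about Z; the intervention
# do(X := x) replaces X's assignment and severs the Z -> X edge.
def sample(do_x=None):
    z = random.gauss(0, 1)
    x = do_x if do_x is not None else z + random.gauss(0, 1)
    y = 2 * x + 3 * z + random.gauss(0, 1)
    return z, x, y

n = 100_000

# E[Y | do(X = 1)]: replace the X assignment, keep everything else.
# Analytically this is 2*1 + 3*E[Z] = 2.
do_mean = sum(sample(do_x=1.0)[2] for _ in range(n)) / n

# E[Y | X ≈ 1] in the observational SCM: select samples with X near 1.
# Analytically E[Z | X = 1] = 0.5, so this is about 2 + 3*0.5 = 3.5.
obs = [y for z, x, y in (sample() for _ in range(n)) if abs(x - 1.0) < 0.1]
obs_mean = sum(obs) / len(obs)

print(do_mean, obs_mean)  # the two quantities clearly differ
```

The gap between the two estimates is exactly what the ‘do’ formalism is designed to capture: observing X = 1 is evidence about Z, while setting X = 1 is not.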
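The kidney stone example and backdoor adjustment topics above can be sketched numerically. The counts below are the figures commonly quoted in textbook treatments of this example (treatments A and B, stratified by stone size); the point is that the crude comparison and the stratum-wise comparisons disagree, and adjusting for stone size via the truncated factorization resolves the reversal:

```python
# counts[treatment][stone_size] = (recovered, total); commonly quoted figures.
counts = {
    "A": {"small": (81, 87),   "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}

def rate(recovered, total):
    return recovered / total

# Crude (unadjusted) recovery rates: B looks better overall.
overall = {t: rate(sum(r for r, _ in s.values()), sum(n for _, n in s.values()))
           for t, s in counts.items()}

# Within each stratum of stone size Z, A is better: Simpson's paradox.
per_stratum = {t: {z: rate(*s[z]) for z in s} for t, s in counts.items()}

# Backdoor adjustment: P(R=1 | do(T=t)) = sum_z P(R=1 | T=t, Z=z) * P(Z=z),
# valid here if stone size Z is a valid adjustment set for T -> R.
n_total = sum(n for s in counts.values() for _, n in s.values())
p_z = {z: sum(counts[t][z][1] for t in counts) / n_total
       for z in ("small", "large")}
adjusted = {t: sum(per_stratum[t][z] * p_z[z] for z in p_z) for t in counts}

print(overall)      # crude: B > A
print(per_stratum)  # stratum-wise: A > B in both strata
print(adjusted)     # adjusted: A > B, reversing the crude comparison
```

The adjusted quantities are exactly the interventional distributions computed from observational data, which is the subject of the "Interventions Calculus" and "Adjustment" lectures.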
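For the CI-testing topics, the partial correlation coefficient admits a compact sketch in the linear-Gaussian setting (an assumption of this example, and the regime where partial correlation is a sound CI test): regress X on Z and Y on Z, then correlate the residuals. Here Z is a common cause, so X and Y are marginally dependent but conditionally independent given Z:

```python
import math
import random

random.seed(0)

# Synthetic linear-Gaussian data: Z is a common cause of X and Y,
# so X and Y are dependent marginally but independent given Z.
n = 2000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + 0.5 * random.gauss(0, 1) for zi in z]
y = [-zi + 0.5 * random.gauss(0, 1) for zi in z]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / math.sqrt(va * vb)

def residuals(a, c):
    # Residuals of the least-squares regression of a on a single regressor c.
    ma, mc = sum(a) / len(a), sum(c) / len(c)
    beta = (sum((ai - ma) * (ci - mc) for ai, ci in zip(a, c))
            / sum((ci - mc) ** 2 for ci in c))
    return [ai - ma - beta * (ci - mc) for ai, ci in zip(a, c)]

# Partial correlation rho(X, Y | Z) = correlation of the residual series.
marginal = corr(x, y)                              # strongly negative
partial = corr(residuals(x, z), residuals(y, z))   # near zero
print(marginal, partial)
```

Testing CI beyond this linear-Gaussian case is hard in general, which motivates the kernel-based, conditional-randomization, and classifier-based approaches listed above.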
Bio: Sanjay Shakkottai received his Ph.D. from the ECE Department at the University of Illinois at Urbana-Champaign in 2002. He is a professor in the Department of Electrical and Computer Engineering at The University of Texas at Austin and holds the Cockrell Family Chair in Engineering #15. He received the NSF CAREER award (2004) and was elected an IEEE Fellow in 2014. He was a co-recipient of the IEEE Communications Society William R. Bennett Prize in 2021 and is currently the Editor-in-Chief of IEEE/ACM Transactions on Networking. Shakkottai’s research interests lie at the intersection of algorithms for resource allocation, statistical learning, and networks, with applications to wireless communication networks and online platforms.
For references and other details, click here.
To receive all AI4OPT seminar announcements, please sign up for the mailing list at https://lists.isye.gatech.edu/mailman/listinfo/ai4opt-seminars