
CV

Education

  • Master of Science in Financial Engineering, The Chinese University of Hong Kong (Shenzhen)
  • Bachelor of Economics, Guangzhou University

Work experience

  • PhD candidate, Mathematical Institute, Utrecht University (Feb 2022 – present)
    • Topic: Numerical methods for time-inconsistent stochastic control problems
    • Promotor: Prof. Kees Oosterlee
  • Teaching Assistant, The Chinese University of Hong Kong (Shenzhen), Dec 2018 - Jan 2022

A complete CV can be obtained upon request.


Notes


ADMM

Background: ADMM (the Alternating Direction Method of Multipliers) was proposed about 40 years ago and has recently attracted a lot of attention. The convergence of 2-block ADMM for convex problems has long been known; however, a 2014 paper showed that multi-block ADMM can diverge even when solving a 3×3 linear system. Interestingly, if we randomly permute the update order in each cycle (e.g. (132), (231), … instead of the traditional cyclic order (123), (123), …), the algorithm converges. The question is: why?

Our contribution:

  1. Result. We show that, for solving linear systems, RP-ADMM (randomly permuted ADMM) converges in expectation for any number of blocks.
  2. High-level idea. One simple explanation for this phenomenon is “symmetrization”: the update matrix of cyclic ADMM is a non-symmetric matrix with complex eigenvalues, and random permutation partially symmetrizes the update matrix, giving the eigenvalues a nicer distribution. In fact, the key result is that the eigenvalues of the update matrix of RP-CD (randomly permuted coordinate descent) lie in (−1/3, 1), a smaller region than the (−1, 1) of cyclic CD.
  3. Implications.
    1. Problem level: RP-ADMM can potentially be a good solver for large-scale linearly constrained problems (LP, SDP, etc.)
    2. Algorithm level: It was widely believed that the RP rule is better than the cyclic rule, but little theory had been established. This work provides one of the first theoretical results showing that the RP rule beats the cyclic rule.
      • Reference: Ruoyu Sun, Zhi-Quan Luo, Yinyu Ye, “On the Expected Convergence of Randomly Permuted ADMM”.
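The spectral claim above can be checked numerically for coordinate descent on a symmetric positive definite linear system. The sketch below is an illustration, not the paper's code: the test matrix and seed are arbitrary choices. It builds the linear map of one Gauss–Seidel-style sweep for each visiting order, averages over all permutations, and inspects the eigenvalues of the expected RP-CD update matrix, which should be real and lie in (−1/3, 1):

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)   # symmetric positive definite test matrix

def sweep_matrix(A, order):
    # Linear map of one coordinate-descent sweep for solving A x = b,
    # visiting coordinates in `order`; returns M with x_new = M x_old + c.
    n = A.shape[0]
    M = np.eye(n)
    for i in order:
        E = np.eye(n)
        E[i, :] = -A[i, :] / A[i, i]  # x_i <- -(sum_{j != i} A_ij x_j) / A_ii
        E[i, i] = 0.0
        M = E @ M
    return M

# Cyclic CD uses the fixed order (0, 1, ..., n-1) in every sweep.
M_cyclic = sweep_matrix(A, range(n))

# RP-CD: average the sweep matrix over all n! visiting orders.
M_rp = np.mean([sweep_matrix(A, p) for p in itertools.permutations(range(n))],
               axis=0)

eig_rp = np.linalg.eigvals(M_rp)
print("cyclic CD eigenvalues:     ", np.round(np.linalg.eigvals(M_cyclic), 3))
print("expected RP-CD eigenvalues:", np.round(np.sort(eig_rp.real), 3))
```

The averaging over each permutation and its reversal is what symmetrizes the map: the expected sweep matrix has real spectrum even though each individual cyclic sweep matrix generally does not.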

Matrix Completion

Updated 07/2016: highlighted the local-geometry nature of our proof; new slides give a cleaner summary of the proof sketch.

Background: Motivated by applications such as recommender systems (e.g. the Netflix prize), the problem of recovering a low-rank matrix from a few observed entries has attracted much attention recently. It is a prototypical example of how to exploit low-rank structure when dealing with big data. There are two popular approaches to imposing the low-rank structure: the nuclear-norm-based approach and the matrix-factorization (MF) based approach. The latter is especially amenable to big-data problems and served as the basic component of most competing algorithms for the Netflix prize. However, due to the non-convexity, it has seemed difficult to obtain theoretical guarantees.
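A minimal sketch of the MF approach the paragraph describes: factor the unknown matrix as U Vᵀ and run gradient descent on the squared error over observed entries only. This is an illustration of the non-convex formulation, not the method analyzed in the note; the sizes, step size, and sampling rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 30, 2

# Ground-truth rank-r matrix and a random observation mask.
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.5          # observe roughly half the entries

# MF model M ~ U @ V.T, fitted by gradient descent on the squared
# error over the observed entries only (the non-convex objective).
U = 0.1 * rng.standard_normal((n, r))
V = 0.1 * rng.standard_normal((n, r))
lr = 0.01
for _ in range(5000):
    R = mask * (U @ V.T - M)             # residual on observed entries
    U, V = U - lr * (R @ V), V - lr * (R.T @ U)

rel_err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
print(f"relative recovery error: {rel_err:.4f}")
```

Despite the non-convexity, plain gradient descent from a small random initialization recovers the matrix here, which is exactly the kind of behavior the theoretical work aims to explain.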



Research


Research Interests

I am mainly interested in stochastic modelling and computational methods in financial mathematics. Currently, I am working on algorithms for general stochastic control problems in finance.


Publications

  • Michael C.H. Choi, Zhipeng Huang, Generalized Markov chain tree theorem and Kemeny’s constant for a class of non-Markovian matrices, Statistics & Probability Letters, Volume 193, 2023, 109739, ISSN 0167-7152, https://doi.org/10.1016/j.spl.2022.109739.

  • Abstract: Given an ergodic Markov chain with transition matrix P and stationary distribution π, the classical Markov chain tree theorem expresses π in terms of graph-theoretic parameters associated with the graph of P. For a class of non-stochastic matrices M2 associated with P, recently introduced by the first author in Choi (2020) and Choi and Huang (2020), we prove a generalized version of the Markov chain tree theorem in terms of graph-theoretic quantities of M2. This motivates us to define generalized versions of the mean hitting time, fundamental matrix and Kemeny’s constant associated with M2, and we show that they enjoy properties similar to their counterparts for P even though M2 is non-stochastic. We hope to shed light on how concepts and results originating in the Markov chain literature, such as the Markov chain tree theorem, Kemeny’s constant or the notion of hitting time, can be extended and generalized to a broader class of non-stochastic matrices by introducing appropriate graph-theoretic parameters. In particular, when P is reversible, the results of this paper reduce to those of P.
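The classical theorem that the paper generalizes states that π_i is proportional to the total weight of spanning arborescences directed into state i, where a tree's weight is the product of its edge probabilities. The brute-force check below is purely illustrative (the transition matrix is arbitrary, and the code does not touch the non-stochastic matrices M2 of the paper): it enumerates all arborescences into each state and compares the normalized tree weights with the stationary distribution obtained from the leading eigenvector of Pᵀ.

```python
import itertools

import numpy as np

# A small ergodic transition matrix P (rows sum to 1).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
n = P.shape[0]

def tree_weight_sum(P, root):
    # Sum of weights of all spanning arborescences pointing into `root`:
    # each non-root node j picks one out-edge j -> f(j); keep only the
    # choices whose paths all reach the root (i.e. no cycles).
    others = [j for j in range(n) if j != root]
    total = 0.0
    for choice in itertools.product(range(n), repeat=len(others)):
        f = dict(zip(others, choice))
        if any(f[j] == j for j in others):   # trees contain no self-loops
            continue
        ok = True
        for j in others:
            seen, k = set(), j
            while k != root:                 # follow j's path to the root
                if k in seen:                # revisit => cycle => not a tree
                    ok = False
                    break
                seen.add(k)
                k = f[k]
            if not ok:
                break
        if ok:
            total += np.prod([P[j, f[j]] for j in others])
    return total

w = np.array([tree_weight_sum(P, i) for i in range(n)])
pi_tree = w / w.sum()

# Stationary distribution from the eigenvector of P^T for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi_eig = np.real(vecs[:, np.argmax(np.real(vals))])
pi_eig = pi_eig / pi_eig.sum()

print("tree formula:", np.round(pi_tree, 6))
print("eigenvector: ", np.round(pi_eig, 6))
```

The two vectors agree, which is the content of the classical theorem; the paper's contribution is extending this tree characterization to the non-stochastic matrices M2.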


Teaching



Teaching Assistant at CUHK-SZ

  • Fall 2021: MFE5110 Stochastic Models, instructor: Prof. CHEN Nan (CUHK)

  • Spring 2021: MFE5150 Financial Data Analysis, instructor: Prof. LI Lingfei (CUHK)

  • Fall 2020, 2021: STA4020 Statistical Modeling in Financial Market, instructor: Dr. John Wright (CUHK)

  • Summer 2020: MAT3007 Optimization 1, instructor: Prof. Andre Milzarek

  • Fall 2019: DDA6010 Optimization Theory (PhD Course), instructor: Prof. Stark Draper (University of Toronto)

  • Fall 2019: STA4001 Stochastic Process, instructor: Prof. Jim Dai (CUHK-SZ & Cornell)

  • Summer 2019: RMS4060 Risk Management with Derivatives, instructor: Prof. HU Sang

  • Spring 2019, 2020, 2021: STA2001 Probability and Statistics 1, instructor: Prof. CHEN Tianshi

  • Spring 2019, 2020, 2021: DDA6001 Stochastic Process (PhD Course), instructor: Prof. Masakiyo Miyazawa (CUHK-SZ & Tokyo University of Science)


Instructor

Not yet.