About me
A complete CV can be obtained upon request.
Background: ADMM (the Alternating Direction Method of Multipliers) was proposed 40 years ago and has recently attracted a great deal of attention. The convergence of 2-block ADMM for convex problems is well known; however, a 2014 paper showed that multi-block ADMM can diverge even when solving a 3×3 linear system. Interestingly, if we randomly permute the update order in each cycle (e.g. (132), (231), … instead of the traditional cyclic order (123), (123), …), the algorithm converges. The question is: why?
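The phenomenon above can be reproduced in a few lines. The following is a minimal numerical sketch, not the paper's code: it assumes the standard 3×3 counterexample matrix from the literature, a zero objective (so ADMM reduces to a linear fixed-point iteration), penalty parameter β = 1, and zero right-hand side, so the unique solution is x* = 0 and the iterate norm directly measures the error.

```python
import numpy as np

# Assumed 3x3 counterexample matrix (from the multi-block ADMM
# divergence literature); each variable block is a single scalar x[i].
A = np.array([[1., 1., 1.],
              [1., 1., 2.],
              [1., 2., 2.]])
b = np.zeros(3)  # unique solution is x* = 0

def sweep(x, lam, order, beta=1.0):
    """One ADMM cycle: update the blocks in `order`, then the multiplier."""
    for i in order:
        ai = A[:, i]
        r = A @ x - ai * x[i] - b              # residual excluding block i
        x[i] = -ai @ (lam / beta + r) / (ai @ ai)
    lam = lam + beta * (A @ x - b)
    return x, lam

rng = np.random.default_rng(0)
x0 = rng.standard_normal(3)

x_c, lam_c = x0.copy(), np.zeros(3)            # cyclic order (123),(123),...
x_r, lam_r = x0.copy(), np.zeros(3)            # random permutation each cycle
for _ in range(2000):
    x_c, lam_c = sweep(x_c, lam_c, [0, 1, 2])
    x_r, lam_r = sweep(x_r, lam_r, rng.permutation(3))

print("cyclic iterate norm:", np.linalg.norm(np.concatenate([x_c, lam_c])))
print("random-permutation norm:", np.linalg.norm(np.concatenate([x_r, lam_r])))
```

Running this, the cyclic iterates blow up geometrically while the randomly permuted iterates shrink toward zero, matching the divergence/convergence contrast described above.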
Our contribution:
– Updated 07/2016: highlighted the local-geometry nature of our proof; new slides with a cleaner summary of the proof sketch.
Background: Motivated by applications such as recommender systems (e.g. the Netflix prize), the problem of recovering a low-rank matrix from a few observations has received much attention recently. It is a prototypical example of exploiting low-rank structure to deal with big data. There are two popular approaches to imposing the low-rank structure: the nuclear-norm-based approach and the matrix factorization (MF) based approach. The latter is especially amenable to big-data problems and has served as a basic component of most competing algorithms for the Netflix prize. However, due to non-convexity, it has been difficult to obtain theoretical guarantees for it.
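To make the MF approach concrete, here is a toy sketch, not the method analyzed in the paper: it factorizes the unknown matrix as UVᵀ and runs plain gradient descent on the squared error over the observed entries. The dimensions, sampling rate, step size, and small random initialization are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 30, 2
# Ground-truth rank-2 matrix and a mask of ~50% observed entries
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.5

# Small random initialization of the factors U, V (n x r each)
U = 0.1 * rng.standard_normal((n, r))
V = 0.1 * rng.standard_normal((n, r))

def observed_loss(U, V):
    """f(U, V) = (1/2) * || P_Omega(U V^T - M) ||_F^2"""
    return 0.5 * np.linalg.norm(mask * (U @ V.T - M)) ** 2

init_loss = observed_loss(U, V)
lr = 0.01
for _ in range(2000):
    R = mask * (U @ V.T - M)                   # residual on observed entries
    U, V = U - lr * R @ V, V - lr * R.T @ U    # simultaneous gradient step
loss = observed_loss(U, V)
print(f"loss: {init_loss:.1f} -> {loss:.2e}")
```

Despite the non-convexity mentioned above, gradient descent from small random initialization typically drives the observed-entry loss close to zero on such toy instances, which is exactly the empirical behavior that the theoretical work seeks to explain.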
I am mainly interested in stochastic modelling and computational methods in financial mathematics. Currently, I am working on algorithms for general stochastic control problems in finance.
Michael C.H. Choi, Zhipeng Huang, Generalized Markov chain tree theorem and Kemeny’s constant for a class of non-Markovian matrices, Statistics & Probability Letters, Volume 193, 2023, 109739, ISSN 0167-7152, https://doi.org/10.1016/j.spl.2022.109739.
Abstract: Given an ergodic Markov chain with transition matrix P and stationary distribution π, the classical Markov chain tree theorem expresses π in terms of graph-theoretic parameters associated with the graph of P. For a class of non-stochastic matrices M2 associated with P, recently introduced by the first author in Choi (2020) and Choi and Huang (2020), we prove a generalized version of the Markov chain tree theorem in terms of graph-theoretic quantities of M2. This motivates us to define generalized versions of the mean hitting time, the fundamental matrix and Kemeny's constant associated with M2, and we show that they enjoy properties similar to those of their counterparts for P even though M2 is non-stochastic. We hope to shed light on how concepts and results originating in the Markov chain literature, such as the Markov chain tree theorem, Kemeny's constant or the notion of hitting time, can be extended and generalized to a broader class of non-stochastic matrices by introducing appropriate graph-theoretic parameters. In particular, when P is reversible, the results of this paper reduce to the corresponding results for P.
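The classical theorem that this paper generalizes can be checked numerically in its determinant form: π_i is proportional to the (i,i) cofactor of I − P, which by the matrix-tree theorem equals the total weight of spanning trees rooted at state i. The 3-state chain below is an arbitrary illustrative example, not one from the paper.

```python
import numpy as np

# Arbitrary irreducible 3-state transition matrix (rows sum to 1)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])
n = P.shape[0]
L = np.eye(n) - P

# (i,i) cofactor of I - P: determinant of L with row i and column i deleted;
# by the Markov chain tree theorem this is proportional to pi_i.
cof = np.array([np.linalg.det(np.delete(np.delete(L, i, 0), i, 1))
                for i in range(n)])
pi_tree = cof / cof.sum()

# Reference: stationary distribution as the left Perron eigenvector of P
w, V = np.linalg.eig(P.T)
pi_eig = np.real(V[:, np.argmax(np.real(w))])
pi_eig = pi_eig / pi_eig.sum()

print("pi via cofactors:  ", pi_tree)
print("pi via eigenvector:", pi_eig)
```

The two computations agree, and π_tree also satisfies the stationarity equation πP = π; the paper's contribution is to carry this tree-theorem structure over to the non-stochastic matrices M2.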
Fall 2021: MFE5110 Stochastic Models, instructor: Prof. CHEN Nan (CUHK)
Spring 2021: MFE5150 Financial Data Analysis, instructor: Prof. LI Lingfei (CUHK)
Fall 2020, 2021: STA4020 Statistical Modeling in Financial Market, instructor: Dr. John Wright (CUHK)
Summer 2020: MAT3007 Optimization 1, instructor: Prof. Andre Milzarek
Fall 2019: DDA6010 Optimization Theory (PhD Course), instructor: Prof. Stark Draper (University of Toronto)
Fall 2019: STA4001 Stochastic Process, instructor: Prof. Jim Dai (CUHK-SZ & Cornell)
Summer 2019: RMS4060 Risk Management with Derivatives, instructor: Prof. HU Sang
Spring 2019, 2020, 2021: STA2001 Probability and Statistics 1, instructor: Prof. CHEN Tianshi
Spring 2019, 2020, 2021: DDA6001 Stochastic Process (PhD Course), instructor: Prof. Masakiyo Miyazawa (CUHK-SZ & Tokyo University of Science)
Not yet.