Edger Sterjo

Biography

Edger Sterjo is a mathematician working in the financial industry as a “quant” and data scientist. He is a pure mathematician at heart, but with a deep appreciation for mathematics that applies to real-world problems. His current mathematical interests include dynamic programming, non-parametric Bayesian models, and (on weekends) mathematical physics.

Interests

  • Mathematical Physics
  • Dynamic Programming and Reinforcement Learning
  • Statistical Modeling

Education

  • PhD in Mathematics, 2018

    The Graduate Center, CUNY

  • MPhil in Mathematics, 2015

    The Graduate Center, CUNY

  • MA/BA in Mathematics, 2011

    The City College of New York, CUNY

Recent Posts

Parallel Monte Carlo: Simulating Compound Poisson Processes using C++ and TBB

Introduction In this post we implement a function to simulate random samples of a compound Poisson variable. A random variable \(L\) is a compound Poisson (CP) random variable if there exist a Poisson random variable \(N\) and a random variable \(S\) such that \(L\) has the same distribution as \(S_1 + S_2 + \cdots + S_N\), where the \(S_i\) are independent copies of \(S\), independent of \(N\).
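
As a rough illustration of the simulation the post builds up to, here is a minimal serial sketch of drawing a single compound Poisson sample with the C++ standard library. The exponential severity distribution, the parameter names, and the function name are assumptions made here for illustration; the post itself parallelizes many such draws with TBB.

```cpp
// Serial sketch only (the post's version parallelizes this with TBB).
// Draws one sample L = S_1 + ... + S_N with N ~ Poisson(lambda) and
// i.i.d. severities S_i; the exponential severity is an illustrative choice.
#include <random>

double sample_compound_poisson(double lambda, double severity_rate,
                               std::mt19937_64& rng) {
    std::poisson_distribution<int> count_dist(lambda);
    std::exponential_distribution<double> severity_dist(severity_rate);

    const int n = count_dist(rng);    // N ~ Poisson(lambda)
    double total = 0.0;
    for (int i = 0; i < n; ++i) {
        total += severity_dist(rng);  // add one severity draw S_i
    }
    return total;                     // L = S_1 + ... + S_N
}
```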

Data and their misbehavior

To be honest, I use the clickbaity word “data” in the title when I really mean “sample statistics”. The point of this post is first illustrated with a sample mean, but it applies to any estimate computed from data.

Expectation Maximization, Part 2: Fitting Regularized Probit Regression using EM in C++

Introduction In the first post in this series we discussed Expectation Maximization (EM) type algorithms. In the post prior to this one we discussed regularization and showed how it leads to a bias-variance trade-off in OLS models.
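
For context, the standard probit model takes \(P(y_i = 1 \mid x_i) = \Phi(x_i^\top \beta)\), with \(\Phi\) the standard normal CDF, and a ridge-type regularized fit maximizes a penalized log-likelihood such as the one below. The \(\ell_2\) penalty shown here is an illustrative choice; the penalty actually used in the post may differ.

\[
\ell_\lambda(\beta) \;=\; \sum_{i=1}^{n} \Big[\, y_i \log \Phi(x_i^\top \beta) + (1 - y_i) \log\big(1 - \Phi(x_i^\top \beta)\big) \Big] \;-\; \frac{\lambda}{2} \lVert \beta \rVert_2^2 .
\]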

In Machine Learning, why is Regularization called Regularization?

Introduction Many newcomers to machine learning know about regularization, but they may not yet fully understand it. In particular, they may not know why regularization has that name. In this post we discuss the numerical and statistical significance of regularization methods in machine learning and in more general statistical models.
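
As one canonical instance of a regularized estimator, ridge regression adds a penalty term to the least-squares objective; this particular example is chosen here for concreteness and is not necessarily the one emphasized in the post.

\[
\hat{\beta}_\lambda \;=\; \arg\min_{\beta} \; \lVert y - X\beta \rVert_2^2 \;+\; \lambda \lVert \beta \rVert_2^2, \qquad \lambda > 0 .
\]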

Expectation Maximization, Part 1: Motivation and Recipe

Introduction This is the first in a series of posts on Expectation Maximization (EM) type algorithms. Our goal will be to motivate some of the theory behind these algorithms. In later posts we will implement examples in C++, often with the help of the Eigen linear algebra library.
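
For readers who want the recipe up front, the textbook statement of an EM iteration alternates an expectation step and a maximization step, as below; the notation here is generic and not necessarily the post's.

\[
\text{E-step:}\quad Q\big(\theta \mid \theta^{(t)}\big) = \mathbb{E}_{Z \mid X,\, \theta^{(t)}}\big[\log p(X, Z \mid \theta)\big],
\qquad
\text{M-step:}\quad \theta^{(t+1)} = \arg\max_{\theta}\, Q\big(\theta \mid \theta^{(t)}\big).
\]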