Two MS BAIM Student Teams Place in INFORMS Poster Presentation

Thursday, April 15, 2021


Not one, but two Purdue Krannert teams placed in a prestigious student poster competition as part of the INFORMS Business Analytics Conference! With topics ranging from improving demand forecasts to predictive solutions for construction project payment latency, both teams truly showed the diverse use cases of business analytics.

Each team's poster abstract is listed below:


First Place
Understanding and Predicting Project Payment Latency
Team: Theo Ginting, Erika Ergart, Rassul Yeshpayev, and Sheng Yang Chou

This study develops an order-to-cash process map and predictive solution to better understand construction project payment latency pre- and post-COVID-19. In construction projects, the window from the day a project is sold (and past rescission) to the time payment is received or the first installment is accepted is defined as “order-to-cash.” This window often contains many sequential or overlapping tasks that must be completed before the company receives its payment. The motivation for our study is that while order-to-cash was often challenging to predict and minimize prior to COVID-19, it has become even more difficult for businesses to estimate since the pandemic. Delayed processes and delayed customer payments can hurt a company’s solvency and financial stability. In collaboration with a national construction company, we develop an order-to-cash process map and redesign their predictive modeling approach to show where the most uncertainty originates and to provide empirically based operational recommendations for reducing order-to-cash both before and during the pandemic. Our solution improved predictive accuracy during all time periods in our study. We believe practitioners and scholars alike who focus on pre- and post-pandemic forecasting, particularly for accounts receivable or queuing-based problems, will find our work valuable.
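As a minimal illustration of the metric the abstract defines (using hypothetical dates, not the company's actual data), the order-to-cash window can be computed as the number of days between the sale date and the first payment:

```python
from datetime import date

# Hypothetical example of the order-to-cash window described above:
# days from when a project is sold (past rescission) to first payment.
sold = date(2020, 3, 2)
first_payment = date(2020, 4, 20)

order_to_cash_days = (first_payment - sold).days  # 49 days
```

In practice this latency would be computed per project and used as the target variable for the team's predictive models.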

Third Place
Feature Engineering for Sparse Demand Prediction
Team: Hsiao Yu Hsu, Robyn Campbell, Stefanie Walsh, and Zinnia Arshad

This study provides feature engineering recommendations for predictive modelers, data scientists, and analytics practitioners on how to improve demand forecasts for sparsely demanded specialized products, based on collaborative experiments with a national auto parts retailer. Any seasoned modeler knows that predictive modeling is a process, and there are many possible ways to clean, pre-process, and format data prior to training a model. Additional complexity often arises from problem characteristics (e.g., temporal response, intermittent demand, sparse demand) that make separating the signal from the noise even more challenging. In the field there is often discussion of “the art with the science” regarding best practices for achieving model accuracy good enough to support major business decisions. While there are many suggested methodologies for particular problem types, as well as general feature engineering ideas, no large-scale study to date provides an in-depth empirical investigation of feature engineering approaches and their associated predictive gains when predicting sparse demand, which is one of the most challenging classes of prediction problems one can encounter in practice. In collaboration with a large national auto parts retailer, we develop models to predict demand for 47k+ products, 26k of which sold fewer than five units in a year. Problems such as these are common in medicine, specialty products, and automotive and military spares. What is novel about our study is that we run thousands of feature engineering experiments to identify where we see cross-validated predictive gains for a set of common predictive modeling algorithms.
For example, we evaluate various categorical encoding schemes (one-hot, frequency, label, hash, and target encodings), various scaling/transformation techniques, outlier handling for numeric data types, and variable fusion strategies such as interactions, powers, and ratios. This work is unique in that much of the literature focuses on predicting product demand in larger quantities (non-sparse demand), on supervised learning methods, or on general feature engineering ideas. We show how to implement a similar large-scale feature engineering study, provide empirical insights into where we achieved noticeable gains, and explain why what we learned from our data could likely work for your sparse demand problem.
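A few of the techniques the abstract names can be sketched in a short pandas example. This is a hypothetical illustration with made-up data, not the team's actual pipeline; it shows one-hot encoding, frequency encoding, and two variable-fusion features (an interaction and a ratio):

```python
import pandas as pd

# Hypothetical sparse-demand data; not the competition team's dataset.
df = pd.DataFrame({
    "category": ["brake", "brake", "filter", "wiper"],
    "price": [25.0, 30.0, 8.0, 12.0],
    "units_sold": [1, 0, 4, 2],
})

# One-hot encoding of a categorical feature.
one_hot = pd.get_dummies(df["category"], prefix="cat")

# Frequency encoding: replace each category with its relative frequency.
freq = df["category"].map(df["category"].value_counts(normalize=True))

# Variable fusion: an interaction and a ratio feature.
df["price_x_freq"] = df["price"] * freq
df["price_per_unit"] = df["price"] / (df["units_sold"] + 1)  # +1 avoids division by zero

features = pd.concat([df, one_hot], axis=1)
```

A large-scale study like the one described would generate many such candidate features programmatically and compare cross-validated accuracy with and without each one.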