Simulation optimization, also known as simulation-based optimization or optimization via simulation, studies the optimization of complex objective functions represented by a computer simulation model. This is my primary focus area, and my ultimate objective is to develop fast and rigorous algorithms and software for large-scale problems with full-featured simulation models. Our paper "Simulation optimization in the era of Industrial 4.0 and Industrial Internet" was awarded the Operational Research Society's Tocher Medal for the best paper published in the Journal of Simulation in 2015 and 2016.


Efficient Stochastic Search Algorithms for Simulation Optimization


Industrial Strength COMPASS (ISC), our open-source Discrete Optimization via Simulation (DOvS) solver written in C++, is a locally convergent DOvS algorithm with a "clean-up" procedure that provides statistically valid estimates of the returned solutions. ISC combines competitive finite-time performance with a convergence guarantee and statistical inference at termination. Numerical experiments suggest that ISC's performance is comparable to that of a top commercial product that offers no convergence guarantee.
>>Software website
>>Download Xu, J., Nelson, B.L., Hong, L. J. 2010. “Industrial Strength COMPASS: A Comprehensive Algorithm and Software for Optimization via Simulation”, ACM Transactions on Modeling and Computer Simulation, 20(1).

The Adaptive Hyperbox Algorithm (AHA) is designed for fast local convergence even in high-dimensional DOvS problems, scaling to problems with dimensions of 50-100. The key to the algorithm's efficiency is a new most-promising-area geometry that narrows down the search space exponentially fast. AHA is also available in the ISC software.
>>Download Xu, J., Nelson, B.L., and Hong, L.J. 2013. An Adaptive Hyperbox Algorithm for Discrete Optimization via Simulation. INFORMS Journal on Computing, 25:133-146, and its appendix
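To make the hyperbox idea concrete, here is a minimal C++ sketch, not the released ISC/AHA code: it places each face of the box at the closest previously visited solution along the corresponding coordinate and then samples a candidate uniformly from inside the box. All names, bounds, and the example data are illustrative assumptions.

// Illustrative sketch of an AHA-style hyperbox most promising area
// (not the released ISC implementation).
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

struct Hyperbox {
    std::vector<int> lower, upper;  // per-coordinate bounds (inclusive)
};

// Build the hyperbox around the current sample-best solution: each face sits
// at the closest visited coordinate value on that side of the best.
Hyperbox buildHyperbox(const std::vector<int>& best,
                       const std::vector<std::vector<int>>& visited,
                       int globalLow, int globalHigh) {
    const size_t d = best.size();
    Hyperbox box{std::vector<int>(d, globalLow), std::vector<int>(d, globalHigh)};
    for (const auto& x : visited) {
        for (size_t k = 0; k < d; ++k) {
            if (x[k] < best[k]) box.lower[k] = std::max(box.lower[k], x[k]);
            if (x[k] > best[k]) box.upper[k] = std::min(box.upper[k], x[k]);
        }
    }
    return box;
}

// Sample one candidate solution uniformly from the hyperbox.
std::vector<int> sampleFromHyperbox(const Hyperbox& box, std::mt19937& rng) {
    std::vector<int> x(box.lower.size());
    for (size_t k = 0; k < x.size(); ++k) {
        std::uniform_int_distribution<int> u(box.lower[k], box.upper[k]);
        x[k] = u(rng);
    }
    return x;
}

int main() {
    std::mt19937 rng(2024);
    std::vector<std::vector<int>> visited = {{2, 7}, {8, 3}, {6, 6}};  // assumed history
    std::vector<int> best = {5, 5};
    Hyperbox box = buildHyperbox(best, visited, 0, 10);
    std::vector<int> candidate = sampleFromHyperbox(box, rng);
    std::printf("candidate: (%d, %d)\n", candidate[0], candidate[1]);
    return 0;
}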

COMPASS with coordinate sampling is a modified version of the original COMPASS algorithm. COMPASS slows down considerably when the dimension exceeds 10, and the new sampling strategy leads to significant speed-ups for higher-dimensional problems.
>>Download Hong, L.J., Nelson, B.L., Xu, J. 2011. “Speeding Up COMPASS for High-Dimensional Discrete Optimization via Simulation”. Operations Research Letters, 38(6) 550-555.
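A minimal sketch of the coordinate sampling idea, assuming a simple box-constrained feasible region: pick one coordinate of the current sample-best solution at random and resample only that coordinate. The actual algorithm restricts the move to the current most promising area, which is omitted here for brevity; all names and data are assumptions.

// Illustrative sketch of coordinate (line) sampling from the current best.
#include <cstdio>
#include <random>
#include <vector>

std::vector<int> coordinateSample(const std::vector<int>& best,
                                  const std::vector<int>& lower,
                                  const std::vector<int>& upper,
                                  std::mt19937& rng) {
    std::vector<int> candidate = best;
    std::uniform_int_distribution<size_t> pick(0, best.size() - 1);
    const size_t k = pick(rng);                        // random search direction
    std::uniform_int_distribution<int> u(lower[k], upper[k]);
    candidate[k] = u(rng);                             // move only along coordinate k
    return candidate;
}

int main() {
    std::mt19937 rng(7);
    std::vector<int> best = {4, 9, 2}, lower = {0, 0, 0}, upper = {10, 10, 10};
    std::vector<int> candidate = coordinateSample(best, lower, upper, rng);
    std::printf("candidate: (%d, %d, %d)\n", candidate[0], candidate[1], candidate[2]);
    return 0;
}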

We propose to use a global metamodeling technique known as stochastic kriging to improve the efficiency of Discrete Optimization via Simulation (DOvS) algorithms. The stochastic kriging metamodel allows a DOvS algorithm to utilize all the information collected during the optimization process and to identify solutions that are most likely to lead to significant improvements in solution quality.
>>Download Xu, J. 2012. Efficient Discrete Optimization via Simulation Using Stochastic Kriging. In Proc. of 2012 Winter Simulation Conference, pp. 466-477.
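The sketch below illustrates the flavor of a stochastic kriging predictor in a one-dimensional setting: a Gaussian spatial covariance plus a diagonal term for simulation noise is used to predict the mean response at an unvisited solution. Hyperparameters (tau2, theta, beta0) are fixed by assumption rather than estimated, so this is only a schematic of how a metamodel can score candidate solutions, not the paper's procedure.

// Illustrative stochastic-kriging-style predictor (assumed data and hyperparameters).
#include <cmath>
#include <iostream>
#include <utility>
#include <vector>

using Mat = std::vector<std::vector<double>>;
using Vec = std::vector<double>;

// Solve A z = b by Gaussian elimination with partial pivoting (A is small).
Vec solve(Mat A, Vec b) {
    const size_t n = b.size();
    for (size_t i = 0; i < n; ++i) {
        size_t p = i;
        for (size_t r = i + 1; r < n; ++r)
            if (std::fabs(A[r][i]) > std::fabs(A[p][i])) p = r;
        std::swap(A[i], A[p]); std::swap(b[i], b[p]);
        for (size_t r = i + 1; r < n; ++r) {
            double f = A[r][i] / A[i][i];
            for (size_t c = i; c < n; ++c) A[r][c] -= f * A[i][c];
            b[r] -= f * b[i];
        }
    }
    Vec z(n);
    for (size_t i = n; i-- > 0;) {
        double s = b[i];
        for (size_t c = i + 1; c < n; ++c) s -= A[i][c] * z[c];
        z[i] = s / A[i][i];
    }
    return z;
}

int main() {
    // Visited solutions (1-D for illustration), their sample means, and the
    // estimated variance of each sample mean (intrinsic noise / replications).
    Vec x = {1, 3, 5, 8}, ybar = {4.2, 2.9, 3.1, 6.0}, noiseVar = {0.3, 0.2, 0.25, 0.4};
    const double tau2 = 2.0, theta = 0.15, beta0 = 4.0;  // assumed hyperparameters

    const size_t n = x.size();
    Mat K(n, Vec(n));
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < n; ++j)
            K[i][j] = tau2 * std::exp(-theta * (x[i] - x[j]) * (x[i] - x[j]))
                      + (i == j ? noiseVar[i] : 0.0);

    Vec resid(n);
    for (size_t i = 0; i < n; ++i) resid[i] = ybar[i] - beta0;
    Vec w = solve(K, resid);                    // K^{-1} (ybar - beta0 * 1)

    // Predict the mean response at an unvisited solution x0.
    const double x0 = 6.0;
    double pred = beta0;
    for (size_t i = 0; i < n; ++i)
        pred += tau2 * std::exp(-theta * (x0 - x[i]) * (x0 - x[i])) * w[i];
    std::cout << "predicted mean at x0 = 6: " << pred << "\n";
    return 0;
}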

We propose a new PSO variant for noisy optimization problems that works with a set of statistically valid global-best locations. The new variant works seamlessly with adaptive resampling methods to achieve further performance improvements.
>>Download Taghiyeh, S. and J. Xu. 2016. A new particle swarm optimization algorithm for noisy optimization. Swarm Intelligence, 10(3) 161-192.
>>Download C++ codes for the new PSO algorithm
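As a schematic of the set-based global best idea (this is not the downloadable implementation above, and the paper's rule for forming the set is not reproduced), the velocity update below draws its global-best attractor at random from a set of statistically indistinguishable candidates; the PSO constants and all data are assumed values.

// Illustrative PSO update with a set of "global best" candidates.
#include <cstdio>
#include <random>
#include <vector>

struct Particle {
    std::vector<double> x, v, pbest;  // position, velocity, personal best
};

void updateParticle(Particle& p,
                    const std::vector<std::vector<double>>& gbestSet,
                    std::mt19937& rng) {
    const double w = 0.7, c1 = 1.5, c2 = 1.5;           // assumed PSO constants
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::uniform_int_distribution<size_t> pick(0, gbestSet.size() - 1);
    const std::vector<double>& g = gbestSet[pick(rng)]; // one candidate from the set
    for (size_t d = 0; d < p.x.size(); ++d) {
        p.v[d] = w * p.v[d]
               + c1 * u(rng) * (p.pbest[d] - p.x[d])
               + c2 * u(rng) * (g[d] - p.x[d]);
        p.x[d] += p.v[d];
    }
}

int main() {
    std::mt19937 rng(42);
    Particle p{{0.0, 0.0}, {0.1, -0.1}, {0.5, 0.5}};
    std::vector<std::vector<double>> gbestSet = {{1.0, 1.0}, {0.9, 1.1}};  // statistically tied candidates
    updateParticle(p, gbestSet, rng);
    std::printf("new position: (%.3f, %.3f)\n", p.x[0], p.x[1]);
    return 0;
}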


Optimal Allocation of Simulation Budget for Simulation Optimization

We propose to improve the efficiency of Ranking & Selection by incorporating information from across the domain into a regression equation, partitioning the domain of interest so that within each partition the mean of the underlying function is approximately quadratic. Our new method provides approximately optimal rules, both between and within partitions, for determining the number of samples allocated to each design location. The goal is to maximize the probability of correctly selecting the best design.
>>Download Brantley, M.W., Lee, L.-H., Chen, C.-H., and Xu, J. 2014. An Efficient Simulation Budget Allocation Method Incorporating Regression for Partitioned Domains. Automatica, 50: 1391-1400.
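The sketch below illustrates only the regression ingredient: fitting a quadratic to the sample means observed within one partition and reading off where the best design appears to lie. The paper's actual contribution, the approximately optimal budget allocation rules, is not reproduced here, and all data are assumed.

// Illustrative quadratic regression within one partition (assumed data).
#include <iostream>
#include <vector>

// 3x3 determinant helper.
double det3(double a[3][3]) {
    return a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
         - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
         + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]);
}

int main() {
    // Design locations in one partition and their simulated sample means.
    std::vector<double> x = {1, 2, 3, 4, 5};
    std::vector<double> ybar = {7.9, 6.2, 5.4, 5.9, 7.3};

    // Build the normal equations for y ~ b0 + b1*x + b2*x^2.
    double S[5] = {0, 0, 0, 0, 0}, T[3] = {0, 0, 0};
    for (size_t i = 0; i < x.size(); ++i) {
        double p = 1.0;
        for (int k = 0; k < 5; ++k) { S[k] += p; if (k < 3) T[k] += p * ybar[i]; p *= x[i]; }
    }
    double A[3][3] = {{S[0], S[1], S[2]}, {S[1], S[2], S[3]}, {S[2], S[3], S[4]}};
    double b[3], D = det3(A);
    for (int c = 0; c < 3; ++c) {         // Cramer's rule on the 3x3 system
        double Ac[3][3];
        for (int r = 0; r < 3; ++r)
            for (int k = 0; k < 3; ++k) Ac[r][k] = (k == c ? T[r] : A[r][k]);
        b[c] = det3(Ac) / D;
    }

    // With a convex fit (b2 > 0), the predicted minimizer is -b1 / (2*b2).
    std::cout << "fit: " << b[0] << " + " << b[1] << "*x + " << b[2] << "*x^2\n";
    if (b[2] > 0) std::cout << "predicted best location near x = " << -b[1] / (2 * b[2]) << "\n";
    return 0;
}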

Random search is a core component of many well-known simulation optimization algorithms such as nested partitions and COMPASS. Given a fixed computation budget, a critical decision is how many solutions to sample from a search area; assuming each sampled solution receives the same number of simulation replications, this choice directly determines the replications available per solution. We propose a method to (approximately) optimally determine the size of the sampling set and the number of simulation replications, and we use numerical experiments to demonstrate its performance.
>>Download Zhu, C., J. Xu, C.-H. Chen, L.-H. Lee, J.-Q. Hu. 2016. Balancing search and estimation in random search based stochastic simulation optimization. Published online, IEEE Transactions on Automatic Control, 6 pages.
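The trade-off can be seen with a little arithmetic. In the sketch below (assumed numbers, not the paper's allocation rule), a fixed budget B split over n sampled solutions leaves r = B/n replications each, so broader search comes at the cost of a larger standard error for every solution's estimated mean.

// Numeric illustration of the search-vs-estimation trade-off (assumed numbers).
#include <cmath>
#include <cstdio>

int main() {
    const int B = 1000;          // total simulation budget (replications)
    const double sigma = 2.0;    // assumed replication standard deviation
    const int choices[] = {10, 25, 50, 100, 250};

    std::printf("%10s %15s %22s\n", "solutions", "reps/solution", "std err of each mean");
    for (int n : choices) {
        int r = B / n;
        std::printf("%10d %15d %22.3f\n", n, r, sigma / std::sqrt((double)r));
    }
    return 0;
}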

Correctly selecting an elite subset from a population of solutions is important to the efficiency of population-based stochastic search algorithms in simulation optimization. We propose an extension of the well-known OCBA procedure that optimally allocates the simulation budget to select the set of top-m solutions.
>>Download Zhang, S., L.-H. Lee, E.-P. Chew, J. Xu, C.-H. Chen. 2016. A simulation budget allocation procedure for enhancing the efficiency of optimal subset selection. IEEE Transactions on Automatic Control 61(1) 62-75.
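For context, the sketch below implements the classical OCBA allocation for selecting a single best design, which is the baseline the paper extends to top-m subset selection; the top-m allocation rules themselves are not reproduced, and the sample statistics are assumed numbers.

// Illustrative classical OCBA allocation (smaller mean is treated as better).
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> mean = {1.2, 2.5, 2.9, 3.4, 4.0};  // current sample means
    std::vector<double> sd   = {1.0, 1.0, 1.5, 2.0, 2.5};  // current sample std devs
    const double budget = 500.0;                           // replications to allocate
    const size_t k = mean.size();

    // Identify the current best (smallest sample mean).
    size_t b = 0;
    for (size_t i = 1; i < k; ++i) if (mean[i] < mean[b]) b = i;

    // OCBA ratios: for i != b, w_i is proportional to (sd_i / delta_i)^2 with
    // delta_i = mean_i - mean_b; the best gets w_b = sd_b * sqrt(sum (w_i / sd_i)^2).
    std::vector<double> w(k, 0.0);
    for (size_t i = 0; i < k; ++i) {
        if (i == b) continue;
        double delta = mean[i] - mean[b];
        w[i] = (sd[i] * sd[i]) / (delta * delta);
    }
    double s = 0.0;
    for (size_t i = 0; i < k; ++i) if (i != b) s += (w[i] / sd[i]) * (w[i] / sd[i]);
    w[b] = sd[b] * std::sqrt(s);

    // Normalize the ratios to the available budget.
    double total = 0.0;
    for (double wi : w) total += wi;
    for (size_t i = 0; i < k; ++i)
        std::printf("design %zu: %.1f replications\n", i, budget * w[i] / total);
    return 0;
}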

We propose a new optimal sampling allocation policy for PSO that explicitly considers the impact of stochastic noise on selecting the global best position as PSO iterates. This new sampling allocation policy was shown to be much more efficient than applying the original OCBA policy, which does not consider the search mechanism of PSO.
>>Download Zhang, S., J. Xu, L.-H. Lee, E.-P. Chew, W.-P. Wong, and C.-H. Chen. 2017. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization. IEEE Transactions on Evolutionary Computation, 21(2) 206-219.


Multi-fidelity Simulation Optimization

When high-fidelity simulation is extremely time-consuming, low-fidelity simulation models may provide significant computational savings. The basic strategy is to use low-fidelity simulations to broadly explore at a reasonable cost and then focus high-fidelity simulations on promising solutions, but in a balanced way to avoid being misled by unknown bias in low-fidelity models.
>>Download Xu, J., S. Zhang, E. Huang, C.-H. Chen, L.-H. Lee, N. Celik. 2016. MOTOS: Multi-fidelity optimization via ordinal transformation and optimal sampling. Asia-Pacific Journal of Operational Research, 33 (3), 26 pages.
>>Download Xu, J., E. Huang, L. Hsieh, L.-H. Lee, C.-H. Chen. 2016. Simulation optimization in the era of Industrial 4.0 and Industrial Internet. Journal of Simulation, 10(4) 310-320. Featured article.
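The sketch below shows the basic low-then-high-fidelity workflow under assumed toy models: every candidate is screened with a cheap, biased low-fidelity function, and the limited high-fidelity budget is spent only on the top of that ranking. It illustrates the strategy only; the ordinal transformation and optimal sampling steps of the papers above are not reproduced.

// Illustrative two-stage multi-fidelity workflow (assumed toy models and budgets).
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

// Hypothetical models: lowFidelity is cheap but biased, highFidelity is costly.
double lowFidelity(int x)  { return (x - 30) * (x - 30) + 25.0; }   // biased stand-in
double highFidelity(int x) { return (x - 28) * (x - 28); }          // "true" model

int main() {
    const int numCandidates = 100;
    const int highFidelityBudget = 10;   // only 10 expensive evaluations allowed

    // Step 1: screen all candidates with the low-fidelity model and rank them.
    std::vector<int> candidates(numCandidates);
    std::iota(candidates.begin(), candidates.end(), 0);
    std::sort(candidates.begin(), candidates.end(),
              [](int a, int b) { return lowFidelity(a) < lowFidelity(b); });

    // Step 2: spend the high-fidelity budget on the most promising candidates.
    int best = candidates[0];
    double bestVal = highFidelity(best);
    for (int i = 1; i < highFidelityBudget; ++i) {
        double v = highFidelity(candidates[i]);
        if (v < bestVal) { bestVal = v; best = candidates[i]; }
    }
    std::printf("selected x = %d with high-fidelity value %.1f\n", best, bestVal);
    return 0;
}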


Parallel Simulation Optimization

Real-life simulation optimization applications often deal with large-scale simulation models that are time-consuming to execute. Parallel computing environments, such as high-performance computing clusters and cloud computing services, provide the computing power needed to scale to such applications. In this paper, we show how the Empirical Stochastic Branch and Bound algorithm, an effective globally convergent random search algorithm for discrete optimization via simulation, can be adapted to a high-performance computing environment to effectively utilize the power of parallelism. We propose a master-worker structure driven by MITRE's Goal-Directed Grid-Enabled Simulation Experimentation Environment. Numerical experiments with the popular Ackley benchmark test function and a real-world simulation called runwaySimulator demonstrate how many cores are needed to achieve good scaled efficiency of parallel Empirical Stochastic Branch and Bound as simulation run times increase.
>>Download Rosen, S., P. Salemi, B. Wickam, A. Williams, C. Harvey, E. Catlett, S. Taghiyeh, and J. Xu. Parallel Empirical Stochastic Branch and Bound for Large-scale Discrete Optimization via Simulation. Proceedings of 2016 Winter Simulation Conference, 626-637.
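The master-worker pattern can be sketched in a few lines of standard C++ (illustrative only; the paper's experiments run on MITRE's grid environment rather than std::thread): the master generates candidate solutions, workers pull jobs from a shared queue and run a stand-in simulation in parallel, and the results are collected for the next search iteration.

// Minimal master-worker sketch with a shared job index and a stand-in simulation.
#include <cmath>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

// Stand-in for an expensive simulation replication.
double simulate(double x) { return (x - 3.0) * (x - 3.0) + std::sin(5.0 * x); }

int main() {
    std::vector<double> candidates;
    for (int i = 0; i < 64; ++i) candidates.push_back(i * 0.1);  // work generated by the master

    std::vector<double> results(candidates.size());
    std::size_t next = 0;
    std::mutex mtx;

    auto worker = [&]() {
        for (;;) {
            std::size_t job;
            {
                std::lock_guard<std::mutex> lock(mtx);
                if (next >= candidates.size()) return;  // no work left
                job = next++;
            }
            results[job] = simulate(candidates[job]);   // run outside the lock
        }
    };

    const unsigned numWorkers = 4;
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < numWorkers; ++i) pool.emplace_back(worker);
    for (auto& t : pool) t.join();

    std::size_t best = 0;
    for (std::size_t i = 1; i < results.size(); ++i)
        if (results[i] < results[best]) best = i;
    std::printf("best candidate %.1f with value %.3f\n", candidates[best], results[best]);
    return 0;
}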


Distributionally Robust Simulation Optimization

In typical simulation optimization problems, probability distributions for the random inputs are fit to historical data and then optimization is carried out as if the estimated probability distributions were the "truth". However, this perspective is optimistic in nature and can frequently lead to sub-optimal or infeasible results, because the distributions can be misspecified and the historical data set may be contaminated. In this paper, we propose to integrate existing approaches to decision making under uncertainty with robust and efficient estimation procedures based on the Hellinger distance. Within the existing decision-making methodologies that make use of parametric models, our approach offers robustness against model misspecification and data contamination.
>>Download Vidyashankar, A. N., J. Xu. 2015. Stochastic optimization using Hellinger distance. Proceedings of 2015 Winter Simulation Conference, 3702-3713.
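To illustrate the estimation ingredient, the sketch below computes the Hellinger distance between an empirical histogram and a simple parametric model and grid-searches for the minimum-distance parameter; the model, data, and grid are assumptions, and the paper's integration with the optimization step is not reproduced.

// Illustrative minimum Hellinger distance fit to a (hypothetical) empirical histogram.
#include <cmath>
#include <cstdio>
#include <vector>

// Hellinger distance between discrete distributions p and q on the same support.
double hellinger(const std::vector<double>& p, const std::vector<double>& q) {
    double s = 0.0;
    for (size_t i = 0; i < p.size(); ++i) {
        double d = std::sqrt(p[i]) - std::sqrt(q[i]);
        s += d * d;
    }
    return std::sqrt(0.5 * s);
}

// Geometric-like parametric model on {0,...,n-1}: q_i proportional to theta^i.
std::vector<double> model(double theta, size_t n) {
    std::vector<double> q(n);
    double norm = 0.0;
    for (size_t i = 0; i < n; ++i) { q[i] = std::pow(theta, (double)i); norm += q[i]; }
    for (double& v : q) v /= norm;
    return q;
}

int main() {
    // Empirical frequencies from (hypothetical) historical data, with the last
    // bin inflated by a few contaminated observations.
    std::vector<double> phat = {0.48, 0.25, 0.12, 0.06, 0.03, 0.06};

    double bestTheta = 0.0, bestDist = 1e9;
    for (double theta = 0.05; theta <= 0.95; theta += 0.01) {
        double d = hellinger(phat, model(theta, phat.size()));
        if (d < bestDist) { bestDist = d; bestTheta = theta; }
    }
    std::printf("minimum Hellinger distance fit: theta = %.2f (distance %.3f)\n",
                bestTheta, bestDist);
    return 0;
}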