A recent development in Monte Carlo methods, the Latin Hypercube Sampling (LHS) method, significantly reduces the number of simulations needed to converge. This paper presents the results of numerical experiments, conducted on a microcomputer, that examined this convergence problem using a cash flow model.
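To illustrate the stratification idea behind LHS, the sketch below is a minimal illustration of our own (not code from the paper), written in Python/NumPy with hypothetical variable names: each input variable's [0, 1) range is divided into N equal-probability strata, one point is drawn from each stratum, and the strata are randomly paired across variables so every variable's full range is covered with only N samples.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Draw an (n_samples x n_vars) Latin Hypercube sample on [0, 1)."""
    rng = np.random.default_rng(rng)
    # One uniform draw per stratum: stratum i covers [i/N, (i+1)/N).
    strata = (np.arange(n_samples)[:, None]
              + rng.random((n_samples, n_vars))) / n_samples
    # Shuffle each variable's strata independently so the pairing
    # of strata across variables is random.
    for j in range(n_vars):
        rng.shuffle(strata[:, j])
    return strata

if __name__ == "__main__":
    # Hypothetical cash flow inputs: uniforms are mapped to the
    # desired distributions by inverse CDF.
    u = latin_hypercube(n_samples=100, n_vars=2, rng=42)
    price = 20.0 + 5.0 * u[:, 0]          # uniform on [20, 25)
    cost = -np.log(1.0 - u[:, 1]) / 0.1   # exponential, mean 10
    print(price.mean(), cost.mean())
```

Mapping the uniform strata through each input's inverse CDF, as in the example, is the standard way to apply LHS to arbitrary input distributions.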

It was found that the computing speed of the LHS method is strongly affected by the design of the random sampling procedure. There appears to be an optimum sampling approach that minimizes run time; this minimum is a small fraction of the time required for conventional Monte Carlo simulation. We believe this reduced run time allows fairly complex cash flow problems to be simulated on a microcomputer, and also lowers the cost of runs on mainframes.
