Features & Comparison

Many inverse Laplace transform (ILT) procedures have been published in the scientific literature. These procedures evaluate the known Laplace transform expression at certain points in order to approximate the inverse function at a given point.

There is a class of methods, the Abate-Whitt framework, in which the Laplace transform is always evaluated at the same set of points, independent of the particular form of the transform. The Abate-Whitt methods, in particular the Euler and the Gaver methods, are very popular and widely used. Our new method, called CME, is far superior to them.

| CME method | Euler method | Gaver method |
| --- | --- | --- |
| Excellent numerical stability, even at machine precision | Acceptable numerical stability up to a given order | Poor numerical behavior, needs multi-precision arithmetic |
| Smooth approximation, free of over- and under-shooting | Oscillates around sharp changes of the function | Poor approximation around sharp edges |
| Gets more accurate when the order is increased | Oscillation gets worse when the order is increased | Converges very slowly as the order increases |

The CME method is the ideal choice when working with probability distributions. Thanks to its over- and under-shooting-free behavior, it always yields valid probabilities.

How does it work?

Assume we have a function \(h\). Its value at \(T\) can be expressed as $$ h(T) = \int_0^\infty \delta(T-t) h(t)\,dt,$$ where \(\delta(t)\) is the Dirac delta function. According to the Abate-Whitt framework the \(\delta(T-t)\) function is approximated by a weighted sum of exponentials consisting of \(n\) terms, that is $$\delta(T-t)\approx f_T^n(t)= \frac{1}{T}\sum_{k=1}^n \eta_k e^{-\frac{\beta_k}{T}t}.$$ With this approximation we have that \(h(T)\) can be expressed as the linear combination of its Laplace transform at certain points, thus $$ h(T) = \int_0^\infty \delta(T-t) h(t)\,dt \approx \int_0^\infty \frac{1}{T}\sum_{k=1}^n \eta_k e^{-\frac{\beta_k}{T}t} h(t)\,dt = \frac{1}{T}\sum_{k=1}^n \eta_k h^*\big(\frac{\beta_k}{T}\big),$$ where \(h^*(s)\) is the Laplace transform of \(h\).
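The final formula translates directly into code. Below is a minimal sketch in Python, where `eta` and `beta` stand for the method-specific (possibly complex) coefficients; the placeholder values in the sanity check are for illustration only and are not the actual Euler, Gaver, or CME coefficient sets.

```python
def abate_whitt_ilt(h_star, T, eta, beta):
    """Approximate h(T) as (1/T) * sum_k eta_k * h*(beta_k / T).

    eta, beta: method-specific coefficients (Euler, Gaver or CME);
    they may be complex, so only the real part of the sum is returned.
    """
    total = sum(e * h_star(b / T) for e, b in zip(eta, beta))
    return (total / T).real

# Sanity check with a single placeholder term (eta=[1], beta=[1]):
# for h(t) = 1 the transform is h*(s) = 1/s, and the formula is exact,
# since (1/T) * h*(1/T) = (1/T) * T = 1.
print(abate_whitt_ilt(lambda s: 1.0 / s, 2.0, [1.0], [1.0]))  # -> 1.0
```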

We can observe that \(f_T^n(t)\) is just a re-scaled version of \(f_1^n(t)=\sum_{k=1}^n \eta_k e^{-\beta_k t}\). The exact form of this function plays a fundamental role in the Laplace transform inversion, and this is the point where the different ILT methods differ: the Euler, the Gaver and the CME methods provide different \(\eta_k\) coefficients and \(\beta_k\) exponents. In the Euler and the Gaver methods the algorithm to obtain these constants is relatively simple, but the resulting \(f_1^n(t)\) functions can also take negative values, which is a source of over- or under-shooting when inverting the Laplace transform. In the case of the CME approach, determining the coefficients is a complex procedure that also involves a numerical optimization step. However, the resulting \(f_1^n(t)\) (which is the density function of a concentrated matrix-exponential distribution) is always non-negative and very steep, i.e., a good approximation of the Dirac delta function. Furthermore, the coefficients can be pre-computed and stored in a file, so the complexity of determining them neither slows down the ILT process nor complicates the implementation.
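To make the coefficient choice concrete, here is a sketch of one classical member of the framework, the Gaver method in its standard Stehfest-accelerated form, with \(\beta_k = k\ln 2\) and real weights given by the Stehfest formula. This is purely illustrative of how a coefficient set plugs into the framework; it is not the CME coefficient set.

```python
import math

def stehfest_weights(n):
    """Stehfest weights for the Gaver-Stehfest method (n must be even, n = 2M)."""
    M = n // 2
    weights = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, M) + 1):
            s += (j ** (M + 1) / math.factorial(M)) \
                 * math.comb(M, j) * math.comb(2 * j, j) * math.comb(j, k - j)
        weights.append((-1) ** (M + k) * s)
    return weights

def gaver_stehfest(F, t, n=12):
    """Invert the Laplace transform F at t: (ln2/t) * sum_k a_k * F(k*ln2/t)."""
    ln2 = math.log(2.0)
    a = stehfest_weights(n)
    return (ln2 / t) * sum(a[k - 1] * F(k * ln2 / t) for k in range(1, n + 1))

# Invert h*(s) = 1/(s+1); the exact inverse is h(t) = exp(-t).
approx = gaver_stehfest(lambda s: 1.0 / (s + 1.0), t=1.0)
print(approx, math.exp(-1.0))  # agree to several digits at n = 12
```

Note that in double precision the achievable accuracy of this scheme saturates at moderate orders, which is exactly the numerical-stability limitation of the Gaver method mentioned above.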

Contact & Cite

The results leading to the CME-based ILT method have been achieved incrementally.

In the following paper we published the conjecture that, within the class of matrix-exponential distributions, the cosine-squared form provides the minimal coefficient of variation. The parameters of these concentrated ME distributions were determined up to order 31:

[1] T. Éltető, S. Rácz, and M. Telek. Minimal coefficient of variation of matrix exponential distributions. 2nd Madrid Conference on Queueing Theory, Madrid, Spain, July 2006. (Download)

In the next paper, 10 years later, a numerical optimization-based procedure is presented to obtain the parameters of CME distributions up to order 95. With that many results available, it became possible to examine the tendencies. The paper showed (experimentally) that the squared coefficient of variation of CME distributions of order \(n\) stays below \(2/n^2\), while for Erlang distributions it is only \(1/n\). It became clear that very concentrated distributions can be created as a mixture of exponential functions with a relatively low number of terms:

[2] I. Horváth, O. Sáfár, M. Telek, and B. Zambó. Concentrated matrix exponential distributions. In proceedings of European Performance Evaluation Workshop, EPEW, Oct. 2016. (Download)

We recognized that these concentrated distributions can be used to develop an efficient inverse Laplace transform procedure. The corresponding publication is:

[3] Illés Horváth, Zsófia Talyigás, and Miklós Telek. An optimal inverse Laplace transform method without positive and negative overshoot – an integral based interpretation. Electronic Notes in Theoretical Computer Science, 337:87–104, 2018. Proceedings of the Ninth International Workshop on the Practical Application of Stochastic Modelling (PASM). (Download)

The next step was to find an efficient algorithm to determine the coefficients of CME distributions for any large order (above 95). This algorithm has been published in:

[4] Gábor Horváth, Illés Horváth, Miklós Telek. High order concentrated matrix-exponential distributions. Stochastic Models, in press, 2019. (DOI: https://doi.org/10.1080/15326349.2019.1702058)

Finally, in 2019 we put together all the pieces and published the entire procedure, together with an analysis of its properties, in

[5] Illés Horváth, Gábor Horváth, Salah Al-Deen Almousa, and Miklós Telek. Numerical inverse Laplace transformation using concentrated matrix exponential distributions. Performance Evaluation, in press, 2019 (DOI: https://doi.org/10.1016/j.peva.2019.102067)

In case of any questions about the method or the implementation, do not hesitate to contact us.

Download

The implementation of the procedure in Mathematica, Matlab and IPython environments is available for download from this GitHub repository. A separate file (iltcme.json) contains the pre-computed parameters of the CME distributions. All three implementations load this file on first use and rely on it for the inverse Laplace transformation.
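As an illustration only, loading the pre-computed parameters might look like the sketch below. The field names (`order`, `eta_re`, `eta_im`, `beta_re`, `beta_im`) are hypothetical placeholders; consult iltcme.json in the repository for the actual schema.

```python
import json

def load_cme_coefficients(path, order):
    """Load pre-computed CME coefficients of a given order from a JSON file.

    Assumes (hypothetically) that the file holds a list of records, each
    carrying an "order" field plus the real and imaginary parts of the
    eta_k and beta_k coefficients; the real file's schema may differ.
    """
    with open(path) as f:
        records = json.load(f)
    rec = next(r for r in records if r["order"] == order)
    eta = [complex(re, im) for re, im in zip(rec["eta_re"], rec["eta_im"])]
    beta = [complex(re, im) for re, im in zip(rec["beta_re"], rec["beta_im"])]
    return eta, beta
```

Because the coefficients are complex in general, storing real and imaginary parts separately keeps the file plain JSON.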

The Mathematica implementation also supports arbitrary-precision computations. The demo included in the notebook shows how much the Gaver method benefits from the increased precision, yet CME at machine precision still performed best for all functions we tested.

Demo

In the demo below you can play around with the CME method. You can inverse-Laplace-transform one of our test functions or your own function (matrices and complex numbers are also allowed in the expression; for the syntax, see mathjs.org).

The implementation is standard JavaScript; the results are computed by your browser on the fly, using machine-precision arithmetic. If you want to exclude a given method from the comparison, click its entry in the plot legend.


The demo form lets you specify: the Laplace transform to invert, \(h^*(s)\); the inverse transform \(h(x)\) (for plotting, if known); the X and Y axis ranges; and the maximum number of function evaluations.