Summary
In this book, we have presented a computationally efficient framework for the Bayesian analysis of survival, longitudinal, and joint models, built upon the INLA methodology to overcome the limitations of iterative, sampling-based techniques such as MCMC. Beyond the significant computational advantages of INLA, a core contribution of this work is its unified and accessible approach. We have demonstrated how a wide range of survival and longitudinal models (from proportional hazards and mixed-effects models to more complex multi-state, cure, and zero-inflated structures) can be handled within a single framework. We have treated survival and longitudinal models as building blocks that can be combined into sophisticated joint models, allowing for a unified analysis that accounts for important features of the data such as measurement error, informative censoring, or spatial autocorrelation.
The user-friendly software and detailed code examples provided throughout this book are designed to translate complex statistical theory into accessible practice, so that readers can conduct their own analyses. The INLAjoint R package was developed specifically to facilitate the use of the INLA methodology for survival and longitudinal data analysis, implementing the building-blocks approach for intuitive model construction. While most of the illustrative examples were based on simulated data, allowing the reader to better understand the underlying mechanisms of the models presented, we also included applications based on real clinical data to illustrate the framework’s capability to fit sophisticated models and extract meaningful insights from the complex data typical of modern biomedical research. This is particularly relevant to the advancing field of personalized medicine, where the ability to generate near real-time, individualized predictions from multi-outcome models is an important step forward. Ultimately, by removing computational barriers, the framework presented in this book does more than just answer old questions faster; it allows practitioners to move beyond questions of what is computationally feasible toward the pursuit of what is scientifically relevant.
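To illustrate the building-blocks construction mentioned above, the sketch below fits a simple joint model of a Gaussian longitudinal marker and a survival outcome with the INLAjoint package. It is a minimal sketch, not a definitive recipe: the data set names (longData, survData) and covariates are hypothetical, and argument values such as the baseline risk or association structure should be checked against the package documentation for the installed version.

```r
library(INLAjoint)  # assumes INLAjoint (and its INLA dependency) are installed

# Hypothetical data: longData holds repeated measurements of a biomarker
# (columns: id, time, marker, covariate); survData holds one row per subject
# (columns: id, eventTime, event, covariate).

fit <- joint(
  # Survival building block: right-censored times with a covariate effect
  formSurv = inla.surv(eventTime, event) ~ covariate,
  # Longitudinal building block: linear mixed model with random intercept/slope
  formLong = marker ~ time + covariate + (1 + time | id),
  dataSurv = survData,
  dataLong = longData,
  id       = "id",       # subject identifier linking the two blocks
  timeVar  = "time",     # time variable of the longitudinal submodel
  family   = "gaussian", # distribution of the longitudinal outcome
  basRisk  = "rw1",      # flexible (random-walk) baseline hazard
  assoc    = "CV"        # share the current value of the marker with the hazard
)

summary(fit)
```

The two formulas are exactly the survival and longitudinal building blocks discussed in the text; the assoc argument is what combines them into a joint model, and swapping it (e.g., to a current-slope or shared-random-effects association) changes the linkage without rewriting either submodel.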