Projecting an uncertain future, improving understanding of transportation modeling

Based on a master's thesis by Roel Stijl (UU), 2013, in collaboration with B. de Vries (UU) and B. Girod (ETHZ).


Executive summary:

Citizens on continents from Africa to Asia to Europe have one thing in common: they spend between 1 and 1.5 hours per day, about 5% of their lives, traveling. But how far do they travel? And by what means? Transportation is the third largest sector in terms of energy use. Over the last 30 years travel demand has grown at a staggering 3.7% per year, with petroleum products such as gasoline, diesel and kerosene being virtually the only source of energy, and CO2 emissions to match. Biofuels were hailed as a solution only a decade ago, but they are no silver bullet for the whole system, given land use issues and questionable CO2 neutrality. Hydrogen cars and electric vehicles might be the new solutions that could fill the gap. How many kilometers will be traveled in the future, and by what means? What will the impact of this be on our planet?
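The cumulative effect of a 3.7% annual growth rate can be illustrated with a quick compounding calculation (a generic arithmetic sketch, not a figure from the thesis):

```python
# Compound growth of travel demand at 3.7% per year over 30 years.
rate = 0.037
years = 30
multiplier = (1 + rate) ** years
print(f"Demand multiplier after {years} years: {multiplier:.2f}")
# At 3.7%/yr, total travel demand roughly triples over 30 years.
```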
Transportation modeling attempts to answer these questions. To this end, many different types of models exist, with different approaches, levels of detail, and predictions. Many models are designed for local policy makers; others predict the future of the planet. Whatever the model, if its results are to be interpreted correctly it is crucial to understand how it functions, how accurate it is, and how it relates to other models. Recently, several such comparisons have been published, but comparing the published results of multiple authors limits objectivity and the level of detail at which the models can be compared. This investigation recreates three established global transportation models and one newly constructed one: TIMER-Travel, GCAM, POLES, and the SIMPLE model.
The four models were distilled from publications, technical descriptions, and cooperation with their authors. The models were recreated in one framework with unified formulations, calibrations, and analysis methods. The data is taken from the established TIMER model and an extensive literature analysis. The models were calibrated and validated on historic data, author-published results, and other indicators. A methodology was devised to determine parameter ranges for the Monte Carlo simulations in an objective manner. The Monte Carlo method was used to produce probabilistic projections of travel demand and the implied CO2 emissions and energy use.
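The Monte Carlo step can be sketched in miniature as follows. The toy demand model, parameter names, baseline values, and sampling ranges below are illustrative assumptions, not the actual TIMER/GCAM/POLES formulations:

```python
import random

def demand_model(gdp_per_capita, elasticity, budget_share):
    """Toy travel-demand model: demand scales with income raised to an
    elasticity, capped by a travel money budget. Illustrative only."""
    base_demand = 10_000  # km per capita per year, assumed baseline
    demand = base_demand * (gdp_per_capita / 20_000) ** elasticity
    # Crude budget ceiling, assuming an average travel cost of $0.25/km.
    cap = budget_share * gdp_per_capita / 0.25
    return min(demand, cap)

def monte_carlo(n_runs=10_000, seed=42):
    """Sample uncertain parameters uniformly within assumed ranges and
    collect the resulting distribution of projected demand."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        elasticity = rng.uniform(0.8, 1.2)      # assumed range
        budget_share = rng.uniform(0.08, 0.15)  # share of income spent on travel
        results.append(demand_model(40_000, elasticity, budget_share))
    results.sort()
    return {
        "p5": results[int(0.05 * n_runs)],
        "median": results[n_runs // 2],
        "p95": results[int(0.95 * n_runs)],
    }

print(monte_carlo())  # probabilistic band instead of a single projection
```

The point of the exercise is the output shape: a percentile band per region rather than one deterministic trajectory, which is what allows the per-country filtering described next.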
The probabilistic projections were filtered per country on several criteria. This led to the rejection of the datasets for China and for developing nations in two of the models. The valid regions were then compared in terms of consistency for each model. It was found that these regions did not only have plausible projections; several robust predictions emerged from them. For example, industrialized countries are likely to double travel demand, and developing countries will increase theirs at least 8-fold. By 2100 the travel demand of the latter will be some six times greater than that of industrialized nations, starting from equal demands for both today. Aircraft end up with at least a dozen times the demand they have today.
Next, the investigation analyzes the different modeling approaches. Results show that travel money budget and travel time budget are good predictors of model error. Additionally, there is evidence that competition-based approaches perform better than per-mode growth approaches such as elasticities per mode. It is also concluded that elasticities are more complicated to model than as presented in the models investigated here; there are many dependencies. Fixes for these complications, such as saturation levels, do not fully remedy this. It is further found that a feedback mechanism of income on price is essential for a properly fitting model. Lastly, saturation is found to be vital for realistic convergence of developing regions toward industrialized ones.
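The contrast between a pure elasticity formulation and one with a saturation level can be shown with a minimal sketch; the hyperbolic saturation form and all numbers here are assumptions for illustration, not the formulations used in the thesis:

```python
def demand_elasticity(d0, y0, y, eps):
    """Pure per-mode elasticity: demand scales with the income ratio to the
    power of the elasticity, growing without bound as income grows."""
    return d0 * (y / y0) ** eps

def demand_saturating(d0, y0, y, eps, d_max):
    """Same income driver, but demand is damped toward an assumed saturation
    level d_max (hyperbolic damping; one of several possible forms)."""
    raw = demand_elasticity(d0, y0, y, eps)
    return d_max * raw / (d_max + raw)

# Assumed numbers: 5,000 km/cap at $10,000/cap income, elasticity 1.1,
# saturation at 30,000 km/cap.
for income in (10_000, 30_000, 100_000):
    print(income,
          round(demand_elasticity(5_000, 10_000, income, 1.1)),
          round(demand_saturating(5_000, 10_000, income, 1.1, 30_000)))
```

The unbounded variant keeps growing with income, while the saturating variant levels off, which is what makes convergence of developing regions toward industrialized demand levels plausible.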
Finally, the investigation makes recommendations for future improvements to models, model comparisons, and datasets. It is suggested, among other things, that combining the lessons learned could yield better models, that calibrations should be done worldwide (they often cover just the USA), and that improving datasets could yield more consistent results worldwide, even if this means using local data.

Last edited Apr 2, 2013 at 8:46 PM by roelstijl, version 8