Researchers re-evaluate how we value transportation

Transportation agencies and metropolitan planning organizations often wrestle with how to properly value transportation investments, especially when it comes to things that can’t be measured in terms of vehicle delay, such as multimodal access and environmental justice. Some of these challenges are tackled in a new issue of Research in Transportation and Business Management, edited in part by SSTI.

The special issue, titled “[Re]Evaluating How We Value Transportation,” was co-edited with Wes Marshall from the University of Colorado and Dan Piatkowski from the University of Nebraska. It features 16 articles from leading researchers and practitioners throughout the U.S. and Canada, along with an introductory editorial from the editors.

Those familiar with SSTI’s recent work in the development and implementation of accessibility metrics may be interested in a paper describing a new measure of non-work accessibility (available free until June 8), which the Virginia Department of Transportation recently implemented in its Smart Scale project prioritization process. Other topics include:

  • Emerging best practices in urban transportation
  • The early impacts of California’s shift from LOS to VMT in measuring transportation impacts
  • Active transportation metrics, intangible benefits, and approaches to funding
  • Reframing traffic safety, including a defense of urban street trees
  • Challenges in properly measuring housing and transportation costs
  • Urban parking demand and lessons in parking policy implementation
  • Curb use and surge pricing in the age of ride hailing
  • Methods for evaluating the benefits of freight movement
  • Problems with static traffic assignment in travel-demand modeling

**

Research in Transportation Business & Management, Volume 29, December 2018, Pages 26-36

Non-work accessibility and related outcomes

Accessibility metrics, which describe the ease of reaching destinations, are widely recognized as valuable indicators of transportation system performance. After decades of academic research on the subject, accessibility metrics are gaining use in practice. The most notable applications, however, focus solely on access to jobs. While commute travel is closely related to peak period travel demand, it makes up only a small share of overall travel.

This study presents a measure of local access to a wide range of non-work destinations, calibrated using data from Virginia. It focuses specifically on walking access as a proxy for multimodal accessibility and supportive land uses. The metric is intuitive, can be calculated using available software and data, and relates to important outcomes such as travel behavior and economic productivity. This work presents an opportunity for practitioners to incorporate accessibility metrics in various decision-making applications and improve upon them as their knowledge and use of these metrics grow.
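
As a rough illustration of the kind of calculation such a metric involves, here is a minimal sketch of a cumulative-opportunities walk-access measure: count the destinations reachable within a walk-time budget from each origin. The network, threshold, and attribute names are illustrative assumptions, not the specification implemented in Virginia.

```python
# Minimal sketch of a cumulative-opportunities accessibility metric
# (illustrative only; not the specification used in the paper).
import networkx as nx

def walk_access(G, destinations, origin, max_minutes=15):
    """Count destinations reachable from `origin` within a walk-time budget.

    G: networkx graph with a 'minutes' edge attribute (walk time per link)
    destinations: dict mapping node -> number of destinations at that node
    """
    # Travel times to every node reachable within the budget
    times = nx.single_source_dijkstra_path_length(
        G, origin, cutoff=max_minutes, weight="minutes")
    return sum(destinations.get(node, 0) for node in times)

# Toy example: a three-node walk network
G = nx.Graph()
G.add_edge("home", "corner", minutes=5)
G.add_edge("corner", "main_st", minutes=8)
dests = {"corner": 2, "main_st": 10}  # e.g., shops at each node
print(walk_access(G, dests, "home"))  # 12: both nodes are within 15 min
```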

**

Cities and the future of urban transportation: A roadmap for the 21st century

Introduction

Lively, diverse, intense cities contain the seeds of their own regeneration, with energy enough to carry over for problems and needs outside themselves. — Jane Jacobs, 1961, p. 448

In the United States, the transportation profession is undergoing a period of profound transformation. For much of the last century, transportation projects have been underwritten, in large part, by federal gas tax revenues, which are collected by the United States Department of Transportation and reallocated to states, metropolitan areas, cities, and transit operators to fund the construction of transportation projects of federal interest. This arrangement has made cities and regions supplicants for federal largesse, promoting projects that meet the requirements of specific federal grant programs, which often emphasize automobile-oriented performance measures, such as level of service and vehicle delay, over other objectives. The federal gas tax was last increased in 1993, and there is little political willingness, at either the state or federal level, to raise it. This financial climate, coupled with increasingly vocal advocacy for traffic safety and livability, has encouraged municipal transportation agencies to develop new approaches to addressing urban transportation needs.

This paper provides a synthetic review of these changes in professional practice, detailing the underlying factors that motivate them and illustrating how they can be understood as part of a broader, more comprehensive revolution in urban transportation. It aims to provide professionals and practitioners with a broad, high-level lens through which to understand the current policy debates and the seemingly diverse, ad-hoc practices that have proliferated in U.S. cities over the last decade. Considered holistically, these practices can be understood as a professional response to the changing social and financial context of urban transportation, which has encouraged cities to focus increasingly on two urban development objectives: value capture and livability. These in turn cluster into six associated practices:

1. Focus on moving people, not cars
2. Promote short trips
3. Protect vulnerable users
4. Encourage lingering
5. Trial and error
6. Leverage the powers of public transit authorities

This paper details these concepts and practices and provides examples of their current use. Nonetheless, the effects of many of these practices are often inferred rather than demonstrated, and there has been little systematic evaluation of their effectiveness or of how they relate to the broader regional transportation planning process. The paper concludes by detailing the research and evaluation program needed to begin making informed decisions, highlighting the important supporting role that researchers must play in these efforts.

Two notes on the scope of this review are warranted. First, while many of these practices have been adopted in European countries, this paper focuses specifically on planning practice in the United States, which operates under a federal framework in which transportation decisions are filtered through state and regional transportation agencies. Nevertheless, many of these practices either originated or have been significantly advanced in the U.S., and they connect to economic factors that apply to cities more generally. As such, the paper is relevant to an international audience seeking to relate specific transportation decisions to broader urban development objectives. Second, because of the wide-reaching nature of this review, it is not intended to be a comprehensive assessment of the individual practices. Specific practices are examined in light of the relevant scholarly literature, but the primary focus is to synthesize these practices into a broader framework through which to understand them. Critical reflection on the implications of these practices is both needed and missing. This article thus concludes by presenting the specific measures that we believe will be critical for evaluating the success of these practices, as well as the critical role that academics must play in providing a base of empirical knowledge on which to base informed decision making.

http://dx.doi.org/10.1016/j.rtbm.2017.09.001

**

Leaving level-of-service behind: The implications of a shift to VMT impact metrics, by Amy E. Lee and Susan L. Handy

Concern about climate change has led to policies in California that aim to decrease greenhouse gas (GHG) emissions from transportation. Although these policies mostly promote technological innovations, some policies aim to reduce GHG emissions by reducing the amount of driving, measured in vehicle miles traveled (VMT), through land use and transportation planning. The focus on VMT reduction represents a dramatic shift for the land use and transportation planning fields, which have traditionally prioritized auto mobility by reducing vehicle delay, measured as level of service (LOS). California has taken the bold step to replace LOS with VMT as the metric of transportation impact in the environmental review process for land use and transportation plans and projects under the California Environmental Quality Act (CEQA).

This study compares these two metrics – VMT and LOS – and their implications for a sample of land use projects located in Davis, California. We compare the LOS impacts analyzed in the environmental impact reports for the projects to forecasted VMT impacts that we quantify using several available VMT estimation models. Our analysis of LOS mitigation shows how the CEQA process per se impacts the built environment, often in ways that increase vehicle capacity and thus VMT. We find that a switch to VMT metrics may lead to streamlining for projects that reduce travel demand because of their location or design, whereas LOS metrics have led communities to build expensive, capacity-increasing mitigation measures to ease vehicle delay. Finally, we show that the vehicle capacity constructed to mitigate LOS may contravene the goals and aspirations of many communities in California, as well as the state’s goals for GHG reductions, and is unlikely to solve the congestion problem caused by misplaced land use development.
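
To make the contrast concrete, here is a hedged sketch comparing the two lenses for a hypothetical development project. The trip rates and trip lengths are invented for illustration; they are not the VMT estimation models used in the study.

```python
# Toy contrast of LOS-style vs. VMT-style project evaluation.
# All numbers are hypothetical; the study used CEQA EIR data and
# several published VMT estimation models, not this arithmetic.

def project_vmt(dwelling_units, trips_per_unit=9.0, miles_per_trip=6.0):
    """Crude sketch: daily VMT as units x trip rate x average trip length."""
    return dwelling_units * trips_per_unit * miles_per_trip

# An infill site near destinations vs. an edge site far from them.
infill = project_vmt(200, trips_per_unit=7.0, miles_per_trip=3.5)      # 4,900 VMT/day
greenfield = project_vmt(200, trips_per_unit=9.0, miles_per_trip=8.0)  # 14,400 VMT/day

# Under a VMT metric the infill project looks better; under LOS, the
# infill project may look *worse*, because it adds traffic to already
# busy intersections and can trigger capacity-expanding "mitigation."
print(f"infill: {infill:,.0f} VMT/day, greenfield: {greenfield:,.0f} VMT/day")
```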

**

Affordable for whom? Introducing an improved measure for assessing impacts of transportation decisions on housing affordability for households with limited means

The close connection between transportation and housing affordability has gained widespread recognition in recent years, largely in response to two related concerns. Yet the ways we typically measure housing affordability have received very little critical attention, particularly as they relate to transportation decision-making. This study addresses this gap by critically reviewing the two most common measures, which I suggest are limited by their inability to account for the large effects of household income and characteristics, as well as variations in transportation costs (in the case of one approach). In light of these shortcomings, I introduce an alternate measure – the location-sensitive residual income (LSRI) approach – which reflects the realities facing households more fully by incorporating differences associated with household income, composition, childcare requirements, and residential location. An application of the LSRI approach in the Denver metro area suggests that measures generated using LSRI and more typical approaches result in very different findings, and therefore very different implications about the challenges faced by households with limited means. Findings demonstrate that conclusions about the social impacts of transportation infrastructure and service on housing affordability are highly dependent on the measures used. I argue that the LSRI approach offers a substantially more nuanced means of evaluating the impacts and benefits associated with transportation decisions, and in particular, how decisions promote – or perhaps challenge – social justice among vulnerable populations.
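
The residual-income logic is easy to express in code. The sketch below is a hypothetical, simplified version of that logic, not the paper’s LSRI implementation: a location counts as affordable if income minus housing, location-specific transportation, and household-specific necessities such as childcare still covers everything else.

```python
# Hedged sketch of a residual-income affordability test.
# Budget figures are placeholders, not the paper's LSRI parameters.

def residual_income_affordable(income, housing, transport, childcare,
                               other_necessities):
    """Return (residual, affordable?) after location-sensitive costs."""
    residual = income - housing - transport - childcare
    return residual, residual >= other_necessities

# Same household, two locations: transit-rich core vs. car-dependent edge.
# A 30%-of-income housing rule would flag the core (rent 44% of income)
# and pass the edge (rent 28%); the residual-income test reverses that,
# because the edge location's transportation costs eat the savings.
core = residual_income_affordable(3200, housing=1400, transport=250,
                                  childcare=600, other_necessities=900)
edge = residual_income_affordable(3200, housing=900, transport=850,
                                  childcare=600, other_necessities=900)
print(core)  # (950, True)
print(edge)  # (850, False)
```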

**

An adaption of the level of traffic stress based on evidence from the literature and widely available data, by Cary Bearn, Charlene Mingus, and Kari Watkins

Bicyclist quality of service measures are often difficult to apply on the network rather than facility level. Analyzing bicycle infrastructure on the network level is a critical process for managing bicycle infrastructure planning, design, and construction. The Level of Traffic Stress (LTS) measure fills this need for a network level measure. However, the originally proposed LTS measure leaves some gaps related to the designation of facilities and requires data that may be difficult to collect on a network level. The adapted LTS measure proposed here is based on traffic, roadway, and bikeway characteristics data available to most planning and engineering agencies and on evidence from the literature. The adapted LTS was used to classify and analyze bike network connectivity in two case studies to assess the methodology and demonstrate practical applications in infrastructure management. The first was a six-mile buffer zone of the Atlanta BeltLine Eastside Trail, and the second was a three-mile transit access zone around three transit stations in southwest Atlanta. The analysis was done in ArcGIS and provides results that can be easily interpreted by the public and decision makers, while relying on quantifiable traffic and roadway characteristics.
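
For flavor, the sketch below shows how an LTS-style classification can be driven by a few widely available link attributes. The thresholds are simplified placeholders and do not reproduce the adapted LTS criteria proposed in the paper.

```python
# Simplified, hypothetical LTS-style classifier (not the paper's criteria).
# LTS 1 = low stress (comfortable for most adults and children) ... LTS 4 = high stress.

def level_of_traffic_stress(speed_mph, lanes_per_dir, bike_facility):
    """Classify a link from speed limit, lanes, and bikeway type."""
    if bike_facility == "separated_path":
        return 1  # physical separation keeps stress low regardless of traffic
    if bike_facility == "bike_lane":
        if speed_mph <= 25 and lanes_per_dir == 1:
            return 2
        return 3 if speed_mph <= 35 else 4
    # Mixed traffic: stress rises quickly with speed and roadway width
    if speed_mph <= 25 and lanes_per_dir == 1:
        return 2
    return 4

links = [(20, 1, "bike_lane"), (30, 2, "none"), (45, 2, "separated_path")]
print([level_of_traffic_stress(*link) for link in links])  # [2, 4, 1]
```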

**

We count what we care about: Advancing a framework for valuing investments in active modes, by Daniel Piatkowski and Wesley Marshall

Transportation investments are traditionally valued through a “mobility-based” transportation planning paradigm focused on user volume. Common metrics include counts, peak hour level of service, peak hour delay, and travel time – and the attention has been almost exclusively on automobile travel. In recent years, more communities are investing in active modes, and with this influx in investments have come pressure to evaluate their impact(s). Bicycle and pedestrian count programs are increasingly called for as they appear to offer a seemingly straightforward metric for valuing investments in active modes. However, this push towards bicycle and pedestrian count programs has come without a theoretical foundation for the role of count data in the planning process. In this paper, we first assess the validity of count data applied to a range of applications in planning for active modes of transportation. We then reflect on the benefits and limitations of count data in the planning process, focusing on the issues related to prioritizing count data as the primary means by which we value investments in active modes. Finally, we propose a new conceptual framework for planning for active modes, making the case for additional metrics that more accurately reflect the myriad benefits of these modes in the planning process.

**

Measuring the wind through your hair? Unravelling the positive utility of bicycle travel, by Kevin J. Krizek. https://doi.org/10.1016/j.rtbm.2019.01.001

The intrinsic qualities of bicycling in urban areas are oft-asserted and difficult to measure. These benefits may therefore be undervalued. Considering that the majority of research on bicycling in cities has captured functional characteristics (e.g., travel time, cost, health), less is known about how bicycling provides intrinsic benefits (e.g., the feeling of wind in one’s hair, social cohesion, stress relief) and how such benefits could be incorporated into travel analysis. I argue how and why access to a destination by bicycling can be more valuable than access by other modes, largely owing to intrinsic qualities, and point to opportunities and challenges for measurement. Drawing from the idea that travel oftentimes has a positive utility, coupled with the emerging research base on how bicycling advances emotional well-being, I extend a framework based on measuring access to destinations and point to future challenges and opportunities in doing so.

**

Valuing freight transport: A Canadian example of the role of selected methodologies, by Joe Rowsell, Mary R. Brooks, Kristian Behrens, Trevor Heaver, and John Lawson

This paper examines four methods used to assess the value of the benefits and costs of freight transportation from a broader lens than just economic value. As with any mode of transportation, the benefits and costs of freight transport are numerous and diverse, and there is no widely agreed-on approach for assessing the values. The key focus of the paper is to examine how transport industries and modes are valued. However, the methodologies ultimately chosen need to reflect the purpose (or objective) of the valuation. In exploring the strengths and limitations of four methods of assessing value, this paper may assist others exploring the value of various modes of transportation. The paper concludes that adoption of more than one approach to measuring value provides a more holistic understanding of the role played by a mode of transportation, whether the perspective is one of government, industry or citizen.

**

Urban clear zones, street trees, and road safety

  • Existing research raises doubts regarding the efficacy of clear zones on road safety.
  • Road safety outcomes are particularly questionable for street trees in urban contexts.
  • We mapped tree canopy and street-tree locations in GIS for Denver, Colorado.
  • Larger tree canopies were associated with fewer injury/fatal and total crashes.
  • Municipalities and transport agencies should reassess clear zone safety outcomes.

The roadside area where fixed-object hazards are explicitly minimized is called the clear zone, which became standard design practice soon after the 1966 Congressional hearings on road and automobile safety. Mounting evidence, however, is beginning to cast doubt on what we think we know about the impact of roadside clear zones on actual safety outcomes. This is particularly an issue with street trees in urban contexts, which are known to provide economic, environmental, and livability benefits but are also widely considered a road safety detriment.

This research relies upon advances in remote sensing to map both tree canopy and street-tree locations in GIS for the entirety of the city and county of Denver, Colorado. We then statistically test the association between street trees and seven years of road safety outcomes while controlling for factors known to be associated with crash outcomes. Despite fifty years as standard design practice, our results suggest that the expected safety benefit of roadside clear zones – at least with respect to street trees in an urban context – may be overstated. In fact, larger tree canopies that extend over the street are associated with fewer injury/fatal crashes as well as fewer crashes overall, holding all other variables constant. The number of street trees per mile is associated with improved safety in wealthier neighborhoods but can be detrimental in low-income neighborhoods; this inconsistency represents an equity issue in need of future research. When assessing the safety impact of street trees in the clear zone, municipalities and transportation agencies need to be more cognizant of how street design may impact road user behaviors, particularly related to issues that directly affect safety such as travel speeds and driver awareness.
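
A hedged sketch of the kind of count model such an analysis might use, here a Poisson regression of segment crash counts on canopy share with a couple of controls (a negative binomial specification is more common for overdispersed crash data). The variable names and data are invented; this is not the paper’s exact specification.

```python
# Illustrative crash-count model (not the paper's exact specification).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical street-segment data: crash counts, canopy share, controls.
df = pd.DataFrame({
    "crashes":    [4, 1, 7, 2, 0, 5, 3, 1, 6, 2],
    "canopy_pct": [10, 45, 5, 30, 60, 8, 25, 50, 12, 35],
    "aadt_000s":  [12, 6, 18, 9, 4, 15, 10, 5, 16, 8],
    "speed_mph":  [35, 25, 40, 30, 25, 40, 30, 25, 35, 30],
})

# Poisson keeps this toy example simple and stable on ten rows.
model = smf.poisson(
    "crashes ~ canopy_pct + aadt_000s + speed_mph", data=df).fit()
print(model.params)  # a negative canopy_pct coefficient would echo
                     # the paper's finding: more canopy, fewer crashes
```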

**

“A little bit happy”: How performance metrics shortchange pedestrian infrastructure funding

by Carrie Makarewicz, Arlie Adkins, Charlotte Frei, and Audrey Wennink

After decades of inattention to the issue, cities and regions increasingly recognize the role of pedestrian infrastructure in improving safety, public health, air quality, accessibility, travel choices, and economic development. But extraordinary gaps exist between pedestrian infrastructure needs and what is funded and built. To understand why this gap persists, even as attention to pedestrian issues grows, we conducted 50 interviews about pedestrian funding with transportation professionals from different levels of government in three regions that have prioritized active transportation: Chicago, Illinois; Denver, Colorado; and Portland, Oregon. We analyzed the interviews along with each region’s transportation plans, fiscally constrained budgets, and other policy and planning documents. Our analysis revealed three systemic barriers at the regional level that perpetuate the underfunding of pedestrian infrastructure: (1) overall transportation funding shortages made worse by the substantial and growing burden of operating and maintaining aging regional mobility systems; (2) performance and evaluation metrics used in funding decisions that are biased toward regional mobility rather than accessibility; and (3) the relatively small scale of individual pedestrian projects, which often keeps them from being considered regionally significant or scoring highly on metrics related to regional impact. In addition to identifying the need for additional funding sources, the regions we studied used other strategies to address these challenges that may offer lessons for other regions. These include: collecting new data and establishing performance measures that better capture the benefits of active travel modes and their unique contributions to broad policy goals; coordinating across a region to bundle pedestrian projects into larger funding packages that can meet regional significance criteria; and creating regional pedestrian plans that demonstrate how smaller pedestrian projects contribute to regional goals.

**

Toward more comprehensive evaluation of traffic risks and safety strategies

Despite large investments in traffic safety programs and technologies, motor vehicle accidents continue to impose high social costs. New strategies will be needed to achieve ambitious traffic safety targets such as Vision Zero. Recent research improves our understanding of factors that affect traffic risks and ways to increase traffic safety. Applying this knowledge requires a paradigm shift, a change in the way we define problems and evaluate potential solutions. The old paradigm assumed that driving is generally safe and favored targeted safety programs that reduce special risks such as youth, senior, impaired and distracted driving. The new paradigm recognizes that all vehicle travel imposes risks, and so, in addition to targeted programs also supports vehicle travel reduction strategies such as more multimodal planning, efficient transport pricing, Smart Growth development policies and TDM programs. These strategies tend to provide large co-benefits, in addition to safety. This article examines our emerging understanding of traffic risks and new ways to increase safety.

**

Transition costs and transportation reform: The case of SFpark

This article describes and analyzes the backlash that arose when San Francisco attempted to expand its program of dynamic parking pricing, called SFpark, into an unmetered neighborhood. I place this episode in the context of the literature on transition costs and policy reforms: many beneficial policies are stymied because they cannot overcome a period of initial opposition. Compared to other policy reforms, transportation pricing should be less vulnerable to such transition costs, because it generates revenue that policymakers can use to reduce political opposition. As I demonstrate, however, when revenue is not used with political considerations in mind, it can exacerbate rather than mitigate political conflict.

**

Giving parking the time of day: A case study of a novel parking occupancy measure and an evaluation of infill development and carsharing as solutions to parking oversupply, by Calvin G. Thigpen

In the US, parking is oversupplied in both residential and commercial settings, a consequence of the widespread application of minimum parking standards, which typically supply excess parking for peak-hour demand. Yet the evidence base for specific parking reforms is thin, and alternatives to the peak-hour metric remain under-explored. In this illustrative case study, I study time-of-day parking occupancy in a 146-unit apartment complex in Davis, CA to obtain preliminary results, demonstrate new data collection and analysis methods that can be applied at a wider scale, and explore policy analyses that are facilitated by this novel, detailed parking behavior information. First, I systematically observe parking occupancy in the apartment complex to understand hourly variation. I find that the peak and off-peak occupancy rates in the apartment complex are 55% and 34%, respectively. I calculate that the 45% of spaces left unused across the day could be converted to roughly 32 new townhouses. I also estimate a multilevel latent class analysis to identify distinct patterns of household parking use and find that 54% of households could be likely candidates for carshare adoption. I conclude by noting implications for housing availability, shared parking standards, the rise of shared and autonomous vehicles, and future research.
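
As a sketch of the time-of-day analysis this enables (with invented numbers, not the study’s observations), the snippet below summarizes hourly occupancy counts into rates and estimates the share of supply that sits empty even at the daily peak.

```python
# Toy time-of-day parking occupancy summary (numbers are invented).
import pandas as pd

TOTAL_SPACES = 200  # hypothetical supply for a small complex

# Hourly occupancy counts from repeated observation rounds
obs = pd.DataFrame({
    "hour":     [2, 2, 8, 8, 14, 14, 20, 20],
    "occupied": [110, 104, 90, 86, 62, 70, 98, 92],
})

rate_by_hour = obs.groupby("hour")["occupied"].mean() / TOTAL_SPACES
print(rate_by_hour.round(2))          # occupancy rate by hour of day
print("peak rate:", rate_by_hour.max())
# Spaces that are empty even at the daily peak are candidates for
# conversion (infill housing) or for shared-parking arrangements.
never_used = TOTAL_SPACES * (1 - rate_by_hour.max())
print("never-used spaces:", round(never_used))
```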

**

Investigating Uber price surges during a special event in Austin, TX, by Junfeng Jiao

The purpose of this study was to evaluate the characteristics of Transportation Network Company (TNC) Uber’s surge pricing during a special event. Using data collected through Uber’s developer API over the 2015 Fourth of July weekend, this research investigated the form of price surge multipliers during periods of high demand. Regression models showed that surge price was not correlated with ride wait time on July 3, July 4, or July 5, but it was correlated with ride request time on all three nights. July 4 had the strongest correlation and more instances of surge pricing, and those instances were greater in magnitude than on the other evenings studied. This research has practical implications for transportation planners in that it reveals the obscurity of the price surge mechanisms. The unpredictability and lack of transparency surrounding surge pricing pose challenges for those working to incorporate TNCs into a city’s transportation operations.
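
A minimal sketch of the kind of regression check the study describes, using fabricated data: regress the surge multiplier on request time and on wait time, and compare the fits.

```python
# Toy version of the surge-price correlation check (fabricated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200
request_hour = rng.uniform(18, 26, n)   # 6 PM to 2 AM
wait_minutes = rng.uniform(2, 12, n)    # estimated pickup wait
# Surge rises with late-night demand but (as the study found) is
# essentially unrelated to the quoted wait time.
surge = 1.0 + 0.15 * (request_hour - 18) + rng.normal(0, 0.3, n)

for name, x in [("request_hour", request_hour), ("wait_minutes", wait_minutes)]:
    result = stats.linregress(x, surge)
    print(f"{name}: slope={result.slope:.3f}, p={result.pvalue:.3g}")
# Expected: a significant slope for request_hour, none for wait_minutes.
```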

**

Forecasting the impossible: The status quo of estimating traffic flows with static traffic assignment and the future of dynamic traffic assignment

Roadway expansion proposals are evaluated primarily with travel time metrics, including vehicle hours traveled (VHT) and vehicle hours of delay (VHD). Travel time metrics have been criticized for ignoring other travel modes, over-emphasizing mobility over accessibility, and failing to account for economic externalities. However, there is an even more fundamental problem. The travel time metrics are inaccurate because they rely on Static Traffic Assignment (STA), a 40-year-old approach that routinely forecasts infeasible future traffic flows that exceed capacity. Basing metrics on these impossible volumes produces invalid results. The common practice of exporting link volumes or subarea trip tables to microsimulation fails to address the STA problem because the unrealistically high STA traffic forecasts are forced onto a capacity-constrained network. Inaccurate travel time modeling helps to explain why so many roadway projects fail to deliver promised travel time savings. Replacing STA with Dynamic Traffic Assignment (DTA) produces more realistic metrics. A case study from the Portland Maine region is presented in which STA and DTA are compared with the same inputs. The DTA model fits base year traffic counts much better, and it produces much lower and more realistic estimates of congestion relief from freeway widening.
1. Introduction

Transportation project planning (roadway and transit) and regional transportation plans are analyzed using regional transportation models. These models forecast future traffic volumes and travel times for different alternatives and are used to evaluate the impacts and benefits of different projects and plans.

There have been many advances in regional transportation modeling over the past 40 years, but the approach used for vehicle assignment and estimating travel speeds remains unchanged. The primary reason appears to be a lack of understanding by both modelers and managers that there are major problems. This approach (Static Traffic Assignment, or STA) does a poor job of accounting for peak period freeway congestion. There are available methods (Dynamic Traffic Assignment, or DTA) that do a much better job of accounting for peak period freeway congestion. So far, DTA has attracted a following of researchers in limited applications but has not been adopted as a general substitute for STA.

This paper:

1) Discusses STA problems at length, because they are not widely known or appreciated, particularly outside of a group of DTA researchers

2) Uses a regional model of the Portland Maine region that uses both DTA and STA to demonstrate:

  a. Regional DTA is practical
  b. Regional DTA and STA produce different results for the same alternatives
  c. The DTA metrics are a better basis for planning

3) Describes why microsimulation cannot correct for STA problems

4) Discusses how STA problems help to explain why freeway expansion projects have so often failed to deliver promised congestion relief

Reliance on STA has encouraged over-investment in ineffective freeway expansion. Switching to DTA would support decision making based on more accurate information.

1.1. Background: Static traffic assignment

Regional transportation models estimate traffic volumes and speeds for thousands of roadway segments for different times throughout the day. Since these models were first implemented on computer mainframes, they have become more complex in some model components. However, the fundamental way that travel time is estimated is unchanged since the 1970s (Boyce, 2004).

STA models estimate travel speeds and delay for each roadway segment in the network based on the volume-to-capacity (V/C) ratio. The classic formula is the Bureau of Public Roads (BPR) equation (Cambridge Systematics, 2012):

t_i = t_{0i} \left( 1 + \alpha \left( \frac{v_i}{c_i} \right)^{\beta} \right)

where: t_i = congested travel time on link i; t_{0i} = free-flow travel time on link i; v_i = traffic volume on link i per unit of time; c_i = capacity of link i per unit of time; α = alpha coefficient; β = beta coefficient.
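
Transcribed into code, the BPR function makes the paper’s complaint easy to see: nothing caps v at c, so the curve cheerfully returns finite travel times for volumes above capacity. The coefficients α = 0.15 and β = 4 below are the common textbook defaults, not values from any model cited here.

```python
# BPR volume-delay function; alpha=0.15, beta=4 are common textbook
# defaults, not values from any specific model cited in this paper.

def bpr_travel_time(t0, volume, capacity, alpha=0.15, beta=4.0):
    """Congested link travel time from free-flow time and V/C ratio."""
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

t0 = 1.0  # minutes per mile at free flow
for vc in (0.5, 0.9, 1.0, 1.1, 1.3):
    t = bpr_travel_time(t0, vc * 2000, 2000)
    print(f"V/C={vc:.1f}: {t:.2f} min/mile")
# V/C=1.1 yields a finite 1.22 min/mile: the curve happily "models"
# a volume 10% above capacity instead of flagging it as impossible.
```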

Some models use non-BPR delay equations, but these equations share the same characteristics: a) they treat each roadway segment as independent, b) higher V/C equals more delay, and c) modeled traffic volume can and often does exceed capacity.

The meaning of “capacity” in these equations has been confused by inconsistent usage in the past, but current best practice is to use “ultimate capacity”, i.e. the “maximum volume that should be assigned to a link by the forecasting model” (Cambridge Systematics, 2012).

Planning agencies commonly make statements about future traffic volumes exceeding capacity. Examples include:

Chicago region: “Maps projecting future traffic patterns still showed roads that would be over capacity in 2040, despite improvements …” (Chicago Tribune, 2016)

Kansas City region: “If a segment experienced five or more hours with a V/C ≥ 1, it was labeled as severe congestion. Segments with three to four hours with a V/C ≥ 1 were moderate and segments with two hours or less with a V/C ≥ 1 were normal or no congestion.” (Mid-America Regional Council)

Tampa region: “Cost estimates have been prepared for capacity projects that serve clusters of 5,000 or more jobs and that improve a major road with 2040 traffic volumes that are 30% or more over capacity.” (Plan Hillsborough, 2014)

When “capacity” means “ultimate capacity”, these types of statements make no sense. Traffic volumes cannot exceed ultimate capacity for more than a few minutes before the system collapses into stop-and-go conditions with traffic flow significantly lower than capacity. Over-capacity model results are not accurately describing the future, but instead are highlighting serious model errors.

Modelers use a wide range of STA volume-delay curves as illustrated in Fig. 1.

Fig. 1. Modeled speed as a function of V/C, from Cambridge Systematics (2012). The red box highlights V/C > 1 conditions, which are impossible except for short, transient periods.

Note that in Fig. 1, all the models represented calculate speeds for impossible over-capacity conditions (red box). For example, the illustrated models translate V/C of 110% into speeds ranging from about 30 to 80% of free-flow speed. In fact, there should be no case where V/C exceeds 100%, but a curve that steep would prevent the STA approach from converging. The steeper functions represent attempts by those modelers to reduce the number of vehicles assigned above roadway capacity. The less steep functions represent attempts by other modelers to get enough vehicles assigned to the less congested roadway sections to match traffic counts. Neither approach works well. If V/C really could be translated accurately into speed and delay, there presumably would not be such a wide range of parameters in use.

The STA problems cannot be solved by using different capacity numbers. Prior to the consensus on using ultimate capacity, common practice was to use a level-of-service C capacity with significantly lower values. These lower capacity values were paired with different delay parameters, but the problems were the same. A wide range of capacities and delay parameters have been tried. Some modelers arbitrarily set a minimum speed, e.g. 10 m.p.h. – but there is no way to set a maximum traffic flow in STA. Therefore, the minimum speed simply cuts off the feedback of longer travel times that is intended to limit traffic volume. The fundamental problems with STA are that road segments are not independent, and the speed of a segment cannot be determined from the traffic volume. There is no set of parameters that can address these problems.

STA was adopted in the 1970s because it worked on the computers of that time and gave traffic forecasts that were roughly correct, especially for estimates of daily traffic volumes in existing conditions. However, STA has two fundamental problems that make it ill-suited at analyzing peak period congestion. First, most peak period congestion, especially on freeways, involves traffic queuing behind bottlenecks. Therefore, the roadway segments are not independent, as is assumed in STA. Second, these bottlenecks meter traffic flow to the capacity of the bottlenecks. In sharp contrast, STA allows modeled traffic volumes to exceed capacity. This misrepresents traffic not only on the over-capacity segment, but on downstream segments that the excess traffic could not really reach because it either would divert to other routes or be queued upstream.
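
The bottleneck-metering behavior is easy to see in a toy point-queue model, the general family of approach used by queue-based DTA engines (a hedged sketch, not any particular package’s algorithm): outflow is capped at capacity, excess demand accumulates as an upstream queue, and downstream links never receive more than capacity.

```python
# Toy point-queue bottleneck: demand above capacity queues upstream
# instead of flowing through (unlike STA, which passes it downstream).
# Illustrative only; not the algorithm of any specific DTA package.

CAPACITY = 1800  # vehicles/hour through the bottleneck

demand = [1500, 2200, 2400, 1900, 1200]  # arriving vehicles, hour by hour
queue = 0
for hour, arrivals in enumerate(demand):
    wanting_through = queue + arrivals
    served = min(wanting_through, CAPACITY)   # metered at capacity
    queue = wanting_through - served          # the rest waits upstream
    print(f"hour {hour}: served={served}, queue={queue}")
# Downstream volume never exceeds 1800/hr; the congestion shows up as
# a growing queue (delay), which is what drivers actually experience.
```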

The current guide to best modeling practice states: “… static equilibrium procedures will continue to be used for regional modeling for the time being” (Cambridge Systematics, 2012). Therefore, these problems are present in STA-based congestion metrics in every region in the United States. An extreme example is presented in depth as an illustration.

1.2. I-405 in Orange County, California

After a decade of planning, the $1.9 billion widening of I-405 in Orange County, California began construction early in 2018 (Emery, 2018). This roadway is highly congested today. Nevertheless, the project planning documents forecast that traffic will get dramatically worse without widening. For example, a 13-mile morning southbound trip that took 37 min in the 2009 base model is forecast to take 2 h and 43 min in the 2040 No Build alternative (California Department of Transportation, 2015). This corresponds to a future average travel speed of 5 m.p.h.

Appendix 1 of the Traffic Study (Albert Grover & Associates, 2011) for this project is helpful in illustrating problems with STA. Fig. 2 reproduced from Appendix 1 shows thousands of real-world data points for flow and speed on I-405.

Fig. 2. I-405 speed and flow data (Albert Grover & Associates, 2011).

Fig. 2 shows:

  • For traffic volumes below 1000 vehicles per lane per hour, speeds cluster around the speed limit (70 mph).
  • There are few data points above 2000 vehicles per lane per hour, and none beyond about 2300 – indicating a firm upper bound on traffic throughput.
  • Between 1500 and 2000 vehicles per lane per hour, speeds vary widely, with most between 30 mph and 70 mph.
  • Between 1000 and 1500 vehicles per lane per hour, speeds are even more dispersed, with the largest cluster around 70 mph but a significant group between 10 and 20 mph.

The range of volumes that is relevant for planning lies between 1000 and 2000 vehicles per lane per hour. There is no congestion below this range, and above this range, traffic flow is unstable. In the 1000–2000 range that is critical for planning, there is no apparent relationship between volume and speed. The lack of a relationship is largely because freeway roadway segments are highly interdependent. Recurring freeway congestion often results from congestion at downstream mainline segments, on ramps, off ramps, and weave areas. Anyone who has driven much in southern California has experienced sudden slowdowns and even stoppages on freeways at any time of day that are apparently unrelated to traffic flow on the immediate segment. In Fig. 2, these conditions are illustrated in the many low-speed observations that occur at a wide range of traffic volumes.

While the idea that V/C can be translated into a speed is intuitive, it is inaccurate. Chiu et al. (2011) summarize the problems with V/C-based STA: “The drawback of using V/C is that it does not directly correlate with any physical measure describing congestion (e.g., speed, density, or queue).” Despite the absence of a clear and consistent relationship between flow and speed, STA requires fitting a monotonically decreasing volume-delay curve to these data. Fig. 3 shows the curve (red line) selected in the Traffic Study from several candidates.

Fig. 3. I-405 speed and flow data with the volume-delay curve selected in the Traffic Study.

As shown in Fig. 3, the volume-delay curve does a poor job of matching the data except perhaps for volumes of <1000 per lane per hour, i.e. cases of no importance for planning. Above 1000 vehicles per hour, most of the data fall to the left of the curve, i.e. typical speeds are lower than modeled speeds.

More critical yet, the volume-delay curve is extrapolated way beyond any observed traffic volumes. Extending a fitted curve past the range of observed data is poor practice. Southern California drivers daily conduct real-world experiments where they try to push traffic volumes up as high as they can. As shown in Fig. 2 and discussed above, the upper limit appears to be about 2300 vehicles per lane per hour, and even volumes this high are rare and likely short-lived. An accurate model should not forecast volumes above 2300 vehicles per lane per hour.

In the southbound AM peak hour case, the modeled 2009 base year volumes for different segments lie between the blue arrows labeled “2009.” These volumes (2035–2294 vehicles per lane per hour) appear to be somewhat unrealistically high relative to the data. This discrepancy may have been caused by adjustments to traffic counts and/or model errors.

The modeling for the 2040 alternatives is impossible without some major change such as transitioning to an autonomous fleet of vehicles. The forecast flows in vehicles per lane per hour are 2757–2979 in the preferred build alternative and 2905–3201 in the no build alternative. These model results represent impossible traffic conditions. All that is demonstrated is that the model is incapable of modeling the future accurately.

Nevertheless, this STA modeling is the basis for the entire Traffic Report. The remainder includes analyses for freeway mainline level of service, ramp level of service, ramp and ramp-freeway junction level of service, weaving, intersection level of service, queueing, and storage. These analyses all rely on the impossible traffic volumes from the STA model, and therefore are inaccurate representations of future traffic conditions.

The Traffic Study is the foundation for the entire Environmental Impact Statement (EIS). The EIS shows large time savings because the No Build scenario travel times are unrealistically long. The Build alternative travel times also are unrealistically long but look great by comparison. A commonly-used metric in modeling studies is vehicle hours of delay (VHD), which is calculated as vehicle hours of travel (VHT) minus vehicle hours of travel if free-flow speeds could be maintained. In the Traffic Study, VHD in the project area is forecast to increase by a factor of 21.6 between 2009 and 2040 in the No Build alternative, but only by a factor of 3.0 in the preferred alternative. This reduction is the core rationale for the project.

1.3. These issues are general to modeling in all regions

The Los Angeles region is particularly congested and continues to have strong population growth. The issues discussed above are more extreme in the Los Angeles region than in most other regions. However, traffic volumes typically exceed capacity on key roadway segments in STA models of less congested regions, even in base year conditions. Volumes exceeding capacity are commonplace in forecast horizon years in models for most regions.

Therefore, the problems described above are present in future freeway studies in all regions. Most freeway forecasts are from STA models. The only exceptions are studies that instead use trend extrapolations. Whether forecasts are derived from STA models or trend extrapolations, freeway expansion studies generally forecast traffic volumes that exceed capacity in the No Build alternative.

These issues also are present in long-range regional transportation plans because they all rely on regional STA models (Cambridge Systematics, 2012). For example, the 2015 regional transportation plan for the greater Austin region states that regional VHD per person will increase by a factor of 4.5 between 2009 and 2040 in the No Build alternative, and by a factor of 3.0 if $35 billion is spent on capital improvements (primarily freeway expansion) (CAMPO, 2015). Examination of model files shows modeled freeway traffic volumes that are often much higher than capacity in both 2009 and 2040. These impossible traffic volumes produce inaccurate VHD estimates. Slower-growing regions forecast less extreme growth in VHD than does the Austin region, but the general pattern is the same.

Cost-benefit analyses are not always done for roadway projects, but when they are done, they typically are based on VHD. Each hour of delay is assigned a monetary cost. Therefore, inaccurate VHD estimates translate directly into inaccurate monetary benefits.
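
Because VHD drives the cost-benefit arithmetic, a small worked sketch (all numbers invented) shows how directly over-assignment inflates the monetized “benefit”: delay is VHT minus free-flow VHT, and each delay hour is multiplied by a dollar value.

```python
# Worked sketch of the VHD-to-dollars chain (all numbers invented).

VALUE_OF_TIME = 18.0  # $/vehicle-hour, a placeholder assumption

def vhd(volume, length_mi, speed_mph, free_flow_mph):
    """Vehicle hours of delay on one link: actual VHT minus free-flow VHT."""
    vht = volume * length_mi / speed_mph
    vht_free = volume * length_mi / free_flow_mph
    return vht - vht_free

# One 2-mile freeway link, peak hour. An STA model that over-assigns
# volume and under-predicts speed inflates delay on both counts.
realistic = vhd(volume=3600, length_mi=2, speed_mph=45, free_flow_mph=60)
sta_style = vhd(volume=4400, length_mi=2, speed_mph=20, free_flow_mph=60)

print(f"realistic: {realistic:.0f} veh-hr -> ${realistic * VALUE_OF_TIME:,.0f}")
print(f"STA-style: {sta_style:.0f} veh-hr -> ${sta_style * VALUE_OF_TIME:,.0f}")
```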

1.4. Study: dynamic traffic assignment

DTA simulates bottlenecks and delays behind bottlenecks realistically. Studies that have compared STA and DTA for the same case study have found significant differences in model performance measures. Boyles, Ukkusuri, Waller, and Kockelman (2006) concluded: “The results indicate that traditional static models have the potential to significantly underestimate network congestion levels in traffic networks, and the ability of DTA models to account for variable demand and traffic dynamics under a policy of congestion pricing can be critical.” Much of network congestion results from backups behind bottlenecks. STA allows traffic volumes to be over-assigned at the bottlenecks, so this congestion is obscured in STA models. In a study of choice between managed lanes (ML) and general-purpose lanes (GPL) by the Florida Department of Transportation, it was concluded that: “the difference in the travel time of using the GPL or the alternative ML, and the resulting number of travelers that decide to choose the ML, is considerably underestimated by static assignment” (Florida DOT, 2013). Managed lanes typically have tolls, at least for single-occupant vehicles. If the STA model allows over-capacity assignments, many travelers in the model will choose the over-capacity free route over the toll route. Limiting the free route to capacity properly forces more travelers into the managed lanes.

1.5. Portland Maine regional model case study

The Portland Maine region’s Metropolitan Planning Organization (MPO) is the Portland Area Comprehensive Transportation System (PACTS). I recently completed an update of the PACTS regional transportation model that includes both STA and DTA implementations (Marshall, 2018). This may be the first official MPO regional travel demand model in the U.S. with a DTA option. The PACTS model includes two parallel models that use the same base data. Each model is a standard four step model involving trip generation, trip distribution, mode choice, and assignment. The day is split into four periods (6–9 AM, 9 AM-3 PM, 3–6 PM, and 6 PM-6 AM), and the peak periods are further factored to AM and PM peak hours. Each model feeds back congested travel times to trip distribution. Therefore, the DTA trip tables are somewhat different than the STA trip tables. In the default mode, both DTA and STA are run with feedback for every scenario.

The DTA component of the model uses the open source package DTALite. The DTALite developers state: “DTALite, an open-source mesoscopic DTA simulation package, in conjunction with the Network eXplorer for Traffic Analysis (NeXTA) graphic user interface, has been developed to provide transportation planners, engineers, and researchers with a theoretically rigorous and computationally efficient traffic network modeling tool”. DTALite uses a queue-based approach (Zhou & Taylor, 2014).

While DTA sometimes is done with detailed inputs including traffic signal data and lengths of turning lanes, the Portland regional DTA application uses only standard regional model inputs: roadway segment length, number of lanes, free-flow speed, and capacity. In general, the capacity per lane per hour is set from a lookup table based on functional class. However, the capacity was adjusted in a few cases where there were special geometric circumstances including extremely limited weaving distances.

Relative to the Los Angeles and Austin regions, the Portland Maine region is relatively uncongested. The most congested case is a summer Friday afternoon, when weekend travelers and other tourists are traveling through the region to and from recreational areas to the north. This peak period is the focus of this case study.

The region has two major freeways: the Maine Turnpike (I-95) and I-295, which branches off to the east of I-95, passes through central Portland, and then continues north along the coast. Both freeways have closely-spaced interchanges in the central part of the region, with spacing of about 1 mile on the Turnpike, and 1 mile or less on I-295. Tolls on the Turnpike are relatively low compared to elsewhere in the U.S. (FHWA, 2015), especially for those using electronic toll collection (the majority). Both freeways are used extensively for local traffic. The updated model is being used by the Maine Turnpike Authority to evaluate widening alternatives, and by the Maine Department of Transportation to evaluate I-295 widening alternatives. There is a rich set of traffic counts for these facilities, especially for the Friday afternoon peak hour.

Fig. 4 compares base year (2015) model volumes for freeway mainline sections and ramps to counts/estimates for both the DTA and STA models. The base STA model (labeled “STA_med”) uses the STA capacity values and volume-delay curve from the previous version of the PACTS model. As shown in Fig. 4, these values lead to significant over-assignment of the ramps. Therefore, a “STA_high” case was implemented in which hourly capacity values are reduced by 25% and the same volume-delay curve is used. This drives ramp volumes down somewhat, but not enough to match the counts.

Fig. 4. 2015 summer weekday model volumes versus counts (% difference).

Fitting the exact pattern of on ramp and off ramp traffic volumes is challenging given the close spacing of ramps in the Portland region. In addition, the geographic locations of employment data are not very accurate. Given these limitations, the DTA model does reasonably well as shown in Fig. 5. Root mean squared error (RMSE) is a commonly-used metric for evaluating the fit between regional transportation models and traffic counts. Lower RMSE represents a better fit, and the fit is expected to be better for higher-volume roadways than for lower-volume roadways. In the PACTS region, the average mainline freeway volume is about 30,000 vehicles per day, and the average ramp volume is about 5000 vehicles per day (summer). Based on guidelines from Florida, Ohio, and Oregon, the maximum acceptable RMSE values for these average daily traffic volumes are about 25 for the freeway mainlines and 45 for ramps (Cambridge Systematics, 2010). Both the DTA and STA models meet the standards for freeway mainline volumes. Only the DTA model meets the standard for ramps.

Fig. 5. 2015 summer weekday model volumes versus counts (RMSE) – red bars show maximum acceptable values.
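
For readers unfamiliar with the validation statistic, percent RMSE is simple to compute (sketch below with invented volumes; the thresholds of roughly 25 for mainlines and 45 for ramps are those cited above):

```python
# Percent RMSE between modeled volumes and traffic counts.
import numpy as np

def pct_rmse(modeled, counted):
    """Root mean squared error as a percentage of the mean count."""
    modeled, counted = np.asarray(modeled, float), np.asarray(counted, float)
    return 100 * np.sqrt(np.mean((modeled - counted) ** 2)) / counted.mean()

# Invented daily ramp volumes: model vs. count
counts = [4800, 5600, 4200, 6100, 5300]
model = [6300, 5100, 5900, 7400, 4600]
print(f"{pct_rmse(model, counts):.1f}% RMSE")  # compare against the
# ~45% maximum acceptable value for ramps cited in the text
```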

Validation of model volumes to counts is almost always done for freeway mainline sections but seldom for ramps. Cambridge Systematics (2010) illustrates validation statistics for freeways, expressways, principal arterials, and minor arterials, but not for ramps. A recent thread on this topic on the Travel Model Improvement Program (TMIP) forum included only four posts (TMIP, 2018). Here are some excerpts:

We have had some debate here recently about whether ramp counts should be included in assignment validation checks … we have heard of an old “rule of thumb” that ramp counts should not be used

Many models tend to substantially over-estimate ramp volumes for a variety of reasons, even if the highway mainline volumes validate well (basically, too many trips assigned via the highway for short distances, and not enough for longer distances).

… use them for validation reviews but not present them in the regional validation performance summary results

These comments are consistent with the comparisons shown in Fig. 4 and Fig. 5, where the STA model over-assigns ramps, on average. STA’s poor performance in matching ramp volumes results from STA overestimating the number of short trips that jump on and off freeways. STA does not constrain ramp volumes properly. This problem is even worse in forecasts because the ramp volumes continue to grow beyond capacity.

Matching base year counts is required for model confidence, but it is not the reason for modeling. Modeling is done to evaluate future alternatives. In this case study, two 2040 alternatives are tested:

1) No Build – no changes in the road system

2) Freeway Widening – adding one lane in each direction to every freeway segment

Neither alternative is intended to be realistic. It is unlikely that no roadway expansion will occur between now and 2040. On the other hand, it also is unlikely that the entire freeway system will be widened. There is no such proposal on the table, and costs and environmental impacts would be prohibitive. These alternatives have been developed to give a clear test of freeway widening in the model.

The greater Portland region is growing slowly, with only a 15.3% increase in trips forecast between 2015 and 2040. As in many U.S. regions, growth has shifted away from decentralized development toward mixed-use redevelopment and infill. Therefore, both the DTA and STA models forecast VMT growth that is even less than the 15.3% increase in trips.

Fig. 6 shows the change in VMT per trip for the No Build and Build alternatives for the entire day and for the PM peak hour.

Fig. 6. Change in VMT per trip, 2015–2040.

As shown in Fig. 6, VMT per trip is forecast to decrease or stay the same in all cases. The daily decreases are smaller than the PM peak hour decreases, when traffic growth is moderated by congestion. The STA models forecast significant growth in VMT in the Build scenario relative to the No Build scenario because higher mainline freeway speeds in the STA model encourage longer trips. The DTA model constrains traffic at ramp bottlenecks, and ramp capacity has not been increased in the Build alternative. Therefore, there is little change in travel patterns in the DTA model between the alternatives.

Fig. 7 shows the change in VHT per trip for the No Build and Build alternatives for the entire day and for the PM peak hour.

Fig. 7. Change in VHT per trip, 2015–2040.

Fig. 7 shows average travel times decreasing in all cases. The DTA model shows only a small reduction in VHT per trip in the Build alternative relative to the No Build alternative because only widening the freeway mainline without addressing bottlenecks does little to change travel patterns in the model. The VHT reductions calculated for the Build alternative are much greater in the STA models. Small calculated changes in freeway speeds in the STA models add up to a large number at the regional level.

Fig. 8 shows the change in VHD per trip for the No Build and Build alternatives for the entire day and for the PM peak hour.

Fig. 8. Change in VHD per trip, 2015–2040.

As shown in Fig. 8, there are large differences in the VHD forecasts between DTA and STA, particularly in the No Build alternative. For the entire day, the DTA model forecasts a 5.9% increase in VHD per trip between 2015 and 2040, but the STA models forecast 35.5% and 44.7% increases. The same pattern is present in the PM peak hour. The DTA model forecasts a decrease of 1.0% per trip, and the STA models forecast increases of 23.2% and 28.9%. The STA model VHD increases per trip (23% to 45%) are greater than what might be expected given that the daily VMT is forecast to increase by only about 10% between 2015 and 2040.

2. Discussion

In 1992, Anthony Downs coined the term triple convergence to describe how peak period traffic congestion is inevitable because drivers compensate for capacity increases by (a) shifting routes, (b) shifting time of travel, and (c) shifting travel mode (Downs, 1992). After capacity expansion, the new equilibrium will be just as congested as the old equilibrium. Much of the peak period route shifting is from the freeway system to the street system.

In the DTA model, travelers have already shifted to the local street system during peak conditions in the base year model due to ramp bottlenecks. This has not happened in the base year STA models, and this explains why the ramps are over-assigned in the STA models. In STA, the shift to the street system does not occur until the calculated ramp/freeway system speed is equal to the street system speed. Consider a simplified case where the freeway system has a free-flow speed of 60 m.p.h. and the street system has a free-flow speed of 30 m.p.h. One mile of uncongested travel takes 1 min on the freeway system and 2 min on the street system. With triple convergence, the freeway system slows to the street speed and now also takes 2 min per mile. As discussed above, this generally does not happen in STA models until the system, and especially the ramps, have modeled traffic volumes exceeding capacity. When the freeway system takes 2 min per mile, 1 min of the 2 min is labeled delay. By over-assigning ramp and freeway traffic, the STA model exaggerates growth in VHD.

In the Build Alternative, the DTA model shows moderate reductions in VHD per trip: 3.3% for the daily case, and 4.7% in the PM peak hour. The reductions are much greater in the STA models: 22.7% and 24.7% in the daily case, and 23.8% and 25.7% in the PM peak hour. These large forecast reductions in STA are an artifact of the inflated No Build VHD forecasts with STA. The VHD numbers are much smaller than in the I-405 case presented above, but the pattern is the same. STA exaggerates traffic problems in the No Build alternative, and exaggerates benefits in the Build alternative. This general pattern is present in all STA alternatives modeling.

STA models under-account for the importance of ramps. In general, ramp volumes are low, so they do not represent a large share of the VMT, VHT, or VHD. In addition, ramps are short, and the VMT, VHT, and VHD calculations are all proportional to length. If a ramp is 1/10 of a mile long and modeled to operate at 5 m.p.h., the modeled time is only 1.2 min. This is not enough to deter modeled vehicles from using the ramp to access a higher-speed facility. This causes STA models to over-assign ramps (as illustrated in Fig. 4).

Freeway mainline roadways dominate the VMT, VHT, and VHD statistics relative to ramps because of the much larger mainline VMT relative to ramp VMT (in the Portland model, a factor of 15). Given the monotonic form of the volume-delay function, any reduction in mainline volume per lane per hour results in decreased VHT and VHD. Even small percentage reductions add up to big totals. If a 1% reduction in mainline travel time is coupled with a 5% increase in ramp travel time in the model, overall travel time is still lower, even though the increased modeled ramp travel time likely pushed multiple ramps over capacity.

The DTA model appropriately shifts traffic away from bottlenecks in all alternatives, and the VHD differences between the alternatives are relatively small. The DTA model is a better basis for planning.

2.1. Microsimulation does not address STA problems

In the I-405 study, STA outputs were used for analyses of traffic volumes and speeds, but it has become increasingly common to combine STA with microsimulation and to rely more heavily on the microsimulation outputs. This approach has intuitive appeal because microsimulation models individual vehicles and maintains capacity constraints. However, most, if not all, such combined STA-microsimulation efforts are fatally flawed because the inputs to the microsimulation model are taken from the STA model, and these inputs are inaccurate. (Microsimulation is too data- and computation-intensive to be applied on a regional basis as a regular practice.)

Sometimes the forecasts are linked at the roadway segment level. Other times, subarea trip tables are extracted from the STA model. Either way, the microsimulation model area generally is so limited that forecast trips from the STA model have only one path through the microsimulation network. If the forecast volumes exceed the capacity in the microsimulation model, the vehicles queue outside the model boundary.

As discussed above, these over-capacity volumes are impossible. Therefore, the excess queuing in the microsimulation model also is unrealistic. Often the STA forecasts are so unrealistically high that the microsimulation modelers end up making one of two types of unjustified assumptions:

1) Capacity upstream and downstream of the project area is assumed to be unconstrained – even though actual capacity constraints upstream and downstream would significantly limit flow through the project area.

2) The STA forecasts are arbitrarily reduced so that the Build alternative works while the No Build alternative does not.

The second type of assumption is less common but does occur. Here is an example from a study in the Charlottesville region:

“After the initial runs, it was observed that peak period flows in the PM peak exceed the capacity of the VISSIM model for US 29. In order to provide an improved comparison between alternatives, the PM peak period volumes were reduced 10 percent to reflect peak hour spreading.”

(Parsons Brinckerhoff, 2013)

Without the reduction, the Build alternative exceeded capacity. The 10% number was arbitrary, but was in a range where the Build alternative traffic volumes were below capacity, and the No Build alternative traffic volumes still exceeded capacity. This makes it appear that the 10% number may have been selected to support the Build alternative.

Unlike STA, regional DTA will produce subarea trip tables and roadway segment forecasts that are generally consistent with roadway capacity. Therefore, outputs from regional DTA models can be used effectively along with more detailed subarea microsimulation models.

2.2. STA modeling supports inefficient investment in freeway capacity

Freeway expansion projects generally are justified based on metrics from regional STA models, and these models show false benefits. These false benefit forecasts encourage inefficient investment in freeway capacity.

The term induced travel has been used to include the three triple convergence effects discussed above, plus shifts in destinations, and longer-term shifts in land use. A review of the induced travel research by Handy and Boarnet (Handy & Boarnet, 2014) concluded that induced travel is real and that the magnitude is sufficient to prevent capacity expansion from reducing congestion: “Thus, the best estimate for the long-run effect of highway capacity on VMT [vehicle miles traveled] is an elasticity close to 1.0, implying that in congested metropolitan areas, adding new capacity to the existing system of limited-access highways is unlikely to reduce congestion or associated GHG [greenhouse gas] in the long-run.”
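
A quick worked example shows what an elasticity near 1.0 implies (the numbers are illustrative, not from the study):

```python
# Long-run induced travel: with an elasticity near 1.0, capacity growth is
# matched roughly one-for-one by VMT growth, leaving congestion unchanged.

elasticity = 1.0        # long-run elasticity of VMT w.r.t. lane-miles
capacity_growth = 0.10  # a 10% expansion of lane-miles (assumed)

vmt_growth = (1 + capacity_growth) ** elasticity - 1
print(f"Expected long-run VMT growth: {vmt_growth:.1%}")  # 10.0%

# Volume per lane -- and hence congestion -- ends up about where it started.
```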

Almost everyone complains about congestion in their own region, but it is only recently that we have been able to accurately compare congestion across regions using data collected from cell phones and toll transponders. In a statistical analysis of congestion data across 74 U.S. regions compiled by INRIX, I found that the amount of freeway capacity in a region is unrelated to the amount of congestion (Marshall, 2016).

Despite strong evidence that expanding freeway capacity is ineffective in reducing peak period traffic congestion, transportation engineering practice throughout the United States is primarily focused on expanding freeway capacity to reduce congestion. This practice is justified by metrics taken from outdated and inaccurate regional transportation models that show false benefits.

3. Conclusions

3.1. Implications for managerial practice

Road construction proponents often highlight the high cost of congestion, both nationally and within states and regions. Generally, these cost estimates are based on VHD estimates from STA models. Such economic pitches are attractive to elected officials, the business community, and the public, and they are critical to gaining public support for projects that are expensive and have large negative impacts, including, in many cases, multiple years of construction delays.

It is recommended that stakeholders be more skeptical of these studies and demand better modeling and more accurate metrics. The amount of money spent on modeling is a very small percentage of project cost, and better modeling and better projects would make tax expenditures more efficient. Beyond this direct efficiency, avoiding inefficient freeway expansion would avoid large tax base losses through the destruction of property and adverse impacts on adjacent properties.

3.2. Contribution to scholarly knowledge

The limitations of STA already are well understood within the DTA research community. These limitations need to be more fully appreciated by STA practitioners and transportation agency leaders.

Most DTA research and application is focused at the subregional level. This subregional approach fails to realize the full promise of DTA as a replacement for STA. The Portland case study demonstrates that regional DTA is both practical and useful. It is recommended that more DTA research be focused on regional DTA, and that regional DTA be implemented wherever practical and used to replace STA in both regional planning and project studies.

Oregon regulators require public utilities to adopt plans for electric vehicles

By Chet Edelman

While electric vehicles make up only a small share of the current U.S. vehicle fleet, by 2040 they are expected to comprise approximately 55 percent of all new vehicle sales. Accommodating growing EV demand, however, will require major changes in how utilities supply electricity. At the moment, the electrical grid is simply not equipped to handle widespread EV adoption. In Oregon, regulators are attempting to address this problem. The state Public Utility Commission recently implemented a new rule requiring all public utilities to create a transportation electrification plan. By pushing public utilities to incorporate EVs into their long-term strategies, government officials hope not only to accelerate EV adoption but also to ensure current utility infrastructure can meet new demand.

Within each transportation electrification plan, public utilities must outline a number of actions, including investments in infrastructure, rate design, programs, and services. The Public Utility Commission aims to use these reports to gain a better understanding of what an EV-centric future in Oregon might look like. How much public charging will be needed? How should charging costs be allocated among customers? Currently, no state is in a position to answer these questions. While places such as Hawaii and California have explored transportation electrification plans, there remains a substantial coordination gap between utilities and regulators.

Given that EVs are likely to proliferate in the near future, Oregon’s new rule may set a precedent for other states to take a proactive approach to transportation electrification. Thinking about how to address transportation electrification now ensures utilities are not scrambling to meet demand in the future.

Chet Edelman is a Project Assistant at SSTI.

New study looks at the system-level factors that impact BRT ridership

By Brian Lutenegger

A new study by researchers at Hanyang University in Korea and the Georgia Institute of Technology examines the factors that affect bus rapid transit (BRT) ridership at the system level. The researchers analyzed 111 BRT systems around the world. Service supply levels—including fleet size and the number of BRT corridors within a city or region that could be utilized to complete a trip—are important determinants of ridership. Systems with multiple connected lines could increase ridership by 41 percent. Further, adding both integrated fare collection and real-time information systems can together boost ridership by 47 percent.

The study also pointed to the need to improve travel-time reliability and speed to improve ridership. Infrastructure such as passing lanes and median bus lanes can improve this metric.

The Institute for Transportation & Development Policy (ITDP) has also attempted to identify best practices in BRT design. Its BRT Standard was initially released in 2012 and updated in 2014 and 2016. It reviews selected metrics for BRT corridors around the world using a scorecard, ranking corridors as Bronze, Silver, or Gold.

ITDP’s BRT Standard takes into account some of the lessons of this newer study. In scoring BRT systems, it awards points to systems that have multiple routes on the same corridor and multiple intersecting corridors that create a network. Further, BRT systems with off-board fare collection are ranked higher, with barrier controls and proof of payment worth more points than onboard fare verification. Finally, ITDP deducts points from systems that do not meet a minimum average commercial speed of 12 mph.

The study by researchers in Korea and the U.S. points once again to critical aspects of BRT that planners and transit operators need to consider in order for their systems to be successful. Attracting riders—particularly new ones who might otherwise drive alone—is an important goal of BRT and other transit modes, and the factors cited in each of these documents will improve the chances of attracting these key customers.

Sidewalk evaluation app Project Sidewalk launches in Seattle

By Michael Brenneis

Project Sidewalk, newly launched in Seattle, is crowdsourcing the evaluation of sidewalks and ramps with the intent to help DOTs locate and prioritize needed repairs and improvements, educate the public, and collect data to train AI. Poorly planned sidewalks and ramps, as well as those in disrepair or with other impediments, can dramatically reduce the mobility of people with disabilities and decrease walking accessibility.

After a brief tutorial, and using Google Street View, users systematically click around the city, identifying and evaluating curb ramps, sidewalk obstacles, and uneven sidewalk surfaces. During the Washington, D.C., pilot, users placed labels on “more than 205,000 good and bad pieces of sidewalk over 18 months,” as reported by Crosscut. The Seattle and Newberg, OR, versions of Project Sidewalk got underway in April.

The gathered data could eventually be incorporated into interactive routing software such as Access Map, which is aimed primarily at helping sidewalk users maximize their mobility. Project Sidewalk hopes to make its data available to city maintenance and planning agencies to improve their operations. They also intend to build a dataset robust enough to use to “train machine learning algorithms to automatically find accessibility issues” in street view images. Project Sidewalk may have the added benefit of educating citizens about the impediments faced by those with mobility issues and engaging citizens in the cause of improving the infrastructure of their cities.

Crowdsourced human intelligence tasks, such as sidewalk evaluation, can be vulnerable to malicious intent. People can make mistakes, and the data can be compromised or of poor quality. As with any form of data collection by humans, various types of bias can be introduced. To combat this, Project Sidewalk includes a validation component in which users can examine other users’ work. The developers have also conducted field reconnaissance, finding that users’ evaluations are about 72 percent correct. An interesting follow-up would be to see whether this number could be improved by increasing the amount of training users receive or by introducing other safeguards against misevaluation.
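
For readers unfamiliar with how such validation typically works, here is a generic sketch of a majority-vote check of the kind crowdsourcing systems often use; it is illustrative only, not Project Sidewalk’s actual algorithm:

```python
# Generic majority-vote validation for crowdsourced labels (illustrative;
# not Project Sidewalk's actual method). A label is accepted only when
# enough independent validators agree on it.

from collections import Counter

def validate_label(votes, min_votes=3, agreement=0.7):
    """Return the winning label if validators agree strongly enough."""
    if len(votes) < min_votes:
        return None  # not enough validations yet to decide
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= agreement else None

# Three users reviewed the same sidewalk feature:
print(validate_label(["curb_ramp", "curb_ramp", "obstacle"]))  # None (67% < 70%)
print(validate_label(["curb_ramp"] * 3))                       # curb_ramp
```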

Transportation agencies can be slow to adopt or adapt these cutting-edge technologies for their own uses, continuing to rely largely on field observation and physical audits to assess the condition of infrastructure. Concerns over data quality, completeness, and accuracy seem to be paramount. But crowdsourced data could be used in combination with primary city data, such as maintenance or asset condition records, to prioritize areas in need of further physical inspection.

It’s an exciting time for crowdsourcing in the transportation field. Apps such as Ride Spot, Carbin, Strava, and Placemeter, among others, are collecting crowdsourced data from user observations or smartphone sensors that developers can leverage in imaginative ways. Analysts have access to many crowdsourced data sets (Open Street Map, for example) that are very useful for research purposes. From routing cyclists to reduce traffic stress, to routing cars for fuel efficiency, developers are incorporating crowdsourced data to conserve resources and improve the mobility and experience of bicyclists and sidewalk users of all abilities.

Michael Brenneis is an Associate Researcher at SSTI.

Safety and speed management: Speeding into a crash?

By Saumya Jain

According to a recent study by the Insurance Institute for Highway Safety (IIHS), 37,000 additional people have died in the United States over the past 25 years as a result of increased speed limits. Canada, however, is taking a very different approach to speed, as detailed in the April issue of ITE Journal, which is dedicated to safety through speed management.

The issue focuses on Vision Zero and speed management and describes Canada’s Safe System Approach to Road Safety, which has been very successful in reducing crash rates across the country. The approach involves implementing evidence-based measures at four levels: safe drivers, safe speeds, safe roads, and safe vehicles, with safe speeds being the most critical element. The topic is timely, especially as a number of U.S. states are considering raising speed limits to match 85th percentile speeds.

In 2014, British Columbia increased speed limits on more than 800 miles of rural highway, in some cases to 75 mph. A review of post-implementation speed and safety data for these highways showed that serious-injury crashes increased by 11 percent. The review led to the reversal of more than half of those speed limit increases and to the implementation of a variety of safety improvements.

Meanwhile, in the U.S., IIHS looked at how increasing speeds have led to increased fatalities. Based on past crash data, IIHS has established that a 5 mph increase can raise highway and freeway fatalities by almost eight percent and fatalities on other roads by three percent. An increase in crash impact speed from 20 mph to 30 mph raises a pedestrian’s fatality risk five- to eight-fold. With such a sensitive relationship between speed and crash fatality risk—especially at the slower speeds typical of local streets—speed management and safety decisions need to be made very carefully.

The relationship between high speeds and serious-injury or fatal crashes has been established repeatedly in recent years, but the U.S. and Canada are taking different approaches to speed-related safety. As the author of the ITE article on the Canadian approach writes, “It has been noted with the 85th percentile, drivers should not set speed limits, but speed limits should be set based on the biomechanical tolerance of blunt-force trauma.”

Saumya Jain is a Senior Associate at SSTI. 

Cambridge enshrines protected bike lanes into law