AI-Based Zoning Posed As The Answer For Smart And ‘Equitable’ Cities
Zoning codes are a century old, and the lifeblood of all major U.S. cities (except arguably Houston), determining what can be built where and what activities can take place in a neighborhood. Yet as their complexity has grown, academics are increasingly exploring whether these rule-based systems for rationalizing urban space could be replaced with dynamic systems built on blockchains, machine learning algorithms, and spatial data, potentially revolutionizing urban planning and development for the next hundred years.
These visions of the future were inspired by my recent chats with Kent Larson and John Clippinger, a dynamic urban-thinking duo who have made improving cities and urban governance their current career focus. Larson is a principal research scientist at the MIT Media Lab, where he directs the City Science Group, and Clippinger was formerly a research scientist with the Human Dynamics Group at the MIT Media Lab and is now a cofounder of Swytch.io, which is developing a utility token called Swytch.
One of the toughest challenges facing major U.S. cities is the price of housing, which has skyrocketed over the past few decades, placing incredible strain on the budgets of young and old, singles and families alike. The average one-bedroom apartment rents for $3,400 a month in San Francisco and $3,350 in New York City, making these meccas of innovation increasingly out of reach of even well-funded startup founders, let alone artists or educators.
Housing alone is not enough to satisfy the modern knowledge-economy worker, though. There is an expectation that any neighborhood will offer a laundry list of amenities, from good, affordable restaurants, open spaces, and cultural institutions to critical human services like grocery stores, dry cleaners, and hair salons.
Today, a zoning board would simply try to demand that various developments include the necessary amenities as part of the permitting process, a blunt approach that has produced food deserts and the curious soullessness of some urban neighborhoods. In Larson and Clippinger’s world, though, rule-based models would be thrown out in favor of “dynamic, self-regulating systems” built around what might agnostically be called tokens.
Every neighborhood is made up of different types of people with different life goals. “We can model these different scenarios of who we want working here, and what kind of amenities we want,” Larson explained, “then that can be delineated mathematically as algorithms, and the incentives can be dynamic based on real-time data feeds.”
The idea is first to take datasets like mobility times, unit economics, amenity scores, and health outcomes, among many others, and feed them into a machine learning model that tries to maximize local residents’ happiness. Tokens would then act as a currency, signaling to the market what should be added to the community, or removed from it, to improve that happiness score.
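To make that loop concrete, here is a minimal sketch of how such a scoring-and-pricing mechanism might work. Everything in it is hypothetical: the metrics, the hand-set weights standing in for a trained model, and the token-pricing rule are illustrative assumptions, not Larson and Clippinger’s actual system.

```python
# A toy sketch of token-priced zoning, not the MIT system itself.
# A neighborhood is reduced to a few metrics, a stand-in "model" maps
# them to a happiness score, and proposed changes are priced in tokens
# by their marginal effect on that score.

from dataclasses import dataclass


@dataclass
class NeighborhoodState:
    mobility_minutes: float   # average commute/errand time
    amenity_score: float      # 0-1 composite: restaurants, groceries, etc.
    open_space_ratio: float   # share of land that is park or open space
    median_rent: float        # dollars per month for a one-bedroom

# Stand-in for a trained ML model: hand-picked linear weights.
WEIGHTS = {
    "mobility_minutes": -0.02,
    "amenity_score": 1.5,
    "open_space_ratio": 1.0,
    "median_rent": -0.0004,
}


def happiness(state: NeighborhoodState) -> float:
    """Scalar 'resident happiness' score under the toy linear model."""
    return sum(WEIGHTS[k] * getattr(state, k) for k in WEIGHTS)


def token_incentive(state: NeighborhoodState, feature: str, delta: float,
                    tokens_per_point: float = 100.0) -> float:
    """Tokens earned (positive) or owed (negative) for nudging one metric.

    Prices a proposed change by its marginal effect on the happiness
    score: a project that lowers the score pays into the pool; one that
    raises it draws a subsidy out.
    """
    before = happiness(state)
    after = happiness(NeighborhoodState(
        **{**state.__dict__, feature: getattr(state, feature) + delta}))
    return (after - before) * tokens_per_point


if __name__ == "__main__":
    now = NeighborhoodState(mobility_minutes=35, amenity_score=0.4,
                            open_space_ratio=0.08, median_rent=3400)
    # A luxury tower with no ground-floor amenities: rent up, all else flat.
    print(token_incentive(now, "median_rent", +200))        # negative -> pays
    # Converting a lot into a pocket park: open space up.
    print(token_incentive(now, "open_space_ratio", +0.02))  # positive -> earns
```

In a real deployment the linear weights would be replaced by a model trained on the live data feeds Larson describes, but the pricing logic, charging or subsidizing a development by its predicted effect on the neighborhood, stays the same shape.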
A luxury apartment developer might have to pay tokens, particularly if the building didn’t offer any critical amenities, while another developer who converts their property to open space might be completely subsidized by tokens that had been previously paid into the system. “You don’t have to collapse the signals into a single price mechanism,” Clippinger said. Instead, with “feedback loops, you know that there are dynamic ranges you are trying to keep.”
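Clippinger’s “dynamic ranges” point can be illustrated the same way, under the same caveats: instead of collapsing everything into one price, each metric gets its own target band, and the token multiplier ramps as the metric drifts out of band. The ranges and the multiplier rule below are invented for the example.

```python
# Hypothetical illustration of band-keeping instead of a single price:
# each metric carries its own acceptable range, and token pricing for
# changes to that metric scales with how far it sits outside the band.

TARGET_BANDS = {  # invented acceptable ranges per metric
    "amenity_score": (0.5, 0.9),
    "open_space_ratio": (0.10, 0.25),
}


def band_multiplier(metric: str, value: float) -> float:
    """Scale factor applied to token prices for a given metric."""
    lo, hi = TARGET_BANDS[metric]
    if value < lo:   # scarce: reward additions (and tax removals) more
        return 1.0 + (lo - value) / lo
    if value > hi:   # saturated: damp incentives for further additions
        return max(0.25, 1.0 - (value - hi) / hi)
    return 1.0       # within band: neutral pricing


print(band_multiplier("open_space_ratio", 0.08))  # below band -> > 1.0
```
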
Compare that systems-based approach to the complexity we have today…