Autonomous vehicles might remain an expensive novelty,
or they might utterly transform society. Either way, they have
much to teach us about how to look at the cities we live in.
On a brisk afternoon in October, an oddly equipped Honda CR-V inched
through London traffic. At the wheel was Matthew Shaw, a 32-year-old
architectural designer; with him was a fellow designer, William
Trossell, 30, and a small team of laser-scanner operators. All were
skilled in their technical fields, but their goal was art. What they
hoped to scan was not just the shape of the city streets but the inner
life of the autonomous cars that may soon come to dominate them.
Shaw and Trossell have been fascinated by 3-D scanning since they met in
architecture school. There, they investigated how laser scanners
perceive the built environment, including the biases, blind spots and
peculiar insights any such technology must encompass. In 2010, they
started ScanLAB Projects, a London design studio, to widen that
investigation. As they already knew, laser-scanning equipment could
easily be fooled when used in inappropriate conditions, or when the
gear was simply misused. Solid objects, whether architectural ruins,
geological forms or commercial buildings in the heart of London, are
particularly amenable to scanning. Fog banks, mist and afternoon
drizzle, not so much. Yet Trossell and Shaw’s early work was devoted
precisely to this: pushing the technology into unexpected realms where
things, by definition, could not go as planned. Setting up their laser
scanner deep in the woods, they captured low rolling clouds of mist as
digital blurs haunting the landscape; moving ice floes scanned from a
ship north of the Arctic Circle took shape in their hard drives as
overlapping labyrinths on the verge of illegibility, as if the horizon
of the world itself had begun to buckle. These first projects,
commissioned by organizations like the BBC and Greenpeace, have since
blossomed into a new approach: mapping London through the robot eyes of a
self-driving car.
One of the most significant uses of 3-D scanning in the years to come will
not be by humans at all but by autonomous vehicles. Cars are already
learning to drive themselves, by way of scanner-assisted braking,
pedestrian-detection sensors, parallel-parking support,
lane-departure warnings and other complex driver-assistance systems,
and full autonomy is on the horizon. Google’s self-driving cars have
logged more than a million miles on public roads; Elon Musk of Tesla
says he’ll probably have a driverless passenger car by 2018; and the
Institute of Electrical and Electronics Engineers says autonomous
vehicles ‘‘will account for up to 75 percent of cars on the road by the
year 2040.’’ Driver-controlled cars remade the world in the last
century, and there is good reason to expect that driverless cars will
remake it again in the century to come: Gridlock could become extinct as
cars steer themselves along a cooperatively evolving lacework of
alternative routes, like information traversing the Internet. With
competing robot cars just a smartphone tap away, the need for street
parking could evaporate, freeing up as much as a third of the entire
surface area of some major American cities. And as distracted drivers
are replaced by unblinking machines, roads could become safer for
everyone.
[Video caption: To see how driverless cars might perceive — and misperceive — the world, ScanLAB Projects drove a 3-D laser scanner through the streets of London.]
But all of that depends on cars being able to navigate the built
environment. The cars now being tested by Google, BMW, Ford and others
all see by way of a particular kind of scanning system called lidar (a
portmanteau of ‘‘light’’ and ‘‘radar’’). A lidar scanner sends out tiny
bursts of illumination invisible to the human eye, almost a million
every second, that bounce off every building, object and person in the
area. This undetectable machine-flicker is ‘‘capturing’’ extremely
detailed, millimeter-scale measurements of the surrounding environment,
far more accurate than anything achievable by the human eye. Capturing
resembles photography, but it operates volumetrically, producing a
complete three-dimensional model of a scene. The extreme accuracy of
lidar lends it an air of infallible objectivity; a clean scan of a
stationary structure can be so precise that nonprofit organizations like
CyArk have been using lidar as a tool for archaeological preservation
in conflict zones, hoping to capture at-risk sites of historical
significance before they are destroyed.
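To make the mechanics concrete, here is a minimal Python sketch of how a single lidar return becomes a point in a three-dimensional model. Everything in it, from the timing figure to the function name, is an illustrative assumption rather than any scanner’s actual interface:

import math

# Speed of light in meters per second, used for time-of-flight ranging.
SPEED_OF_LIGHT = 299_792_458.0

def return_to_point(round_trip_s, azimuth_deg, elevation_deg):
    """Convert one lidar return into an (x, y, z) point.

    round_trip_s: time between emitting the pulse and detecting its echo.
    azimuth_deg / elevation_deg: direction the beam was pointing.
    """
    # Distance is half the round trip: the pulse travels out and back.
    distance = SPEED_OF_LIGHT * round_trip_s / 2.0
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    # Spherical-to-Cartesian conversion around the scanner's origin.
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return (x, y, z)

# A return arriving 66.7 nanoseconds after the pulse left the scanner
# corresponds to a surface roughly 10 meters away.
print(return_to_point(66.7e-9, azimuth_deg=45.0, elevation_deg=2.0))

Repeated almost a million times a second, that conversion is what turns pulses of light into the dense point clouds described above.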
Lidar, however, has its own flaws and vulnerabilities. It can be thrown off by
reflective surfaces or inclement weather, by mirrored glass or the
raindrops of a morning thunderstorm. As the first wave of autonomous
vehicles emerges, engineers are struggling with the complex, even
absurd, circumstances that constitute everyday street life. Consider the
cyclist in Austin, Tex., who found himself caught in a bizarre standoff
with one of Google’s self-driving cars. Having arrived at a four-way
stop just seconds after the car, the cyclist ceded his right of way.
Rather than coming to a complete halt, however, he performed a track
stand, inching back and forth without putting his feet on the ground.
Paralyzed with indecision, the car mirrored the cyclist’s own movements —
jerking forward and stopping, jerking forward and stopping — unsure if
the cyclist was about to enter the intersection. As the cyclist later
wrote in an online forum, ‘‘two guys inside were laughing and punching
stuff into a laptop, I guess trying to modify some code to ‘teach’ the
car something about how to deal with the situation.’’
Illah Nourbakhsh, a professor of robotics at Carnegie Mellon University and
author of the book ‘‘Robot Futures,’’ uses the metaphor of the perfect
storm to describe an event so strange that no amount of programming or
image-recognition technology can be expected to understand it. Imagine
someone wearing a T-shirt with a stop sign printed on it, he told me.
‘‘If they’re outside walking, and the sun is at just the right glare
level, and there’s a mirrored truck stopped next to you, and the sun
bounces off that truck and hits the guy so that you can’t see his face
anymore — well, now your car just sees a stop sign. The chances of all
that happening are diminishingly small — it’s very, very unlikely — but
the problem is we will have millions of these cars. The very unlikely
will happen all the time.’’
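Nourbakhsh’s arithmetic is easy to check with invented numbers. Assume a ‘‘perfect storm’’ that occurs once in a billion miles, and a fleet of 10 million cars each driving 10,000 miles a year; the nearly impossible then happens a hundred times annually. A rough sketch, with every figure assumed for illustration:

# Back-of-envelope arithmetic for "the very unlikely will happen all the
# time." All figures below are illustrative assumptions, not measurements.
events_per_mile = 1e-9           # a one-in-a-billion-miles "perfect storm"
cars = 10_000_000                # a hypothetical fleet of autonomous cars
miles_per_car_per_year = 10_000  # assumed annual mileage per vehicle

fleet_miles = cars * miles_per_car_per_year
expected_events = fleet_miles * events_per_mile
print(f"{expected_events:.0f} expected events per year")  # -> 100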
The sensory limitations of these vehicles must be accounted for, Nourbakhsh
explained, especially in an urban world filled with complex
architectural forms, reflective surfaces, unpredictable weather and
temporary construction sites. This means that cities may have to be
redesigned, or may simply mutate over time, to accommodate a car’s
peculiar way of experiencing the built environment. The flip side of
this example is that, in these brief moments of misinterpretation, a
different version of the urban world exists: a parallel landscape seen
only by machine-sensing technology in which objects and signs invisible
to human beings nevertheless have real effects in the operation of the
city. If we can learn from human misperception, perhaps we can also
learn something from the delusions and hallucinations of sensing
machines. But what?
All of the glares, reflections and misunderstood signs that Nourbakhsh
warned about are exactly what ScanLAB now seeks to capture. Their goal,
Shaw said, is to explore ‘‘the peripheral vision of driverless
vehicles,’’ or what he calls ‘‘the sideline stuff,’’ the overlooked
edges of the city that autonomous cars and their unblinking scanners
will ‘‘perpetually, accidentally see.’’ By deliberately disabling
certain aspects of their scanner’s sensors, ScanLAB discovered that they
could tweak the equipment into revealing its overlooked artistic
potential. While a self-driving car would normally use corrective
algorithms to account for things like long periods stuck in traffic,
Trossell and Shaw instead let those flaws accumulate. Moments of
inadvertent information density become part of the resulting aesthetic.
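One way to picture the effect: if successive sweeps are dumped into a single map without subtracting the car’s own motion, a stationary object smears into a long duplicate of itself. A toy Python model follows, with an invented one-dimensional scenario standing in for ScanLAB’s actual processing:

# Toy model of the "time-stretched" artifacts ScanLAB exploits: if each
# sweep is accumulated in one shared frame without compensating for the
# scanner's own motion, a stationary object smears into a long duplicate.
# The scenario and numbers are invented for illustration.

def accumulate_sweeps(object_x, car_positions, correct_motion):
    """Return the x-coordinates at which one fixed object lands in the map.

    object_x: true position of a stationary object (e.g. a parked bus).
    car_positions: where the car actually was at each sweep.
    correct_motion: whether ego-motion is subtracted before accumulation.
    """
    points = []
    for car_x in car_positions:
        measured = object_x - car_x          # range seen from the car
        if correct_motion:
            points.append(car_x + measured)  # re-anchored: always object_x
        else:
            points.append(measured)          # raw ranges pile up, smeared
    return points

car_track = [0.0, 2.0, 4.0, 6.0]  # the car creeps forward between sweeps
print(accumulate_sweeps(30.0, car_track, correct_motion=True))   # four copies of 30.0
print(accumulate_sweeps(30.0, car_track, correct_motion=False))  # 30, 28, 26, 24

Production systems fuse GPS and inertial measurements to perform roughly this subtraction; ScanLAB, in effect, lets the uncorrected version stand.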
The London that their work reveals is a landscape of aging monuments and
ornate buildings, but also one haunted by duplications and digital
ghosts. The city’s double-decker buses, scanned over and over again,
become time-stretched into featureless mega-structures blocking whole
streets at a time. Other buildings seem to repeat and stutter, a riot of
Houses of Parliament jostling shoulder to shoulder with themselves in
the distance. Workers setting out for a lunchtime stroll become spectral
silhouettes popping up as aberrations on the edge of the image. Glass
towers unravel into the sky like smoke. Trossell calls these ‘‘mad
machine hallucinations,’’ as if he and Shaw had woken up some sort of
Frankenstein’s monster asleep inside the automotive industry’s most
advanced imaging technology.
ScanLAB’s project suggests that humans are not the only things now sensing and
experiencing the modern landscape — that something else is here, with an
altogether different, and fundamentally inhuman, perspective on the
built environment. If the conceptual premise of the Romantic movement
can somewhat hastily be described as the experience and documentation of
extreme landscapes — as an art of remote mountain peaks, abyssal river
valleys and vast tracts of uninhabited land — then ScanLAB is suggesting
that a new kind of Romanticism is emerging through the sensing packages
of autonomous machines. While artists once traveled great distances to
see sights of sublimity and grandeur, equally wondrous and unsettling
scenes can now be found within the means of travel itself. As we peer
into the algorithmic dreams of these vehicles, we are perhaps also being
given the first glimpse of what’s to come when they awake.