I, robot

William Gibson wrote that “The future is already here – it’s just not evenly distributed.” And even in museums, the future keeps appearing in surprising ways, like the time this past fall when I got to be a robot at the Museum Computer Network conference.


The author peers out into the world as a robot. Courtesy photo

I’ve long been fascinated by robots. Since I was a little kid, I’ve been both excited and a little creeped out by the idea of machines that could walk and talk and move among us. And I’m apparently not alone in this. Western pop culture is littered with dire predictions of the coming robot apocalypse. And yet we keep making them, and they keep getting more and more interesting.

Museums are one place where robot manufacturers and promoters have long looked to test their wares. The Tokyo science and technology museum Miraikan has experimented with robot tour guides for years, and recently mounted an entire exhibition on robots.


Photo from the recent exhibition Android: What is Human? at Japan’s Miraikan. According to the exhibition description, it aimed to provide visitors with the opportunity to communicate with and operate android robots, while shedding light on the attributes of humans in contrast with those of robots.

Various museums in the U.S. have tried out robotic docents over the years; none have caught on yet. But a new trend in museum robotics has appeared, and it might actually have more of a chance than its predecessors: letting people drive what are technically called “telepresence robots” through museums. The poster child for this is After Dark, the 2014 IK Prize-winning piece at the Tate.


Produced by the UK group The Workers, After Dark featured a number of robotic platforms equipped with cameras and lights. (Picture a Roomba with a monitor mounted on a five-foot-high pole, plus headlights, and you get the idea.) People visiting the After Dark website signed up to drive one of these robots around the Tate after closing time, when all the lights were off. This was great fun, letting people essentially sneak around the Tate with a flashlight, all from the comfort of their own homes. The robots were specifically designed not to run into artwork, with bumpers that stopped them if you drove into something. The project generated quite a bit of interest.

One of my favorite things about After Dark was that it found a clever way around the phenomenon of “the uncanny valley” that plagues a lot of robotics research. The term refers to a graph of human affinity for robots plotted against how humanlike they appear. Robots that look nothing like people tend to be viewed neutrally, and the more lifelike they get, the more we tend to like them. But when they get too close to human, our friendly perception drops dramatically. That feeling of “something’s not quite right” can be vividly seen in computer animation: figures appear waxy or leaden, or just don’t move quite right, and we get turned off.


Many reference this 2001 Steven Spielberg movie when talking about a future that includes robots in our daily lives.

Telepresence robots, where a real human controls the robot and is often displayed on its screen, suffer from this problem. The combination of human presence and robotic embodiment can be really unsettling. At the American Association of Museums conference last year, I watched a robot vendor trying to engage passersby in the exhibit hall. As the robot advanced on people, the cheerful face on its screen saying “Hi! Wanna hear about our new robot?!”, they would invariably back away.


Photo from Futurismic.com

As the robot closed to conversational distance, the human would keep backing up, and you’d see this awkward dance play out, usually until the human fled. After Dark skirted the problem by only sending the robots out when no humans were around. Simple and effective: let robots go places you can’t otherwise go. Like a museum at night. Or a secure collections area. Or maybe a historic house that’s not usually open…

So, what’s it like being a robot? Turns out, much more interesting than I first thought. I sit down at a computer and, using a simple keyboard interface, am soon gliding around the conference. The video and sound quality, much better than I expected, allows for intelligible conversations with people. But I did run into the phenomenon of humans backing away. I’d want my conversational partner’s face to fill my screen, so I’d drive up close to them, causing them to retreat, which would make me want to advance again.

Probably the biggest robot epiphany happened later, when I was no longer the robot. In the Exhibit Hall, I heard a familiar voice, belonging to my colleague Don Undeen. I turned around to see some friends and Don talking. Nothing unusual there, except Don wasn’t at the conference in Dallas. Don was actually in France and was using one of the robots to hang out with us.


A group in Dallas says hello to their colleague in France, who shows up in the form of a robot. Courtesy photo

Watching Robot Don wasn’t creepy at all; it was just Don, albeit Don unexpectedly sneaking up on us in the form of technology.


  1. gail spilsbury says:

    Fascinating post!

  2. Don says:

It was great fun, and empowering, for me to be able to drop in on MCN2014 from France like that. Personally, I found that when talking to people I already knew, interaction was natural. I even got hugs!

    Introducing myself to new people was more awkward, as people had to get past the technology to see me.

    And worse, I went on a bit of walkabout and crashed another conference accidentally. The folks in charge of that event were NOT pleased, and much ruder about the mixup than they would have been in person.

Mostly I’m excited about the access possibilities for this technology, which we’re looking into at the Met.
