
UCLA

Head and Heart


By Jack Feuer

Published Mar 14, 2018 2:45 PM


Professor Jeff Burke is melding technology with storytelling, artificial intelligence with live theater.


On his current project, Burke collaborates closely with graduate students across a variety of disciplines. Photos by Ryan Schude.

Marry storytelling with science and what do you get? For Jeff Burke ’99, M.S. ’01, M.F.A. ’10, you get a groundbreaking career exploring new possibilities in the performing arts. Burke, associate dean of technology and innovation and professor-in-residence in the theater department at UCLA’s School of Theater, Film and Television (TFT), founded the Center for Research in Engineering, Media and Performance (REMAP), a collaboration between TFT and UCLA’s Henry Samueli School of Engineering and Applied Science. With the support of TFT Dean Teri Schwartz ’71, Burke has embarked on a long list of fascinating projects. In 2014, he received a three-year Google Focused Award on the “Future of Storytelling.” Currently, he and his team are developing a live theater event during which an actress will interact with an artificial intelligence machine learning system.

Where does your fascination with the potential of tech-enabled storytelling come from?

I grew up in Orange County, and my parents have science backgrounds, so I always worked with computers and did software programming. The attraction to the arts came from the collaborative experience of making theater and film as a kid. The first thing I ever directed was a high school production of Tom Stoppard’s play The Real Inspector Hound. Those experiences were special, because you work with other people and ask questions about the world. That always appealed to me and didn’t seem out of sync with scientific inquiry. So when I was looking at universities, I wanted to go to a place that had excellence in both areas.

At UCLA, I studied engineering and did theater in the background and, soon after, media installation and film. Through [REMAP co-director] Fabian Wagmister and other UCLA faculty, I had the opportunity to see how technology was influencing art, and the role that media plays in the live experience. It seemed to me that the artist and storytelling perspectives are critical to humanity, and that they have a role in how we envision and push forward technology and the more basic sciences as well. So the opportunity to do both just grew organically.

Tell us about your Google Award.

We received a Google Focused Award for a proposal to look at what it means to develop code in stories, how code can embody a story. That’s not uncommon in game design, but not as common in live performances or cinematic media. We proposed three projects over four years. The first was Los Atlantis, a media archive of a fictional city encountered by a set of travelers. It was a multi-performance installation that allowed cast, crew, designers and even the audience to document their experience, to explore the live experience of a media archive.

The second project went in a different direction. We were interested in exploring a challenge we didn’t get to in the first project: How do you use the data from a media archive to create a more conventional film experience? It involved the creation of a global song interpreted by directors in Los Angeles and Buenos Aires, and a template created by an editor to develop an algorithm that intercuts the story in different ways each time it screens.


At the table (left to right) are Sam Amin (computer scientist), Matthew Johns (designer), Jeanette Godoy (actor), Hana S. Kim (designer) and Ryan Miller (actor). The woman standing at right is actor Brittany Carr.

And the third project — the one you’re currently working on — combines machine learning with live performance.

As we go, we're trying to simplify the process. Those were ambitious productions. In this final one, we're focusing on character and the relationship between character and code. What we're proposing to do is to create what looks like a conventional theatrical play with a character who has suffered a brain injury and lost her ability to retain short-term memory. She receives an AI implant to restore that ability and help her get through the day. The challenge is what is going to happen on stage: What is she going to do? Our objective is to apply contemporary machine learning techniques through the combination of a real actress and a system trying to learn from that actress's experience.

It’s a two-year development process that includes the creation of a script and software. Think of it as part of dramatic literature, independent of a production or design approach. One track is what does the software and script look like, and a parallel track is figuring out the first production of the piece.

UCLA and TFT are in the forefront of efforts to develop a more efficient, robust Internet for the 21st century. You are the co-principal investigator and application team leader for that project, called Named Data Networking. What is that about?

Besides live performance, there are many other interesting areas where people are trying to understand the relationship between new technology and creative expression and cultural identity. That type of work exists across many domains. I see those projects as similar inquiries to what I do, but my participation in the Named Data Networking project is less an investigation into media's impact on live performance or film or even online content. Rather, it is an attempt to reshape our lives. The forces of emerging technology will affect how we tell stories for many years.
