Overview:
Woo Who? is a VR narrative experience in which the guest competes with another suitor to impress a woman on a balcony. A frog statue comes to life and guides the guest in wooing her by playing the violin, juggling, and singing.
Tools: Maya, Unity, Photoshop, Illustrator
Contribution: 3D Modeling, 3D Animation
Platform: HTC Vive
Project Duration: 2 weeks, 2018
Team: Ashely Liang, Saumya Lahera, Yui Wei Tan, Jue Wang, Alexander Woskob
Demo Video
Key Shots
Designing a Story-Driven VR Experience in 2 Weeks
In a team of 5, we prototyped and developed this 3-minute narrative VR experience. We built it for the HTC Vive, so guests can walk around the environment thanks to room-scale tracking and interact with the characters and props using the two controllers. Our main goals during development were:
Use sound and visual cues to direct users’ attention to key plot points
Create intuitive interactions that “feel right” in this world without giving users any instruction
Tell a compelling story using sound, animation, and interaction
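To illustrate the kind of controller-driven prop interaction described above, here is a minimal Unity (C#) sketch of how bowing the violin could be detected: a trigger volume on the strings listens for the bow prop held in the guest's hand. The component, tag, and field names are illustrative assumptions, not our actual project code.

```csharp
using UnityEngine;

// Hypothetical sketch: play a violin sound whenever the bow prop,
// held via a tracked Vive controller, sweeps through the strings' trigger volume.
public class ViolinStrings : MonoBehaviour
{
    [SerializeField] private AudioSource bowStrokeSound; // assigned in the Inspector

    private void OnTriggerEnter(Collider other)
    {
        // The bow prop is assumed to be tagged "Bow" and to carry its own collider;
        // one of the two objects also needs a Rigidbody for trigger events to fire.
        if (other.CompareTag("Bow") && !bowStrokeSound.isPlaying)
        {
            bowStrokeSound.Play();
        }
    }
}
```

Keeping the interaction this forgiving (any contact between bow and strings produces sound) is one way to make it "feel right" without on-screen instructions.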
Concept Art & Narrative
Inspired by the balcony scene from Romeo and Juliet, our narrative consisted of a start scene, three interaction scenes, and an ending scene. The overall outline is listed below:
Guests find themselves standing in a garden facing a frog statue. The statue then comes to life and speaks to the guests
The frog instructs the guests to play the violin with the bow in their hand. The girl on the balcony reacts positively to the guests’ performance
The other suitor comes out of the bushes and starts playing the violin as well, with more skill than the guest. Guests then have to switch their performance to juggling, with guidance from the frog
Guests switch to singing when the suitor begins to juggle too
The girl on the balcony comes downstairs and shows interest in the other suitor. The story ends when the suitor approaches the guests and demonstrates his interest with a rose.
Concept Sketch
3D Modeling
After sharing my concept sketch with the team and making sure everyone had the same vision of this world, I started creating the 3D models in Maya. To make sure all the important events were noticeable, we positioned the main characters within the guests’ 100° field of view.
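As a rough illustration of how such placement could be checked during testing, the sketch below measures the horizontal angle between the headset's forward direction and a character, and warns when the character falls outside an assumed 100° comfortable view cone. All names here are assumptions for illustration, not our actual scripts.

```csharp
using UnityEngine;

// Illustrative debug helper: warns if a key character drifts outside an assumed
// 100-degree comfortable field of view around the headset's forward axis.
public class FieldOfViewCheck : MonoBehaviour
{
    [SerializeField] private Transform headset;               // the tracked HMD transform
    [SerializeField] private Transform character;             // the character being placed
    [SerializeField] private float comfortableFovDegrees = 100f;

    private void Update()
    {
        Vector3 toCharacter = character.position - headset.position;
        toCharacter.y = 0f;                                    // ignore vertical offset
        Vector3 forward = headset.forward;
        forward.y = 0f;

        float angle = Vector3.Angle(forward, toCharacter);     // 0 means dead ahead
        if (angle > comfortableFovDegrees * 0.5f)
        {
            Debug.LogWarning(character.name + " is " + angle.ToString("F0")
                + " degrees off-center, outside the comfortable view cone.");
        }
    }
}
```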
Indirect Control with Animations, Sound Effects, and Feedback
To preserve the immersion of this world and keep it as intuitive as possible, we gave the guests no instructions throughout the experience. We still wanted to direct their attention to different spots as the narrative progressed, so we added sound cues, dramatic animations, and effective feedback to make sure they didn’t miss any key moments of the story.
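To give a sense of how one of these attention beats could be sequenced, here is a hedged Unity (C#) sketch: a spatialized sound cue plays at the point of interest first, and the dramatic animation fires a moment later so it lands once guests have turned. Class, field, and trigger names are illustrative assumptions rather than the project's actual code.

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical sketch of one "attention beat": sound first, animation second.
public class AttentionBeat : MonoBehaviour
{
    [SerializeField] private AudioSource cue;         // 3D sound placed at the point of interest
    [SerializeField] private Animator character;      // e.g. the suitor or the frog
    [SerializeField] private string animationTrigger = "StartBeat";
    [SerializeField] private float delayAfterCue = 1.0f;

    public void Play()
    {
        StartCoroutine(PlayBeat());
    }

    private IEnumerator PlayBeat()
    {
        cue.Play();                                      // the audio cue draws the guest's gaze
        yield return new WaitForSeconds(delayAfterCue);  // give them a moment to turn
        character.SetTrigger(animationTrigger);          // then the animation happens in view
    }
}
```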
Yui Wei Tan, the other artist on our team, took charge of rigging all the characters. We split the animation work between the two of us, and the animations I personally created in Maya can be seen below.
Takeaways
Here are some of the lessons I learned from this development process:
Depending on the goals of the experience, characters and props in the world need testing and adjustment to reach a comfortable scale and distance for guests.
Use sound effects, dramatic animations, and responsive feedback to direct guests’ attention. For example, we found we could use the woman’s walking animation to pull guests’ attention to the right as she moved.
Guests are less likely to look at what’s behind them, so it’s important to position events and characters in front of them, in our case within a 100° field of view.
Test in VR as soon as the mechanics begin to function.