Environments Studio: Project Two

11/27: Campus Tour

For this project, I get to look 10 years into the future and imagine the Microsoft HoloLens (or similar mixed reality technology) as a common, widely available device that visitors could use on a campus tour.

Since we are redesigning/enhancing the campus tour experience, it made sense that our first step in research was to attend an existing CMU campus tour. Although our guide tried her best to be inspiring and helpful, it was snowing, the tour was freezing, and the whole thing felt painful and short.

Part of our first assignment was to write up persona profiles for people who might attend a campus tour. We looked at the four different types of personas (goal-directed, engaging, role-based, and fictional), and to me these feel largely like fictional personas, since we aren’t referring to any concrete data. It feels somewhat unnatural creating a persona of a prospective parent on a tour, since I haven’t spoken to one directly anytime recently, but thinking about the student’s feelings is a little easier because I was literally in their position less than two years ago.

These are the questions I set out to answer with my persona mock ups:

  • How are John, Jessica and Judy feeling before they start the tour?
  • What are they each hoping to learn?
  • What are their underlying needs that we’re trying to fulfill?
  • What do we want them to say about the tour experience once they’re done?
The 3 initial personas I created. Now it’s time to design an experience that can accommodate all of them.

11/29–12/2: Initial ideas/How in the world do you prototype this?

I think one of the most helpful parts of going on the campus tour was being able to talk to the guide afterward. She’s a sophomore too, and she told us a little about her experience: how weather ends up being a big factor in how talkative the group is, how she much prefers groups that are eager to ask questions along the way, and how sometimes on the weekends she’ll be leading over 100 people all around campus. She also mentioned that the tour was recently redesigned (which makes sense, since her version was much shorter than the one I initially went on): now the guides focus on a certain tenet of CMU culture/mission, such as interdisciplinary learning or connectivity to Pittsburgh, and don’t repeat anything that people could just find on the website. I think this is an important concept to keep in mind as I design a tour, since I have to decide what I want people to understand through my experience and what I want to leave for them to discover on their own.

I started storyboarding. I find myself facing a struggle similar to the museum exhibit design project: storyboarding an interaction on paper is challenging and time-consuming. It’s hard to brainstorm different interaction techniques when I’m spending too much time worrying about whether my human’s hands are to scale.

I did find that I make the most progress iterating on interactions when I’m working with another person in space. We also learned about a couple of different tools to prototype with: there’s the Vive Pro and the Oculus Rift, which I don’t yet understand how to customize, and then there’s an AR app called “Torch” that I think I might focus on. Here’s an interaction I made using the app, with a cherry you can tap.

Ok, it’s weird. But now I know how to create interactions with objects in space.

My main ideas include:

  • A supplementary tour experience that features 3D renderings of student activities on campus, like buggy, booth building, architecture projects, art installations, etc.
  • Andrew Carnegie leading you around and dropping campus history facts along the way (not my favorite!)
  • A stand-alone tour experience where you customize your tour before you walk, and then continue on from there

After our first “crit” (we just talked over initial storyboards), I realized I am only going to prototype one stop on the tour, so I need to focus on making an interaction that I can flesh out in a specific space.

I’ve been looking into the different affordances of AR and trying to figure out how to take advantage of them to enhance the tour experience. The key word is enhance, since I definitely don’t want to create an experience that distracts from the tour, or one so detached from reality that it doesn’t really reflect an important part of the CMU experience.

I do like the idea of being able to get more information on objects in your environment, kinda like this smart camera idea from Google:

https://techcrunch.com/2018/05/08/google-makes-the-camera-smarter-with-a-google-lens-update-integration-with-street-view/

This also got me thinking about what people would be interacting with specifically, and because I’ve been thinking of a lot of different tour versions that involve comments/reviews, I’m now considering using type as objects. I found an example of this using the HoloLens.

This is an editable version, and definitely more of a display type than, say, a comment or body copy, but it’s interesting to see which colors and weights work best in augmented reality.

From this point, I tried importing my own type into the Torch app to see what it’s like. I’m not even onto the interaction part yet, but I wanted to customize the type so that it looks like comments. I’m theorizing that people will be able to record quotes by speaking into a personal microphone (built into the HoloLens, accessed through some sort of object rendered near their hands) and then grab and move this type to certain places.
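To pin down that flow in my head, here’s a rough TypeScript sketch of the speak-a-quote idea, using the browser’s (Chrome-prefixed) SpeechRecognition API as a stand-in for the HoloLens microphone; placeQuoteInSpace is a hypothetical placeholder for the AR side, not anything built.

```
// Sketch only: browser speech recognition standing in for the HoloLens mic.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognizer = new SpeechRecognitionImpl();
recognizer.lang = 'en-US';
recognizer.interimResults = false; // wait for the finished utterance

recognizer.onresult = (event: any) => {
  const quote: string = event.results[0][0].transcript;
  placeQuoteInSpace(quote);
};

// Hypothetical placeholder: in the real prototype this would render grabbable
// 3D type near the user's hands, ready to be moved around the space.
function placeQuoteInSpace(text: string): void {
  console.log('new floating quote:', text);
}

recognizer.start(); // begin listening when the user opens the mic object
```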

Another video, this time using white text to see if it will show up better. I feel like I’m walking through a sculpture of comments unrelated to the scene.

12/4: In class experiments

I talked about my type idea in class. Floating text is definitely interesting, but I need to further develop my idea in order for the interaction to make sense. Right now, my prototype feels more like a sculpture you can walk around than anything else.

Some things in my brain right now:

  • Time stamps appear when you tap on a specific comment
  • The responses underneath a comment appear when you tap on it (the issue with this is that it kind of ignores the idea of time within the space and instead hits you with a timeline all at once)
  • A video plays when you tap on a comment. Content related to the comment? Taken at the time of the comment?

Regardless of what I choose, I know that I need to take photos of a place where this interaction would take place. I’m thinking of the Margaret Morrison Rotunda.

While talking with Peter, I realized something important about why we are using mixed reality: users should have the option to interact with the augmentations I make to the space, but unlike virtual reality, it won’t force them to. The comments suspended in space should be fun to interact with, but if my user were in a hurry to get to an interview and didn’t have time, they should be able to easily run past them. Maybe the ultimate goal is an interaction that can be admired as you pass by and affords an enriching experience if you stop and give it more time. I wonder if I could accomplish this.

Today I played more with the HoloLens. It’s actually a somewhat frustrating tool to use when you’re simply trying to replicate the things you do on a normal computer. Using a web browser, for example, was the worst, especially trying to type on the keyboard.

Sebastian demonstrating the pinching gesture that is used to select most things right now.

The voice command and transcription functions work well, however. This bodes well for the interaction I had in mind, since it lets users speak aloud and the software will easily pick up what they said.

Daphne went over what our sketch videos should look like, and I’m putting the notes here to remember:

  • About a minute long
  • What is the driving idea behind the interaction? Why did you make this?
  • How does a user use it? How is it similar/different to familiar technologies?
  • We are building for a client: impress them with one stop in order to get greenlighted for the entire tour. If there is something important about the onboarding process of the tour, you can include that in addition to the stop
  • Note to self: Take leaps! Push into the future a little bit more! Imagine hololens as a pair of normal glasses, or technology you can’t yet explain to yourself but you can still consider it.

12/5: If words are right up in your face, what should they say?

I went out and took some pictures of the Margaret Morrison Rotunda, so I can at least start prototyping in the actual space. I mocked up a quick idea:

I am really struggling with the color of the type. I can’t figure out which one will be most visible in the space. I think that white might be the least distracting, but I’ll see how the rest of the prototypes turn out.

In the above idea, quotes made up of the touring group’s comments would be suspended in the air. That way, visitors could interact with the space by placing their words on the campus.

But I talked to some classmates and prototyped a little bit, and I’m realizing that when you’re touring the school, you don’t want to hear what other visitors are saying. You want to hear what the students are saying. Maybe the professors too, but especially the students. I remember that’s all I wanted to know. I felt stupid after realizing this: I was so caught up in making it interactive with the HoloLens microphone that I didn’t notice that specific interaction missed the point of the tour.

So now, as opposed to just including possible answers to visitor questions, I’m imagining all of the type made up of current and past students’ comments and answers. I figure that the placement of the type can rely on the student’s relationship to the building and space: if they’re a fourth-year architecture student, their quote would hang out in the third-floor hallway of Margaret Morrison, while if they’re a second-year student minoring in IDeAte, their quote could be found in the stairwell down to the basement, or next to the laser cutters. This way, no matter what is going on on campus, visitors will be able to understand what students are thinking and saying while they use this space.
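A rough sketch of how that placement logic could be structured (the field names here are my own assumptions, not anything built): each quote carries the speaker metadata that decides where it hangs.

```
// Assumed shape for a quote and its placement; names are illustrative only.
interface StudentQuote {
  text: string;
  speaker: { name: string; year: number; major: string; minors: string[] };
  // Placement expressed as a named spot in a building, so the quote's
  // location follows the speaker's relationship to the space.
  anchor: { building: string; spot: string };
  tags: string[]; // interests, used later for the distant cue orbs
}

const example: StudentQuote = {
  text: 'I basically live in this hallway during studio weeks.',
  speaker: { name: 'A. Student', year: 4, major: 'Architecture', minors: [] },
  anchor: { building: 'Margaret Morrison', spot: '3rd floor hallway' },
  tags: ['architecture', 'studio'],
};
```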

I’m still trying to prototype in the Torch AR app, and it’s challenging to get it to do what I want. I found an article written by a developer there that was a little helpful, but not for the reasons I was looking for. I did learn that if type is white with a little black glow behind it, readability increases.

Building on that, I mapped out the kind of interaction I’m thinking of including with the type, with Sophia as my model:

If she grabs/presses on a quote, she can receive information about the person who said it.

I’m trying to figure out a more elegant way to format the information “card,” including an indication that you can take the card and store it/collect it for future use.

Also, if you were able to customize your experience around students studying the subjects you’re interested in, I imagined a way to look out across the entire campus and see little cues showing whether there are related quotes over there.

The color, or the glowing nature, of this orb is designed to draw the user into the space.

This could be an interesting way to move people to different parts of the campus and show how interdisciplinary a lot of the students are.
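Building on the StudentQuote sketch above, the cues could be as simple as a filter over quote tags; again, this is an assumed sketch rather than anything built:

```
// The full set of quotes, however it gets loaded.
declare const allQuotes: StudentQuote[];

// Pick which quotes get a glowing distance cue for this visitor.
function cuesForVisitor(quotes: StudentQuote[], interests: string[]): StudentQuote[] {
  return quotes.filter((q) => q.tags.some((tag) => interests.includes(tag)));
}

// e.g. a visitor interested in architecture only sees orbs over related quotes
const visibleCues = cuesForVisitor(allQuotes, ['architecture']);
```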

12/6: In class working

I prototyped more with Torch AR, and figured out how to make an object change into another object. But in the process of doing this, I had another realization. When a long portion of text is suspended in space, I don’t really read it. I think it’s cool, but I don’t read it. Especially if it’s layered on top of other things.

Thinking about that, I talked with Daphne and Matt about other interactions that would still enable a user to walk through a space and experience first-hand student opinions in a place where they live and work. If I just suspended key words from the audio quote, there’s the possibility that depressing or conflicting words would hang in space and give visitors the wrong first impression, or make them not want to interact (imagine the rotunda on a cold, foggy day with the words “challenging” and “tough one” decorating the area). So now I am stuck figuring out what kind of shape/figure/word form/avatar etc. is going to hold these little points of audio.

I like the idea of floating “hotspots,” or orbs. I’m unsure how much materiality I want to give them in their rendering.

A quick mockup of the hotspots. This is a little much, but the idea would be that you could select and interact with them, and they would provide audio clips that describe the space.

The actual interaction with these orbs is challenging to completely hash out. I want to take advantage of the virtual aspect of the HoloLens to allow visitors to select “hotspots” that may appear unreachable: up in windows, on roofs, outside of classrooms they aren’t permitted to enter, etc. But besides just playing audio, I need to figure out how to format the sequence of events for selecting, listening, and then understanding who said the quote. Superimposing text or a picture seems a little bit odd to me, and maybe not taking full advantage of the HoloLens’s abilities. I played with using words to illustrate the quote in After Effects, but it came out kitschy and just odd:

//awaiting upload
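One way I can think about the select, listen, attribute ordering is as a tiny state machine. A hedged sketch (the class and its names are mine, not from the prototype):

```
// States for one hotspot orb: waiting, playing its clip, showing who said it.
type OrbState = 'idle' | 'playing' | 'attribution';

class HotspotOrb {
  state: OrbState = 'idle';

  constructor(private audioUrl: string, private speakerCard: string) {}

  select(): void {
    if (this.state !== 'idle') return; // ignore re-taps mid-playback
    this.state = 'playing';
    const clip = new Audio(this.audioUrl);
    clip.onended = () => {
      // Only after the clip finishes do we reveal the speaker, instead of
      // superimposing text over the whole view up front.
      this.state = 'attribution';
      this.showSpeakerCard();
    };
    clip.play();
  }

  showSpeakerCard(): void {
    console.log('show speaker card:', this.speakerCard); // placeholder for the AR label
  }

  dismiss(): void {
    this.state = 'idle';
  }
}
```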

One thing I’ve been thinking a lot about is the translation between objects that are body-locked vs. world-locked (I learned these terms from Microsoft’s Mixed Reality thread on Medium). Cameron showed me the new Measure app on his phone, which uses AR to plot points in the room and show you measurements of existing 3D surfaces. When you want to examine a specific dimension, the label flips up from resting on a line in space to becoming a panel that fills most of the screen. It switches from world-locked to body-locked through a little animation, but also because suddenly the label is locked to your camera rather than the world. If I want my labels to be capable of this, I think in After Effects I will need to animate the person’s frame of view moving slightly, so it’s clear when the label has become fixed to their vision and not the space.
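To sanity-check that flip outside of After Effects, here’s a minimal sketch in three.js (a web stand-in I picked, not my actual toolchain): re-parenting the label from the scene to the camera does the world-locked to body-locked switch, and a per-frame lerp supplies the little animation.

```
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
scene.add(camera); // the camera must be in the scene graph for its children to render

const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// A stand-in label: world-locked at a fixed spot in the room.
const label = new THREE.Mesh(
  new THREE.PlaneGeometry(0.3, 0.1),
  new THREE.MeshBasicMaterial({ color: 0xffffff })
);
label.position.set(0, 1.5, -2);
scene.add(label);

// Where the label should rest relative to the camera once body-locked.
const bodyLockedTarget = new THREE.Vector3(0, -0.1, -0.5);

function bodyLock(): void {
  camera.attach(label); // re-parent to the camera, preserving the world transform
}

function worldLock(): void {
  scene.attach(label); // re-parent back into the world
}

function animate(): void {
  requestAnimationFrame(animate);
  // While body-locked, ease toward the camera-relative spot so the
  // switch reads as a little animation rather than a snap.
  if (label.parent === camera) {
    label.position.lerp(bodyLockedTarget, 0.15);
  }
  renderer.render(scene, camera);
}
animate();
```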

And what will I even put in these “labels” or nametags or whatever they are?! I’m again trying to figure out the right colors and format to contrast with the space. I also have to think about how people will save these quotes. I sketched an idea using gestures.

Rough sketch, but I’m theorizing that above your hand the type “Name Saved” will appear. I am assuming the HoloLens will be able to recognize the user’s hands in any position at this point in the future.

From this point, I drafted a storyboard of the interaction I want to include…

I’m curious how natural these gestures will appear when I move to filming them.

…and from there, I started making another sketch video in After Effects. I now had the opportunity to mess around with how the video and type would look when you selected a certain node, and it reminded me of something I read last week. In another part of Microsoft’s HoloLens thread, they discuss target placement, pointing out that it’s important to think carefully about UI elements and where they’re placed in the field, since you have to synthesize your elements with the outside world. By grouping elements together, it’s easier for the viewer to see and select things. So while I was considering spreading images/videos, titles, and descriptions around the person’s field of view, I think consolidating them will make everything easier to read. Here’s where I have them right now:

Grouped elements so it’s easier to recognize the video and read the type.
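In the same three.js stand-in as before, that consolidation is just one group holding the video, title, and description, so a single placement decision covers the whole cluster; a sketch:

```
import * as THREE from 'three';

// One group holds the whole info panel, so the video, title, and
// description move, appear, and get placed as a unit.
const infoPanel = new THREE.Group();

const video = new THREE.Mesh(
  new THREE.PlaneGeometry(0.48, 0.27),
  new THREE.MeshBasicMaterial({ color: 0x222222 })
);
const title = new THREE.Mesh(
  new THREE.PlaneGeometry(0.48, 0.06),
  new THREE.MeshBasicMaterial({ color: 0xffffff })
);
title.position.y = 0.2; // stacked just above the video

infoPanel.add(video, title);
infoPanel.position.set(0, 1.5, -1.2); // place the cluster once, in front of the viewer
```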

I also hit a little hiccup in my workflow: I filmed all of my hand gestures in front of a green screen, except it wasn’t a proper green screen; it was made of leftover orange construction paper in the studio. No matter how much I adjust (although I’m probably doing this wrong), Keylight thinks the shadow is part of the hand. So I have to reshoot that footage, which slows down my process.
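For intuition about why the shadow sticks around, here’s a toy chroma key (not After Effects’ actual algorithm): pixels get dropped by their color distance from the key color, and a shadow on orange paper drifts far enough from that color to be kept as part of the “hand.”

```
// Toy keyer: zero out the alpha of any pixel close enough to the key color.
// tolerance is the trade-off: too low keeps shadows, too high eats the hand.
function chromaKey(
  image: ImageData,
  key: [number, number, number],
  tolerance: number
): ImageData {
  const d = image.data;
  for (let i = 0; i < d.length; i += 4) {
    const dr = d[i] - key[0];
    const dg = d[i + 1] - key[1];
    const db = d[i + 2] - key[2];
    if (Math.sqrt(dr * dr + dg * dg + db * db) < tolerance) {
      d[i + 3] = 0; // close to the key color: make transparent
    }
  }
  return image;
}
```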

The rest of my messy brainstorm:

12/7–11: Final Process

I reshot the hands, with help from Danny, and figured out how to somewhat accurately key out the background.

Much better!

Also, I originally had plans to make a video demonstrating what campus would look like with motion added to it, but in the time I had, I realized I shouldn’t prioritize learning motion tracking over finishing the rest of my slideshow. Instead, I focused on mocking up stills of what campus would look like through the HoloLens.

Shots of campus, with interactions.

The yellow blobs in the distance are meant to show where further quotes customized to your interests can be found. This might be redundant, but I made this slide as well to analyze these markers a little more.

Another issue I struggled with was getting a good audio recording for the interaction in my video. I tried a couple of different voices, but it’s challenging to get a genuine recording rather than something that sounds read from a script. If this project were actually to be made, I would build the space from quotes collected from people speaking genuinely about it. For now, Sophia’s voice (she begrudgingly agreed to help) worked for the sketch video.

