The New Driving Experience

Final Project
DMD 3998 — Emerging Topics in UX
Spring 2021
Predicting the Future


“Hyundai Motor reveals world’s first smart fingerprint technology to vehicles” — JustAuto

In 2021, the driving experience can vary based on the model of your car, but today’s cars usually have a touch screen monitor as well as a Bluetooth connection for your mobile device.

Car with touch screen monitor

Some cars today have already dabbled in emerging technologies, like the BMW 530i. Its gesture control “system works with a camera that is mounted in the headliner above the infotainment system. The camera watches for a limited set of movements the driver can make in that area. Each movement has an assigned function.” Some of its controls include rotating a finger to increase or decrease the volume and flicking a finger forward to change the radio station.

BMW Gesture Control

In the future, I’d want to see cars utilize more emerging technologies such as gesture control, biometrics, and a more seamless Voice UI command system. In this article I will examine these technologies in a heuristic evaluation and an accessibility and inclusivity review, as well as redesign the experience in a mockup and prototype.

Heuristic Evaluation

In this section, I will evaluate my new experience as it compares to a typical driving experience with only touch UI.

#1: Visibility of system status

In terms of visibility of system status, I think it’s very important to keep the user informed about what is going on, especially with a gesture control UI, where users will not be familiar with the basic interface and will have to learn how to use it. I do think the sliding bar graphic on the monitor helps keep the user informed about the status of their gesture.

#2: Match between system and the real world

Using the fingerprint logo, which people are already familiar with, helps the user understand that the wheel is scanning their fingerprint. Beyond that, I think the gesture-controlled bars and circles are useful because they are similar to a touch interface’s sliding bar; watching the bar move with your gesture keeps the interaction feeling familiar.

Fingerprint Logo

#3: User control and freedom

A gesture control interface gives the user freedom by being simple and intuitive. A user can ‘exit’ a gesture simply by not completing it, producing a motion the system does not recognize and therefore ignores. I also think that, accompanied by Voice UI, the user gets even more freedom: short spoken commands can do essentially anything the user would want or need.

#4: Consistency and standards

In terms of consistency and standards, designing a gesture control system is slightly different because there isn’t much of a standard yet. I think gesture control should be designed as an extension of touch UI, at least until gesture control becomes more widely used. This means having the ability to use gesture control or touch control in one interface.

#5: Error prevention

Users will want a product that is less likely to produce errors. For gesture control, this means solidifying the gesture movements to make sure none are too similar to, or could be confused with, another movement. For Voice UI, it means giving the user time to speak their command and accepting some variance in utterances.
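One way to catch overly similar gestures before they ship is to compare each pair of gesture templates and flag any pair whose motion paths are too close together. This is only a minimal sketch: the template coordinates, gesture names, and distance threshold below are all hypothetical, invented for illustration.

```python
import math

# Hypothetical gesture templates: each is a list of (x, y) points
# sampled along the finger's motion path, normalized to a 0..1 box
# and resampled to the same number of points.
TEMPLATES = {
    "swipe_right": [(0.0, 0.5), (0.25, 0.5), (0.5, 0.5), (0.75, 0.5), (1.0, 0.5)],
    "swipe_left":  [(1.0, 0.5), (0.75, 0.5), (0.5, 0.5), (0.25, 0.5), (0.0, 0.5)],
    "flick_up":    [(0.5, 1.0), (0.5, 0.75), (0.5, 0.5), (0.5, 0.25), (0.5, 0.0)],
}

def path_distance(a, b):
    """Average point-to-point distance between two equal-length paths."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def similar_pairs(templates, threshold=0.3):
    """Return gesture pairs whose paths are close enough to be confused."""
    names = sorted(templates)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if path_distance(templates[a], templates[b]) < threshold
    ]
```

Run at design time, an empty result from `similar_pairs` suggests the gesture set is distinct enough; any flagged pair is a candidate for redesign before it ever confuses a driver.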

#6: Recognition rather than recall

To reduce the cognitive load on the user, this experience uses design elements such as the fingerprint icon to make sure the user understands the procedure of scanning their fingerprint on the wheel. Using phrases the user already recognizes from previous experiences, in both the gesture control experience and the Voice UI, is also very important: it lets the user recognize information rather than having to remember it.

#7: Flexibility and efficiency of use

Creating a multimodal experience, where the user can start with the touch interface but then also turn on the gesture or voice interface, adds to the types of interactions that can occur and the many different ways a user can complete a single interaction.

#8: Aesthetic and minimalist design

Newer car models have a modern and sleek dashboard design. Apple CarPlay, for instance, is a good example of a modern design, but some other car manufacturers have lackluster monitor displays that could be elevated. I think just focusing on the logos and keeping to one color scheme could change their look dramatically.

Logos could be more modernized in the first picture // the second picture is of Apple CarPlay, which has a nice aesthetic design

#9: Help users recognize, diagnose, and recover from errors

If the gesture motion or the biometric scan fails, the user must be able to recognize the failure and redo the interaction almost seamlessly. For the biometric fingerprint scan, a failed scan presents the user with an error message saying that the scan failed and asking them to please try again.
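The recovery flow described here can be sketched as a simple retry loop. The scanner function, attempt limit, and message wording below are hypothetical stand-ins, not any real car API:

```python
def scan_with_retries(scan_fn, max_attempts=3):
    """Run a fingerprint scan, showing an error and retrying on failure.

    scan_fn is a hypothetical driver call that returns a user ID on
    success or None on a failed read.
    """
    for attempt in range(1, max_attempts + 1):
        user_id = scan_fn()
        if user_id is not None:
            return {"status": "ok", "user": user_id}
        # Heuristic #9: tell the user what failed and how to recover.
        print(f"Scan failed (attempt {attempt}/{max_attempts}). Please try again.")
    return {"status": "error", "message": "Fingerprint not recognized."}
```

The point of the sketch is the pairing of a plain-language error with an immediate chance to retry, so the failed interaction costs the user almost nothing.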

#10: Help and documentation

In terms of help and additional documentation, all cars come with an owner’s manual containing detailed instructions about anything you’d ever need to know about your car’s features. Specific to the new experience, the gesture control tutorial is a more interactive form of help and documentation.

Accessibility and Inclusivity Concerns


One accessibility concern is users with hand tremors performing gesture control. Users who have hand tremors or mobility issues would have a hard time completing specific gesture motions. A way to address this issue would be to create an alternate setting that lets people use gesture control with easier motions to accomplish the same interactions. It’s also important to note that these users can turn off gesture control completely and use an interface like Voice UI that better suits their needs. Users who have speech disabilities, in turn, could opt out of Voice UI and use gesture control.


Gesture control, Voice UI, and biometrics must be made for everyone. Rigorous testing is required to make sure that every piece of technology works across all skin colors, skin textures, voice tones and accents, and body shapes. Fingerprint scanning biometrics in particular must be tested extensively so that the interaction succeeds for a variety of people with differing dexterity, skin color, and other characteristics.

The Redesign

Wheel Fingerprint Scanner

As you open your car and sit in the seat, you place your hands on the wheel, and the wheel scans your fingerprints as it turns on the car. This fingerprint scan captures biometric info about the user, like blood pressure, drowsiness, and blood alcohol content, and can indicate whether the user has any underlying health problems. This ensures that when you are ready to drive, there will be no problems and the user will be safe.
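A readiness check like this could compare each reading against a safety threshold before the drive begins. Everything below is an illustrative assumption: the field names, scales, and limits are invented for the sketch (0.08 BAC only mirrors a common legal limit).

```python
# Hypothetical thresholds for a pre-drive readiness check.
MAX_BAC = 0.08          # blood alcohol content (mirrors a common legal limit)
MAX_DROWSINESS = 0.7    # 0.0 alert .. 1.0 asleep, illustrative scale
MAX_SYSTOLIC_BP = 180   # mmHg, illustrative warning level

def readiness_warnings(reading):
    """Return a list of warnings for a biometric reading dict."""
    warnings = []
    if reading["bac"] > MAX_BAC:
        warnings.append("Blood alcohol content is above the legal limit.")
    if reading["drowsiness"] > MAX_DROWSINESS:
        warnings.append("You appear drowsy. Please rest before driving.")
    if reading["systolic_bp"] > MAX_SYSTOLIC_BP:
        warnings.append("Blood pressure is very high. Consider not driving.")
    return warnings

def ok_to_drive(reading):
    """The car is ready to drive only when no warnings are raised."""
    return not readiness_warnings(reading)
```

Returning the full list of warnings, rather than a bare pass/fail, lets the monitor explain exactly which reading triggered the hold, in keeping with the visibility-of-system-status heuristic above.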

Why is the use of biometrics useful in the new driving experience?

The use of biometrics in the new driving experience allows users to understand their health and the implications of driving when they are not in good health. If a user has an illness or is tired, biometrics can provide a brief description of their health and warn them not to drive at that moment. Although it is invasive, and some people wouldn’t be willing to let a device process their health information, the use of biometrics in this way could be extremely helpful for people who have illnesses.

The fingerprint scanner is more frictionless than a traditional key ignition because it doesn’t necessarily have to eliminate the key right off the bat, yet it can still provide an easier and more secure driving experience.

Gesture Control

Although I originally wanted this entire interface to be gesture controlled, I decided to make the interface touch controlled with the ability to turn gestures on and off. This way, the user gets the best multimodal experience without being too confused when first turning on the car.

The idea behind my gesture system was to make gesture control while driving a safer experience. To perform gestures while driving, I wanted the user to keep both hands on the wheel but extend their index finger to perform gestures. While the car is parked, they would be able to perform gestures without their hands on the wheel.

For the interactions users can perform with gesture control, I created a bar system that allows the user to swipe left to decline or answer “no,” or swipe right to accept or answer “yes.” The highlighted white part in the middle of the bar indicates the progress of the gesture; to complete the gesture, the white part has to move either left or right with the user’s finger or hand.
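The bar interaction above can be sketched as tracking the highlighted segment’s position as the finger moves, and resolving to “yes” or “no” once it crosses a threshold. The coordinate scale and threshold value here are hypothetical choices for illustration:

```python
class SwipeBar:
    """Gesture bar: the white segment starts centered at 0.0 and must
    move past +/- threshold to resolve to "yes" (right) or "no" (left)."""

    def __init__(self, threshold=0.6):
        self.position = 0.0        # -1.0 (far left) .. +1.0 (far right)
        self.threshold = threshold

    def move(self, dx):
        """Apply a finger movement delta; return the answer or None."""
        self.position = max(-1.0, min(1.0, self.position + dx))
        if self.position >= self.threshold:
            return "yes"
        if self.position <= -self.threshold:
            return "no"
        return None  # gesture not yet complete; bar stays partially filled
```

Requiring the bar to travel past a threshold, rather than firing on the first twitch, is what lets a user abandon a half-made gesture without triggering anything.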

Another large part of the gesture control experience is the gesture control tutorial, which teaches users to swipe from left to right and rotate their finger using gesture control. In this tutorial I emphasized arrows and icons to make sure a user would understand how to complete the motions.

Gesture Control Tutorial

Why is this method better than traditional touch interfaces?

I think gesture control would be extremely effective while driving because you do not need to look at the monitor to perform a gesture, the way you would to touch a monitor. In a traditional driving experience, where you might tap a monitor to change a song or a setting, the user has to look at the monitor to see where they are tapping. With this new gesture control experience, both hands can stay on the wheel while your right index finger performs the gesture.

Voice UI

The Voice UI system is another feature that the user is able to toggle on/off.

The Voice UI system uses integrated commands that could be connected to a phone via Bluetooth, but you don’t need to touch a phone or monitor to perform these actions. The connection to a phone would also be optional.

The command system would also be fairly simple and would not require any touch or button press to start a command. The user simply issues a command, and if the system recognizes it from the list of command utterances, it responds appropriately.

For example, if the user says “Increase Volume,” the Voice UI system responds by increasing the volume.
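The matching step described above could be a lookup from known utterance variants to actions, which also covers the error-prevention goal of allowing some variance in phrasing. The command list and the normalization rules below are hypothetical, chosen only to show the shape of the idea:

```python
# Hypothetical command list: each action accepts several utterance variants.
COMMANDS = {
    "volume_up":   {"increase volume", "turn it up", "volume up"},
    "volume_down": {"decrease volume", "turn it down", "volume down"},
    "next_track":  {"next song", "skip", "next track"},
}

def match_command(utterance):
    """Return the action name for an utterance, or None if unrecognized."""
    normalized = utterance.lower().strip().rstrip(".!?")
    for action, variants in COMMANDS.items():
        if normalized in variants:
            return action
    return None
```

So “Increase Volume” and “turn it up” both land on the same `volume_up` action, while an unknown phrase returns `None` and the system can ask the user to try again instead of guessing.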

Why is this method better than traditional Voice interfaces?

I think it could be better simply as an extension, another way to interact with the technology, because as it stands it isn’t much different from traditional voice interfaces and wouldn’t function that differently from them.


Designing for emerging technologies like gesture control and biometrics is very difficult! I had trouble thinking about how gesture control would realistically work for many interactions because touch interfaces are so ingrained in my design process; switching to a different point of view was very interesting. Gesture controls are also very hard to prototype and make understandable, and I think even animating the control to see how it would feel in real time could make a difference in the prototype.

Here is my full prototype on the new driving experience!