DESIGN RESEARCH

FAND: A foot interface device for four-limb interactions

We use our hands for almost every interaction we perform. How could we design interactions for the feet and use them alongside our hands?
SELECTED PUBLICATION AT SeGAH 2018, VIENNA

A video demonstrating the concept.

FAND (Foot Associated Neural Device) is a haptic interface device that allows users to operate a computer (or certain applications) using all four limbs. The device employs user-trained motion gestures that are bound to actions in a compatible application; a flick, for example, could trigger a "next" interaction.

One may be surprised to learn that the foot is only about 10% less accurate than the hand at maintaining position. The interaction model developed in this project divides a task into a primary and a secondary action: the primary action, which demands priority and accuracy, is performed by the hand, while secondary actions are handled by the foot. Devices like these matter because most of our interactions are currently performed by the hands, which has increased the physiological load on them.

Possible applications include creative tools, Walking in Place (WIP) interactions in VR, and gaming. I evaluated the device by comparing it with a novel hand interaction device (a pressure pad). The study found that the foot device offers comparable learning and can be considered a viable additional modality. The design process ran for about six months.

Context

When was the last time you complained about pain in your hands? Was it because you had been working hard, doing everything with your hands? Have you ever asked yourself whether you could work with your feet as well? I asked myself this question when I observed how much we use our hands in the modern work environment and how little we use our feet.

The majority of input interactions with computers happen using only the hands, which has increased the physiological load on the hand. The recent rise in hand-related orthopaedic disorders is often associated with excessive computer use, and most extended computer work involves repetitive interactions. Further, although our feet lost graspability during evolution, they are only about 10% less accurate than the hands at maintaining position. Involving the feet could therefore reduce hand load, sharing the total load and improving work efficiency.

Goals

The goals of the project were:
  1. Design and prototype a non-personal, haptic foot interaction device
  2. Evaluate its use in spatial desktop tasks that do not demand high accuracy
  3. Evaluate foot-based gestures using the same device
  4. Evaluate the device for UI manipulation
  5. Design and implement haptic UI elements
  6. Conduct an evaluation to deeply understand habituation and the learning effects of such devices

Research

The research process was divided into several stages, explained briefly here:

  1. Literature Review: I studied the state of the art in foot interactions: its beginnings, models for understanding tangible interaction, key terminology, and the major researchers. I then built a personal map of the field, which I used to position my work after identifying gaps in study and context.
  2. User Studies to understand peripheral usage: I recruited students at my university who worked extensively with their hands, especially with the mouse: students in geo-information systems, nanotechnology (chip design), and design. I observed how they used the computer in their work and which peripherals they used (mouse, trackpad, keyboard, drawing tablets). I noted what they did on each peripheral (scrolling, zooming, switching screens, and so on) and tracked how often they moved their gaze from the screen to search for a peripheral, since this can lead to loss of attention and reduced efficiency. I found that the main interactions were:
    • Scrolling
    • Discrete Mouse clicks
    • Text Entry (unstructured)
    • Menu Navigation
    • Drag and Drop gestures
    I would hence focus on optimizing these interactions via a four-limbed interaction model.
  3. Non-participant observation of repetitive tasks: This was one of the most interesting parts of the research. I received permission to observe ticket sellers at busy Mumbai local train stations, who use their hands for about 8 hours a day with just a 30-minute break. They said this repetitive work affects their hand health, and they often complained of hand pain. I sat in a corner and worked on understanding how they used the interface of the dated ticket vending system. These could be my extreme users.

The goal of the initial research was to test the waters for a foot interaction device. In the pilot, we found that 4 of 5 participants were enthusiastic about trying a foot-based device. Most (4/5) participants used multiple devices when interacting with their software of choice. Two participants complained of neck and back pain. However, participants were not sure whether foot-based controls would be helpful: they felt they needed the freedom to stretch their legs, which a foot-based device (intrinsic or mediated) might not allow. One user felt it might add effort, since the legs would now also be engaged.

The initial results may not have been very positive, but at least there was hope, and sometimes a little hope is all you need to explore possible scenarios. I chose the approach of using prototypes to explore design situations, so I started by ideating on possible foot interactions I could build based on my initial research.

The failed ideation stage

These ideas were based on the initial research. Looking back from a learning point of view, I feel I learned the most from these failures. I could have omitted this stage, but these failures helped me understand the scope better.
I used morphological charts to ideate on possible ways to build a mediated foot device. "Mediated" means operated by the user, like a pedal; such devices are easy to adapt to and not very radical. In all of these ideas, the user places a foot on the device to operate it. Some of them were actually very cool.

Photo of morphological chart Morphological chart of possible ways to think about mediated foot devices
Based on these morphologies, I came up with 3 concepts in a group brainstorming session:
  • Doldrums
  • Footscape
  • Footmorph
They are explained below in brief with images.
Photo of Doldrums. Sketched by Vivek. Doldrums are a set of bowl-like devices that go under the user's feet. They allow the user to perform actions through natural leg movements, assisted by the concept's self-balancing design.
Photo of Footscape. Sketched by Vivek. Footscape lets the user rest a foot on a comfortable surface and perform force-based actions to zoom or pan. It allows a location-independent pedal action anywhere on the surface, which is suspended on four weight sensors that sense the interactions.
Photo of Footmorph. Sketched by Vivek. Footmorph is a mat that allows the user to perform complex interactions using the force and motion of the foot. Its shape is derived from the absolute positions of the foot when rotated around the heel and toe. It has pressure-sensitive areas at the toe and heel, a pressure joystick for 2D tasks, and switches at the extremities for less frequent tasks.

I used these concepts to get user feedback and learn which parts were worth prototyping. Users liked the novelty of Doldrums, the ergonomics of Footmorph, and the interaction possibilities of Footscape. I used a Weighted Decision Matrix (WDM) to decide which of these concepts was worth prototyping and testing.

Concept Evaluation

A WDM starts with a list of judgement criteria; in my case, these were sourced from the research papers surveyed in the initial part of the project. The criteria are then arranged in a pairwise comparison chart, which captures their relative importance and yields the weights used in the WDM. It was surprisingly hard to get this right at first.

With the weights at hand, we put them into a WDM to get a score for each concept. This is shown in the images below.

Photo of mockups A pairwise comparison chart
Photo of mockups A weighted decision matrix comparing the concepts from weights derived from the pairwise chart.
From the above analysis, it was clear that Footscape was "slightly" more useful to prototype than the others. So, I followed the math and worked on prototyping it.
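To make the method concrete, here is a minimal sketch of the arithmetic behind the pairwise chart and the WDM. The criteria names and scores are hypothetical stand-ins; the real values are in the charts above.

```python
# Pairwise comparison + weighted decision matrix, in miniature.
# Criteria and scores are hypothetical; the real ones came from the
# literature survey and the charts shown above.

criteria = ["comfort", "learnability", "sensing", "build_effort"]

# pairwise[i][j] = 1 if criterion i was judged more important than j.
# Normalized row sums become the weights used in the WDM.
pairwise = [
    [0, 1, 1, 1],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
row_sums = [sum(row) for row in pairwise]
weights = [s / sum(row_sums) for s in row_sums]

# Each concept is scored 1-5 against each criterion.
concepts = {
    "Doldrums":  [3, 2, 3, 4],
    "Footscape": [4, 4, 3, 3],
    "Footmorph": [4, 3, 2, 2],
}

for name, scores in concepts.items():
    total = sum(w * s for w, s in zip(weights, scores))
    print(f"{name}: {total:.2f}")  # Footscape wins, but only slightly
```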

Prototyping and Failure

I went ahead and built a prototype of Footscape using an Arduino and four pressure sensors that tracked the foot through the relative pressure on each sensor. The algorithm estimates a tilt from the pressure distribution and uses it to drive the UI. In my case, I tested it with Google Maps.
Photo of mockups The second-coolest foot interaction device: Footscape in action, controlling a Google Maps view.
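For the curious, the tilt estimate amounted to comparing pressure readings across the surface. Below is a minimal host-side sketch under my own assumptions about the layout (one sensor per corner of the suspended surface); the actual scaling and thresholds lived in the Arduino code and are not reproduced here.

```python
# Footscape tilt, in miniature: the four corner pressure readings are
# reduced to a left/right and front/back imbalance that pans the map.
# The corner layout and the dead-zone value are assumptions.

DEADZONE = 0.08  # ignore small imbalances; nobody presses perfectly evenly

def tilt(front_left, front_right, back_left, back_right):
    total = front_left + front_right + back_left + back_right
    if total == 0:  # no foot on the surface
        return 0.0, 0.0
    x = ((front_right + back_right) - (front_left + back_left)) / total
    y = ((front_left + front_right) - (back_left + back_right)) / total
    x = 0.0 if abs(x) < DEADZONE else x
    y = 0.0 if abs(y) < DEADZONE else y
    return x, y  # each in [-1, 1], mapped to pan speed

# More weight on the front-right corner -> pan right and up.
print(tilt(180, 310, 150, 200))
```

The calibration problem described next shows up directly in this sketch: no single dead zone or scaling fits every user.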
The forces users exerted varied, and the device required frequent recalibration: each user applied a different force on the mat at different times for the same action, which made sensing the pressure changes difficult.
The major problem was that force is hard to control with the foot. The force exerted may depend on factors like body weight, muscle strength, and the perceived feedback from the device, and continuous application of force is tiring. Users were more comfortable when allowed to move their feet as they wished and to trigger an action only when required, from a comfortable pose.

In short, it was a failure on my part to build a modality that sensed force, which is difficult to control, rather than motion and position, which everyone already uses (think of the mouse). So I had to restart and look for something that was not mediated and that used motion and position. Thankfully, this taught me a lot about design evaluation.

Restart and FAND

We went back to the drawing board to rethink the design and the device. A mediated approach fails in this context because users need the freedom not to stay fixed to a particular active area. This differs from situations like driving, where users cannot relax; here, a constrained interaction may add to user stress. I also observed that several users tend to shake their legs while working, and the design needs to account for this behaviour. This kind of situation is typical of a "solution-conjecture" design process. [Ref: Designerly Ways of Knowing, Nigel Cross]

The most feasible way to design such an interaction is to use gestures, and then prototype devices around the designed gesture set. But how does one design gestures?

Guessability Study

I conducted a Wobbrock-style guessability study to design the gestures. The study shows users interaction cues on the screen, and the users then perform a movement as if they were causing that effect. I ran it once using both feet and once using only the dominant foot. We then calculate how strongly users converge on a single gesture using agreement scores (sketched below the image). The cues covered scrolling, next/previous, command invocation, and zooming: the interactions identified in the first part of the research.
Photo of mockups A zoom-in/out interaction looks like this
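For reference, the agreement score for a cue is the sum of squared proportions of identical proposals: it reaches 1.0 when everyone proposes the same gesture. A minimal sketch, with hypothetical proposal data rather than my actual study data:

```python
# Wobbrock-style agreement score for one referent (on-screen cue).
from collections import Counter

def agreement(proposals):
    counts = Counter(proposals)
    n = len(proposals)
    return sum((c / n) ** 2 for c in counts.values())

# Eight users proposing gestures for a "zoom in" cue (hypothetical):
zoom_in = ["toe_spread", "toe_spread", "slide_fwd", "toe_spread",
           "slide_fwd", "toe_spread", "toe_spread", "heel_rotate"]
print(agreement(zoom_in))  # ~0.47; 1.0 would mean perfect agreement
```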
Based on my eight users, the gesture set was as follows. One can spot obvious similarities between these gestures and the equivalent hand interactions; many users agreed that they performed the most natural movements they could think of.
Photo of mockups The gestures to be designed into a physical device.
I followed a process similar to the one discussed earlier and came up with three dominant concepts: one using surface acoustic waves to sense motion, a machine-learning concept using a camera to sense foot geometry, and a device that could be clipped to the foot. These were evaluated with a WDM, this time also considering the time each would take to build. The clip-on device seemed the most favourable to build and test. It was called FAND.
With the gesture set available, the next step was to "bake" it into an ergonomic device. The device would calibrate to a user and then recognize the learned gestures.

But first, let us see how Hiren would use FAND:
Photo of mockups Hiren and his context
Photo of mockups He discovers FAND.
Photo of mockups He uses it.

Prototyping

A device was yet again built using an Arduino, this time with a motion sensor. The device senses the user's foot movements and learns them through simple, nursery-grade classification; the learned examples help it recognize the motions and, in turn, perform interactions more reliably (a minimal sketch of the idea follows the photo below). It was difficult to get feedback on the bare Arduino prototype, as people would always talk about the wires rather than the interaction, so I made form prototypes to help them visualize the device.
Photo of proto Wire-ball prototype with an Arduino, proximity sensors, a haptic motor, and a 6-axis motion sensor.
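The classifier really was nursery-grade. The sketch below illustrates the idea under my own simplifications (the feature choice and the nearest-centroid rule are assumptions, not the exact FAND code): reduce each recorded motion to a small feature vector, then map new motions to the closest learned gesture.

```python
# Nursery-grade gesture learning: nearest centroid over simple
# statistics of the 6-axis motion readings. Illustrative only.
import numpy as np

def features(samples):
    """samples: (n, 6) array of accelerometer + gyro readings."""
    return np.concatenate([samples.mean(axis=0), samples.std(axis=0)])

class GestureModel:
    def __init__(self):
        self.centroids = {}  # gesture name -> mean feature vector

    def train(self, name, recordings):
        feats = np.array([features(r) for r in recordings])
        self.centroids[name] = feats.mean(axis=0)

    def predict(self, samples):
        f = features(samples)
        return min(self.centroids,
                   key=lambda g: np.linalg.norm(f - self.centroids[g]))

# Calibration: the user repeats each gesture a few times...
model = GestureModel()
model.train("flick", [np.random.randn(50, 6) + 1 for _ in range(5)])
model.train("tap",   [np.random.randn(50, 6) - 1 for _ in range(5)])
# ...and new motions snap to the closest learned gesture.
print(model.predict(np.random.randn(50, 6) + 1))  # likely "flick"
```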
Photo of mockups Clip it to use it
Photo of mockups A user testing it out on his foot.
Photo of mockups Possible Applications : Foot Presenter
Photo of mockups Possible Applications : Pan Zoom with graphics/GIS/CAD softwares

Evaluation

With the device working consistently, I designed an experiment around these hypotheses:

H0: There is no effect of foot modality on task performance.
H1: A combined hand + foot modality positively affects task performance compared to hand-only input, with comparable learning effects.

Photo of mockups Experiment Design
Photo of mockups Tasks
The experiment had two groups of users (6 each) perform the same set of tasks, Latin-square balanced (sketched below), that required using hand and feet together. One group was trained on a novel hand-based input device (the ThinkPad pressure pad, i.e., the TrackPoint) and the other on FAND. The tasks were designed and programmed in Processing. The evaluation process is rather dense and a specialization of my school; it includes protocol design and task-balancing methods, which I have skipped in this portfolio post. You can read about them in my report (page 49 onwards).
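For illustration, this is roughly how a balanced Latin square assigns task orders: every task appears in every position, and each task precedes every other equally often across participants. The task names are placeholders, not the actual Processing tasks.

```python
# Balanced Latin square for task ordering (standard construction;
# task names are placeholders).

def balanced_latin_square(tasks, participant):
    n = len(tasks)
    order, j, h = [], 0, 0
    for i in range(n):
        if i < 2 or i % 2 != 0:
            val = j
            j += 1
        else:
            val = n - h - 1
            h += 1
        order.append(tasks[(val + participant) % n])
    if n % 2 != 0 and participant % 2 != 0:
        order.reverse()  # odd task counts need reversed orders too
    return order

tasks = ["scroll", "zoom", "drag", "format_alike"]
for p in range(6):  # six participants per group
    print(p, balanced_latin_square(tasks, p))
```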

I maintained evaluation sheets for each user across seven days of testing; the format is shown below. I used the NASA TLX to measure task load, along with the other parameters shown.
Photo of mockups Data sheet
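As a reminder of the TLX arithmetic: six subscale ratings are weighted by how often each dimension is picked across 15 pairwise comparisons. The ratings and weights below are hypothetical, and note that my sheets report loads on a coarser scale than the raw 0-100 TLX.

```python
# Standard NASA TLX weighted workload. All numbers are hypothetical.

DIMENSIONS = ["mental", "physical", "temporal",
              "performance", "effort", "frustration"]

def tlx(ratings, weights):
    assert sum(weights.values()) == 15  # weights come from 15 pairwise picks
    return sum(ratings[d] * weights[d] for d in DIMENSIONS) / 15

ratings = {"mental": 40, "physical": 55, "temporal": 30,
           "performance": 25, "effort": 60, "frustration": 20}
weights = {"mental": 3, "physical": 4, "temporal": 2,
           "performance": 1, "effort": 4, "frustration": 1}
print(tlx(ratings, weights))  # weighted workload on a 0-100 scale
```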

Results

A repeated-measures analysis of variance on task completion times showed significant learnability for FAND, F(4, 20) = 6.04, p < .05, as well as for the TrackPoint, F(4, 20) = 5.63, p < .05.
Hence, both devices showed similar learnability for the task "Format alike", which required users to use hand and foot together. The time taken to perform the tasks was lower when using FAND with the keyboard than when using the TrackPoint and keyboard combination.
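For readers who want to reproduce this kind of analysis, a sketch with generated data (not my actual measurements) is below; with 6 subjects and 5 sessions, the within-subjects degrees of freedom come out to 4 and (5 - 1) x (6 - 1) = 20, matching the F(4, 20) values above.

```python
# Repeated-measures ANOVA on task completion time across sessions.
# The data here is randomly generated; the real data is in the report.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = [{"subject": s, "session": k,
         "time": 60 - 5 * k + rng.normal(0, 3)}  # times fall over sessions
        for s in range(6) for k in range(5)]
df = pd.DataFrame(rows)

print(AnovaRM(df, depvar="time", subject="subject",
              within=["session"]).fit())
```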
Photo of mockups TLX Score for the last task
The qualitative TLX scores suggest that task loads were higher for the TrackPoint than for FAND. The FAND group experienced an increase in task load in the second session, after which it decreased consistently across the remaining sessions; this spike may be due to the initial novelty effect wearing off. The average task load in the last session was 3 (SD = 0.9) for the FAND group compared to 5.3 (SD = 2.4) for the TrackPoint group, as shown.
Photo of mockups TLX Scores for each Trackpoint user per session
Photo of mockups TLX Scores for each FAND user per session

Conclusion and Further information

We proposed the design and evaluation of a gesture-based foot interface device, and identified foot gestures suitable for a sitting pose by performing a guessability study.
We found that a foot-based modality is a promising means of performing secondary interactions or sub-tasks. Users were able to perform some tasks faster with the hand-plus-foot modality than with the hand-only modality. We also observed that the perceived task load was lower when using the combined hand and foot modality than the hand-only modality.
Possibilities of using such a device in mobile, Virtual Reality (VR), Walking in Place (WIP), and other hand-busy domains could also be explored. We hope to work on designing UI elements and on-screen interactions that support combined hand and foot modalities.

A paper on this project was accepted and presented at SeGAH 2018 in Vienna, Austria. It is published on IEEE Xplore. If this has piqued your interest, you can read the longer report here.