Formation Testing Advisor

A dashboard that offers geographic insights for oil field services.

Overview

Time

Jun 2019 - Aug 2019

Tools

Sketch, Axure, Zeplin

Roles

Product Design Intern

Responsibilities

Conducted user research and participated in contextual inquiry
Refined the current workflow based on research insights
Delivered and iterated on user interfaces

Operators perform formation tests to identify the most efficient techniques for maximizing production from oil wells and fields. During the tests, operators deploy distinct examination instruments based on the particular well conditions. They investigate responses to changing parameters to inform better short-term and long-term decisions regarding their assets and investments.

The task of selecting an inlet to optimize wireline formation testing is highly complex, involving numerous underground parameters that vary across well sites. For this reason, merely sharing a limited selection of testing data with clients would not be enough to effectively promote our testing services and analytical capabilities.

Rather than displaying product data, our team needed a way to showcase flexible pre-testing services and trial plans to clients at the outset, allowing them to compare and select the options best suited to their needs.

Before this project, engineers had implemented algorithms for collecting and analyzing data from different types of underground formations, but clients and salespeople could not use them directly for inquiries. So I worked on designing the Testing Advisor, which suggests safe, precise, and user-friendly testing tools based on the corresponding well conditions.
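To give a concrete picture of the kind of logic the advisor surfaces, here is a minimal sketch in TypeScript. The property names, thresholds, and ranking rule are illustrative assumptions, not the engineers' actual algorithm:

```typescript
// Hypothetical sketch, not the actual Testing Advisor algorithm:
// filter out tools that are unsafe for the well, then rank the rest.

interface WellConditions {
  pressureMPa: number;     // formation pressure
  temperatureC: number;    // downhole temperature
  permeabilityMD: number;  // permeability in millidarcies
}

interface TestingTool {
  name: string;
  maxPressureMPa: number;   // pressure rating
  maxTemperatureC: number;  // temperature rating
  minPermeabilityMD: number;
}

function suggestTools(well: WellConditions, tools: TestingTool[]): TestingTool[] {
  return tools
    // Safety first: drop tools whose ratings the well would exceed.
    .filter(
      (t) =>
        t.maxPressureMPa >= well.pressureMPa &&
        t.maxTemperatureC >= well.temperatureC &&
        t.minPermeabilityMD <= well.permeabilityMD
    )
    // Then prefer tools that leave the largest pressure margin.
    .sort(
      (a, b) =>
        (b.maxPressureMPa - well.pressureMPa) -
        (a.maxPressureMPa - well.pressureMPa)
    );
}
```

The dashboard layers explanations and comparisons on top of rankings like this; the sketch only shows the filter-then-rank core.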

The problem

Too much complex information is presented during the testing process, and users lack consistent, sortable data records

The current formation testing process presents too many channels of dense information without a clear hierarchy, so users struggle to identify the essential factors that affect their testing results. In addition, the dashboard doesn't let users compare and manage data from past testing results.

Design objectives

How might we prioritize presenting the most important data for essential use scenarios and add more flexibility for users to retrieve past data records?

The impact

🙅 Avoided Unnecessary Duplicate Interactions

👬 Integrated Products on DELFI Platform

⏳ Controlled Development Time

The process

Let's see how I got here!

Research

Contextual Inquiry
Goal 1: Understand user goals and the working process
Current Situation
What matters in the process?
Journey map
Goal 2: Define challenges and explore opportunities

Based on the journey map of field engineers' current workflow, I learned:

Journey Map of Current Testing Flow

Obstacles

For Stakeholders

Operation options cannot be managed, and formation properties are not editable

The massive display of data and information makes it easy for users to get lost

For Workflow

Formation testing involves too many parameters, which can lead to countless possible solutions

Information Architecture
Goal 3: Reframe the existing workflow to make it more efficient

Before

Concerns:

1⃣ Several repeated features appear in different sections, and users are confused about what effect each control produces.

2⃣ Users do not know what they can search in history records. Property keywords? Or specific numeric results?

After

Improvements:

1⃣ Determined which functions genuinely needed to appear in multiple sections of the page, and differentiated what users could adjust at each testing stage. Simplifying the structure gives users a sense of control.

2⃣ Easier tracking allows developers to pinpoint opportunities for product improvement when users encounter problems in usage.

Design

From Workflow to UI Flow
What users will see on screens vs. What users will do

My primary question in this stage was:

How can I ensure that tasks are displayed through screens in a more intuitive and communicative way?

I found it helpful to map out "what users will see" and "what users will do" to anticipate gaps in users' understanding, reactions, and intentions at each step of formation testing, and to trace the actions and triggers between users and the screen.

Prototype
Challenge 1: Define data display for main testing process

The dashboard provides a holistic view of the data used for testing; however, users can easily get lost right after they log in to the main page.

My goal here was to ensure the dashboard presents the most necessary data to reduce confusion among different formation properties.

Iteration 1

✅Integrated the testing data monitoring dashboard with testing tool selection options to help interpret results

✅Clearly differentiated and displayed formation and testing tool features

❌Didn't prioritize other sub-layer analysis results

❌Tried to adopt the law of proximity in the tool performance section, but failed to maintain consistency :(

Iteration 2

✅Removed the tab bars to avoid unnecessary representations

✅Clarified and reorganized formation and testing tool features

❌Didn't make it easy to personalize testing tool add-on features

Iteration 3

✅Maintained visual consistency by adopting the UI component library

✅Prioritized results and functions across different channels

✅Reduced complexity by facilitating more intuitive interactions

What's improved

View Properties

Add Testing Tools

Challenge 2: Tool performance comparison

I focused on designing a specific section of the dashboard. My goals were to guarantee that all comparative properties would be fully displayed on this page, and to allow related functions to work compatibly with case data.

Iteration 1

✅Rearranged properties by classifying them into formation properties and corresponding tool performance

❌The sort function for choosing a preferred property display order was hidden within the properties, with no clear indication that it was clickable

Iteration 2

✅ Maintained visual consistency in case selection and reduced duplicate interactions by letting users select multiple objects at once

✅ Gave a clearer indication of the sort function

❌ Failed to show users which options they'd chosen, forcing them to compare properties by scrolling back and forth across the display

Iteration 3

✅ Kept the information hierarchy consistent with other sections

✅ Avoided having too many icons listed together, which had made users feel overwhelmed

✅ Provided informative indications after users chose sorting options

What's improved

Sort Data
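To illustrate the interaction model behind these iterations, here is a small sketch of the comparison view's state. The shape and names are hypothetical, not the shipped code: selected cases live in a set so multiple objects can be picked at once, and the active sort key is tracked explicitly so the UI can render a visible indicator next to it.

```typescript
// Hypothetical state model for the tool performance comparison view.

interface Case {
  id: string;
  name: string;
  properties: Record<string, number>; // e.g. { porosity: 0.21, ... }
}

interface ComparisonState {
  selectedIds: Set<string>;  // multi-select instead of one-by-one picking
  sortBy: string | null;     // property key driving the sort order
  sortDirection: "asc" | "desc";
}

// Return the selected cases ordered by the chosen property, so the UI
// can highlight `sortBy` instead of making users scroll back and forth.
function sortedSelection(cases: Case[], state: ComparisonState): Case[] {
  const sign = state.sortDirection === "asc" ? 1 : -1;
  return cases
    .filter((c) => state.selectedIds.has(c.id))
    .sort((a, b) =>
      state.sortBy === null
        ? 0
        : sign * (a.properties[state.sortBy] - b.properties[state.sortBy])
    );
}
```

Keeping selection and sort order in one explicit state object is what makes the "informative indication" from iteration 3 cheap to render: the indicator is just a view of `sortBy` and `sortDirection`.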

Next Steps

Critiques + Testing

To maintain consistency across different work sites and enable the Formation Testing Dashboard to keep building on this iteration, I gathered engineers, developers, and designers from other teams to give feedback on the current design solutions.

In addition, I attended usability meetings to better understand global users' behaviors. At this stage, I mainly paid attention to device types, bounce rate, and button clicks.
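As an example of how a metric like button clicks might be captured, here is a generic instrumentation sketch; the event shape and the `sendToAnalytics` transport are assumptions for illustration, not the team's actual analytics setup.

```typescript
// Generic click-instrumentation sketch with assumed names.

interface ClickEvent {
  button: string;      // e.g. "sort-properties", "add-tool"
  deviceType: string;  // inferred from the user agent
  timestamp: number;
}

function sendToAnalytics(event: ClickEvent): void {
  // Placeholder transport: a real dashboard would POST to a metrics
  // endpoint or use its platform's telemetry SDK.
  console.log("analytics:", JSON.stringify(event));
}

function trackClick(button: string): void {
  sendToAnalytics({
    button,
    deviceType: /Mobi|Tablet/.test(navigator.userAgent) ? "mobile" : "desktop",
    timestamp: Date.now(),
  });
}
```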

Reflection

1. Don't be too worried if you are new to a particular field. Ask, ask, ask: Never feel ashamed to ask other teams questions; being open about your knowledge gaps lets your mentor and teammates know what you lack and enables them to give you more targeted support.

2. Communication is the key to:

Engage users in the process: work smart and know where to find the resources you need. It can be hard to reach real users, especially when working on products with a high expertise threshold. Look into employee profiles, talk with the people you work with, and learn about their backgrounds: a back-end engineer or a QA can be a potential user, since they may have abundant fieldwork experience!
Knowing where you are on the roadmap helps you prioritize tasks in each sprint: once designers figure out the MVP, development always takes longer than design. Communicate with the PO and scrum master; even a big-picture view of the quarter's business goals will give you a clearer mind when prioritizing tasks for the next sprint.

3. Physical artifacts can provide a better sense of how things work. Contextual Inquiry was a great method that helped me better understand how tool strings work in the field. In particular, pictures of these tools and diagrams of how they were assembled often came to mind while I was doing design work in the later stages. This helped me integrate my design ideas more cohesively with the tools and technology we worked with.

4. Some reflections on making the job map for this case:

Mental Model: Based on the use-case scenarios provided by the engineers, our goal was to deliver an MVP of the product. Building a job map and then connecting it to the UI flows directly reflects both the functional steps and the contextual actions users take to achieve their end goals.
Iteration: A job map is flexible and can be iterated on as more details emerge. In particular, new jobs can be sorted into previously designated categories without disrupting the experience flow.

Thank you for visiting my website.