ATTENTIVE CONCIERGE REDESIGN
Timeline:
April - June 2022
My Role:
Lead Designer and Researcher
Team:
Product Manager, Tech Lead, FE and BE Engineers, Lead Data Analyst, Enablement Associate (Conversational)

Context
In August 2021, Attentive launched Concierge, a service that allows SMS marketers to reinforce their customer service with live text agents. The service is powered by a bespoke web app that enables Attentive's 200+ internal agents to handle requests in the "queue".
Feedback from Q1 2022 revealed that the Concierge queue UI and information architecture were blocking agent productivity and creating a frustrating experience for agents. Specifically, it was cumbersome for agents to toggle between company information and customer information, and the keyboard shortcuts intended to speed up the process were not reliable.

Former Concierge Queue Design (Q1 2022)
THE TASK
I needed to design a solution that streamlined workflows in the platform, enabling agents to quickly access the information they needed and respond as fast as possible.
The solution had to be front-end only and buildable in one sprint. We also needed a testing plan that would ensure the new design did not disrupt the agent process, plus buy-in from the whole agent team before releasing it for general access.
How might we optimize the queue design to maximize agent efficiency?
Goal: Increase agent efficiency and capacity
KPI: Average Response Duration (ARD)
Benchmark: 2.5 min
Impact:
18% Reduction in ARD
DISCOVERY
To understand how to optimize the queue, I first needed to completely understand our agents’ environment, processes, and pain points. For an agent handling hundreds of topics across verticals, brands, and subscribers, what information do they need the most?
Hardware Analysis
Goal: Understand Concierge agents' devices and how they prefer to interact with them
Tools: Fullstory, Google Forms
Participants: 34 Concierge Agents
Timeline: 1 day
The queue design needed to fit the agent environment, so I pulled some quick insights on how our agents interact with their hardware. I used Fullstory to do a screen resolution breakdown for our agent team and conducted a survey to understand how agents navigate the queue with their mouse and keyboard.
We found that most agents access Concierge on large HD screens (1920 x 1080), which contextualized agent feedback moving forward and was a helpful reference as I worked with our front-end engineer on column widths later on in this process. We also learned that agents prefer to navigate with their keyboard, but keyboard-accessible actions are not much easier than mouse-only actions. It appeared that we should explore keyboard navigation opportunities in a future project (we did!) but not prioritize it immediately.
Observation & Interviews
Goal: Watch the agent process in real-time and fully understand their points of friction
Tools: Usertesting.com, Zoom
Participants: 10 Concierge Agents
Timeline: 1 week
I created an unmoderated Usertesting.com study to observe 10 agents as they handled real subscriber conversations during a 20-minute portion of their shift. I also conducted 30-minute generative interviews with the same agents to ensure the new queue design prioritized the right information. I hoped to understand:
- What is the typical user flow for handling a conversation?
- Which workflows in the queue take the most time? Why?
- Which types of questions take the most time? Why?
- What about the Concierge queue works well for agents?
- What challenges do agents face as they navigate the queue?
- How might pain points be resolved by design? Product features? Training?
My product manager and I worked together to review the recordings, take notes on trends in process and friction, and sort the trends by type (General note, Feature opportunity, UX Gap, Note for ops).
I used the insights from our discovery research to map out a general user flow. As agents handle conversations in the queue, they complete three primary workflows: Intake, Research, and Composition.
Takeaways
1. Off-platform research is our agents’ most challenging and time-consuming process.
Matete Mahao, Concierge Trainee
“Product-specific questions are KILLING US because quite a few of the sites are wonky, too. There are some which are region-locked making navigating them a pain.”
Product inquiries are time-consuming because agents have to harvest information from off-platform sources not designed for them, primarily a brand's website. A typical conversation takes agents ~1 minute to handle, but inquiries that require searching a brand's website take 2-3 minutes on average. We observed that when agents need to research product or brand information, most go straight to the company website, not the template library in the queue. Agents tend to search templates only when they know a relevant template exists.
Agents were in a pickle: a brand's website is the most reliable research source, but it's not designed to quickly answer a specific customer question. Templates are designed to do exactly that, but they come with the difficulty of searching the right terms (e.g. discount vs. coupon) and the risk of wasting time looking for a template that doesn't exist.
We couldn’t eliminate off-platform research with a UI-only solution, and we needed to do further discovery on template underutilization (we did!), but I moved forward with the hypothesis that the optimal queue design would get agents to the most relevant part of a brand’s website as quickly as possible. I needed to validate whether we should prioritize tools with links, primarily the customer profile, or other information.
2. Our agents need constant access to brand notes.
Bruce Gabriel Nieves, Senior Concierge Agent
“We would like to request it to be frozen so when we scroll down on either the Profile or the Templates tab, the notes part would stay in place. The notes are very important so they should be visible at all times.”
Brand notes are miscellaneous preferences for how agents should interact with a brand's customers (e.g. "Do not recommend products unless requested"). In the MVP design, brand notes were hidden whenever an agent navigated to the Templates panel. Agents moving quickly, especially novice agents, occasionally missed important guidelines and were flagged by Concierge clients.
This was a high priority: my product manager and I defined a requirement to keep brand notes "sticky" in the new design, so guidelines stayed visible regardless of the final layout. Several agents also requested color-coded note sections so they could find the most relevant information without skimming everything, so we specced that as future work (e.g. blocked words in red).
DESIGN
Competitive Analysis
As I iterated on various layouts, I referenced other customer service platforms to see how they displayed conversation and customer data. I kept in mind that most CX platforms are designed for an agent who is an expert on one brand, not hundreds. Most platforms had a 2 or 3-column layout and still required some clicking to show or expand information. I was particularly inspired by Zendesk's 3-column layout, which had modular elements that could be expanded or collapsed. I also noticed that previous messages were easier to skim when they were all left-aligned.
Hi-Fi Wireframing
I began to design, exploring solutions for making information more readable and accessible, particularly product information and links. I wireframed different queue layouts, working in hi-fi for speed. I planned to design several layouts to concept test with agents in a survey. I explored different approaches to simplifying the information architecture and making better use of space.

TESTING
Concept Testing
I narrowed all the wireframes down to four primary layouts to concept test with agents. I hoped to understand how much information we could expose before overwhelming our agents and how agents would expect sections to be prioritized in the IA. I designed and distributed a survey to quickly collect feedback on the four general layouts. I also had participants rank how helpful certain queue sections typically are, for insight on how to further refine the design.
Goal: Identify the most important conversation data and understand how to improve access without information overload
Tool: Google Form
Participants: 51 Concierge Agents
Timeline: 3 days


Takeaways
1. Agents prefer a 3-column layout with scrolling sections.
Agents preferred this layout because no clicking was required to view information. There were a few critiques that the right-hand column had a slightly cluttered look and feel, but it was still a clear winner.
Andrija Selakovic, Concierge Agent
“It's my favorite because I see every section at a glance and I don't have to switch to tabs or scroll down that much.”
2. Agents prefer scrolling to clicking.
Agents prefer a layout that shows all information at once, even if it has to be condensed. Scrolling a condensed section is more efficient for them than clicking tabs to view a full section, even if there are shortcuts available for quick tabbing.
3. Agents want the UI to be equally clean and comprehensive.
The 3-column layout is the sweet spot that balances cleanliness and comprehensiveness. In the 2-column layout, the one most similar to the MVP, information feels hidden to agents. In the 4-column layout, the screen feels too cluttered.
4. Brand notes are agents’ highest priority, only slightly more important than templates.
Only a slight majority of agents reported that notes are more relevant than templates in a conversation, but we could not reserve a full column for both, so one was guaranteed to be less readable than the other. Our discovery had revealed that agents rarely use templates for research (the most time-consuming workflow) and that agents do not scroll to find templates, but there were still risks to making the template library harder to scroll. Agents might become even more reluctant to use templates, spend more time on manual off-platform research, and prompt fewer of the new templates that would save them that work, a worrisome cycle. Additionally, the 31.1% of agents who search templates with their browser's find function (Ctrl + F) would have a harder time reviewing search results. We needed more testing to understand the path forward.

REFINEMENT
A/B Testing
Goal: Optimize the 3-column layout.
Tool: A/B test (3 Variants)
Participants: 20% of the Concierge agent team
Timeline: 3 days per variant
KPI: ARD
Benchmark: 2.23 min
I moved forward with the 3-column layout with scrolling sections, making some tweaks to the UI and working with our front-end engineer to define breakpoints for the column widths. My product manager, agent ops leadership, our FE engineer, and I developed a plan to A/B test the new design against the MVP to ensure the changes improved ARD and did not introduce new inefficiencies.
We opted to test 3 solutions against the control so that we could quantitatively confirm which section deserved a full column. We didn't have the infrastructure to beta test in the agent platform before this project, so this was a huge process improvement that ensured successful launches and built trust with our agents.
For agility, our FE engineer built modular sections that he could rearrange quickly as we monitored ARD in Looker to identify the design that maximized productivity. We also created a feedback survey for the team and participants to gather qualitative feedback throughout the process, and a Slack channel for bug reporting and general communication.
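The assignment mechanics weren't documented in the case study; a minimal sketch of how a deterministic 20% rollout across three variants plus a control might work (the agent IDs, variant names, and helper functions here are all hypothetical, not the actual implementation):

```python
import hashlib

# Hypothetical variant labels for the three test layouts plus the control.
VARIANTS = ["control", "company-first", "notes-first", "templates-first"]
ROLLOUT_PCT = 20  # share of the agent team included in the test

def in_rollout(agent_id: str) -> bool:
    """Deterministically decide whether an agent falls in the 20% test cohort."""
    digest = hashlib.sha256(f"rollout:{agent_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < ROLLOUT_PCT

def assign_variant(agent_id: str) -> str:
    """Hash the agent ID so each agent always sees the same layout."""
    if not in_rollout(agent_id):
        return "control"
    digest = hashlib.sha256(f"variant:{agent_id}".encode()).hexdigest()
    return VARIANTS[1:][int(digest, 16) % 3]
```

Hashing rather than random sampling keeps assignments stable across sessions, which matters when the KPI (ARD) is monitored over several days per variant.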

Takeaways

Prioritizing the Company section led to a 14% reduction in ARD.
The layout designed to make company information the most readable was the most performant and got the most positive feedback from the agents who participated in the test. We were excited to validate that the 3-Column layout was significantly more efficient than the MVP and hear that agents felt more productive and in control as they worked.
Azra Colic Rahmanovic, Concierge Agent
“I love the new layout, everything is in one place, everything is accessible and almost all of the work is done by glancing and the answer is ready to be typed. I really like it.”
Roel Quaas, Concierge QA Manager
“Love how everything is all on one page. Slightly smaller chat box with a smaller view on what items Cxs have seen, but works fine. Love the extra info such as tone and persona used.”
Feedback on the winning layout revealed some low-lift readability improvements, so we updated the following before releasing the new UI for general access:
- Expanded panel widths and pressure-tested them for responsiveness
- Changed recent dates to relative timestamps (e.g. 5m ago, Just now, etc.)
- Restored the yellow background for notes, as requested by agent trainers
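The exact thresholds we used for relative timestamps aren't recorded here; a minimal sketch of the formatting rule, with cutoffs that are my own assumptions beyond the "5m ago" and "Just now" examples above:

```python
def relative_timestamp(seconds_ago: int) -> str:
    """Format elapsed time the way the queue displays it (e.g. "5m ago").

    Thresholds are illustrative: under a minute reads "Just now",
    then minutes, hours, and days.
    """
    if seconds_ago < 60:
        return "Just now"
    if seconds_ago < 3600:
        return f"{seconds_ago // 60}m ago"
    if seconds_ago < 86400:
        return f"{seconds_ago // 3600}h ago"
    return f"{seconds_ago // 86400}d ago"
```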
THE IMPACT
18% Reduction in ARD
The final design brought ARD from 2.23 min to 1.82 min in the first month of being live and received overwhelmingly positive agent feedback. We delivered a design that gives agents better access to their tools and sets them up for success.
Azra Colic Rahmanovic, Concierge Agent
“It is 100% easier than the old one which actually took a lot of screen space with all the boxes around. I LOVE the new layout and everything about it. Makes my life easier. And calmer shifts.”

North Star Vision
I used the findings from our research and testing to develop a north-star vision for the queue. I explored readability improvements like a more scannable conversation thread and usability improvements like URL preview cards.
I hypothesize the most impactful tool for agents would be a search function that could parse a company’s product catalog, FAQ, coupons, and our platform’s templates at once. Ideally, the tool could identify synonyms (e.g. discount vs. coupon) and correct typos as well. A “knowledge base” like this could allow agents to search multiple databases in one go and see all the information available to them without having to take a gamble or duplicate workflows.
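As a thought experiment, the core of such a knowledge base could be query expansion applied across several sources in one pass (the synonym groups, source names, and functions below are illustrative assumptions, not a spec for the envisioned tool):

```python
# Hypothetical synonym groups; a real tool would curate or learn these.
SYNONYMS = [{"discount", "coupon", "promo"}, {"shipping", "delivery"}]

def expand(query: str) -> set[str]:
    """Expand each query term with its known synonyms (e.g. discount -> coupon)."""
    terms = set(query.lower().split())
    for group in SYNONYMS:
        if terms & group:
            terms |= group
    return terms

def search(sources: dict[str, list[str]], query: str) -> list[tuple[str, str]]:
    """Search every source (catalog, FAQ, coupons, templates) in one pass,
    returning (source, entry) pairs whose text matches an expanded term."""
    terms = expand(query)
    return [
        (source, entry)
        for source, entries in sources.items()
        for entry in entries
        if terms & set(entry.lower().split())
    ]
```

With this shape, an agent searching "discount" would surface a "coupon" template without guessing the right term or checking each source separately.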
While we haven't realized this vision yet, my product managers and I looked toward the north star as we moved forward and milestoned future improvements to the queue. Some solutions, such as browsing history and templates-panel improvements, were prioritized and built later on.
