
Role: I founded, organized, and managed a group of volunteer UI/UX designers while also working alongside them as a UI/UX designer through a beta process. We started as a group of about 10 and ended with a core of six who collaborated consistently week to week.
Project Timeline: I designed a five-week structure for the project, using a five-part design thinking model as the foundation for weeks one through four. During weeks one through four, we held stand-ups via Zoom to share and discuss progress and to plan for the following week. Week five was designated for presenting and discussing case studies developed from the project, and week six was added to let us wrap up loose ends and debrief. Our first five weekly meetings were open for others to drop in on as a learning opportunity.
Week 1. Planning and Discovery (Empathize)
Week 2. Discover and Validate (Define)
Week 3. Design (Ideate)
Week 4. Test and Validate (Prototype/Test)
Week 5. Present/Discuss Case Studies
Week 6. Wrap-up & Debrief
Project Goals:
- Design and validate a minimum viable product solution.
- Build my experience and help other designers build theirs.
- Practice product management.
- Practice collaborating with other designers.
Constraints:
- We constrained the project to five weeks.
- In week four, it became clear that we would either stop short of our intended goal, a prototype of a minimum viable product plus usability testing, or need to extend into a sixth week. We agreed to add a sixth week for the core group members to wrap up and debrief.
- We were all volunteers.
- As volunteers, we varied in commitment level and availability.
- About 50% of the volunteers stuck with the process from beginning to end.
- The other 50% dropped in on meetings or participated at some point but dropped out before the project was complete.
- We had differing levels of experience and education.
- This occasionally meant investing more time in teaching and discussing the process than in actually engaging in it.
- We had zero budget, but I spent $107 out of pocket.
- We shared the screener survey, without offering an incentive, through various channels, including Facebook, Twitter, LinkedIn, Instagram, Craigslist Community, and Reddit, but did not get enough responses to find five user interview candidates who met our criteria.
- I researched various user interview recruiting sites, but in the end the most affordable option was a paid ad in Craigslist Gigs ($7), with a $20 incentive for each person selected for and completing a user interview ($100 total).
- The project was free for all participants; however, two of the other volunteers contributed to help defray the expense incurred for user interviews.
Outcomes:
We established the validity of the problem space through user interviews.
We improved the ease of use and intuitiveness of the campsite booking process.
Prototype: Figma Lo-Fi Prototype
Tools: Figma, FigJam, Miro, Calendly, Zoom, Google Drive, Google Docs, Google Forms, Google Sheets, Discord, Slack
Getting Started
At our inaugural meeting, I instituted a democratized process to identify the problem space we wanted to work on. I introduced a list of possibilities, then the group brainstormed additional ideas. We narrowed the list to six primary ideas, then voted it down to two options: a mentoring app or website, or something in the campsite booking and recreation space. We decided to forge ahead with the campsite booking/recreation option, the rationale being that, with the majority of the group located in the Pacific Northwest, where camping is a very popular activity, we might have an easier time recruiting research participants. We named our group Happy Campers, with no affiliation to any other group or site of the same name.
Research
During our first meeting, we developed a research plan with target dates for deliverables, which included:
- Secondary Research
- Competitive Research
- Screener Survey
- User Interview Script
- User Interviews & Notes
Designers were encouraged to volunteer for methodologies and deliverables that interested them, whether those were existing strengths or areas they wanted to strengthen. We organized the volunteers via sticky notes in a shared FigJam file.
We also loosely planned what synthesis activities we might conduct the following week.
The secondary research and competitive audits were completed asynchronously by other designers during weeks one and two. I have omitted details and images of those deliverables from the case study as I did not have a direct hand in their creation.
During the week, I developed an initial draft of the screener survey to recruit user interview candidates, then refined it with input from two other designers, resulting in a seven-question survey in Google Forms.
We identified that our target users:
- Were 18 or older
- Lived in the Pacific Northwest
- Searched for and/or booked campsites multiple times a year
- Were anywhere from very dissatisfied to satisfied with their previous experiences of searching for and/or booking campsites
- Camped multiple times a year
- Were interested in being interviewed
Two other designers and I developed a 13-question user interview script, beginning with a couple of warm-up questions to get the candidate comfortable and talking, then narrowing in on their experience of searching for and booking campsites and their priorities during that process.
Uphill Battle (Still Research)
During our second meeting, we presented what we had accomplished in week one, including the challenges we faced recruiting participants for user interviews. At this point, we were forced to extend the research phase by another week.
When we still didn’t have enough user interview candidates within the following two days, I researched possible recruitment avenues beyond those we had already tried and posted an ad to Craigslist Gigs in the Portland metro area, offering an incentive to those selected for and completing user interviews. That increased our response dramatically, giving us more than enough qualified candidates to choose from for our target of five user interviews. We sent our selected candidates Calendly links with our availability, along with a consent form to be returned prior to the interview.
We conducted a total of five remote user interviews via Zoom, recorded with the consent of the participants. I conducted four of the interviews and another designer conducted one of the interviews. Other designers served as note-takers either during the interviews or from the recordings afterwards.
Getting Somewhere (Synthesis)
During week three’s meeting, we presented brief recountings of the user interviews, identifying key findings and quotes. We discussed which synthesis activities we still felt were important and engaged in real-time affinity mapping in Miro. We did a couple of different sorts and treated the final sort as the basis for our user stories.
Those who were able to meet during the week worked on user stories and problem statements.
Between our week three and week four meetings, I designed a lo-fi wireframe of what a home screen for our app might look like. Another designer created a site map of what they envisioned.
Humble Beginnings
During our week four meeting, we presented both the site map and the lo-fi wireframe, then discussed how to proceed with designing our solution. The designer of the site map volunteered to iterate on the site map and home screen to marry the two ideas. Five other designers, myself included, volunteered for various aspects of the solution, each focused largely on a particular screen of the experience.
Closing Time
Week five was divided between presenting an early working version of a case study based on the project, discussing our progress on the designs, and planning our final week of work.
The final week, I cleaned up and unified our designs into a cohesive product, then created a prototype in Figma. I developed a short usability test script and conducted three usability tests.
I prepared a short Usability Report based on task success/failure.
Happy Campers Usability Report
3/2/22
Methodology:
Usability testing consisted of three participants completing remote, moderated tests via Zoom. There were two female participants and one male participant. All three were Android users. Two were retired; one was in his 30s.
| Task | Success | Partial Success | Failure |
| --- | --- | --- | --- |
| Search by date or location | 3 | 0 | 0 |
| View Campground Details | 1 | 2 | 0 |
| Go to Home | 3 | 0 | 0 |
| Last-minute availability search | 0 | 0 | 3 |
| Cancel reservation | 3 | 0 | 0 |
| Current location search function | 1 | 0 | 2 |
Partial Success
View Campground Details: The users who struggled with this task were unclear how to get to the details from the search results screen in map view.
Failure
Last-minute availability search: None of the three users successfully located or used the last-minute availability feature.
Current location search function: The two older users were unsuccessful in locating/using the icon for current location.
The usability report shows a fair amount of success, but also some concerning partial successes and failures. If we were to continue working on the project from this point, I would identify our immediate next steps and priorities as:
- Conduct at least two additional usability tests to gather more data.
- Iterate on the last-minute availability search.
- Iterate on the current location search function.
- Iterate on the campsite details screen.
- Conduct another round of usability tests.
Lessons Learned
Later in the project, I did competitor research of my own and discovered an app called The Dyrt. Had I discovered it early on in the process, I would have proposed that we conduct an unsolicited redesign of The Dyrt app for our project, as I think that would have given us a more concrete direction and allowed us to narrow our focus.
It would have been helpful to gauge each participant’s level of experience and knowledge of UX going in, so that I could more effectively pair learners earlier in their UX journeys with more experienced folks.
Recruiting and scheduling user interviews was challenging. The democratized process at the front end of the project was important and helped everyone buy in from the beginning. However, it also created a time crunch, which led to spending money on recruitment and conducting user interviews in less-than-ideal settings. In future iterations of the process, I might consider selecting a project and doing some initial recruitment for user interviews before establishing the group of volunteer participants.
Participants in the beta provided valuable feedback. The resounding theme was that the group and our meetings provided a safe space to ask questions and learn. The participants who contributed from beginning to end all said they would be interested in participating in another iteration of the process in the future.