In September 2023, Whycatcher was invited to join Strat7’s Brilliantly Connected Conference to run a day-long, conference-wide series of WhatsApp tasks organised into question sections. Each section was designed to be answered at the close of a workshop, and the conference would finish with a presentation using AI analysis of the data gathered throughout the day.
This conference posed several novel challenges for the Whycatcher team.
- Scheduling: Each workshop would end with a unique set of questions. Respondents should not start any section of questions until the workshop had finished.
- Discussion groups: In certain workshops, attendees would break out into discussion groups. Each group would respond to the questions collectively, so only one person from each group needed to respond. Should anyone in a discussion group disagree with the rest of their group, they should still be able to respond to the questions independently.
- AI and the follow-up question: Whycatcher should present some portion of the findings from the day using AI analysis. This portion should include a quantitative slide for a multi-choice question and a follow-up qualitative analysis of respondents’ reasons for choosing the option with the highest score.
- Word count: As a game, attendees should be able to guess the total word count at the end of the day, and the winner should be announced at the close of the conference.
In addition to these, Whycatcher had the more typical challenges of segmentation, question optimisation, and monitoring participant responses. We had never come across such a complex puzzle – and we were eager to smash it.
The first piece to tackle was the question scheduling. Each workshop was set to begin and end at designated times – but, as any conference attendee can tell you, that doesn’t always mean they do. Conference speakers often run under or over by a couple of minutes. Whycatcher’s pre-scheduled questions, however, always go out exactly on time. As we wanted Whycatcher to work flexibly around the conference’s schedule (and not the other way around!), we decided pre-scheduled tasks wouldn’t work.
Instead, we added extra instruction text at the beginning and end of each section to signpost exactly where in the conference day the respondents were. Each new task began with the title of the section and the number of questions in that task. At the completion of a section, the respondent would receive bold, capitalised text telling them to “STOP” and await further instructions from the speakers. Additionally, we asked speakers to include a slide at the end of their section signalling to respondents that they should begin answering the next set of questions.
With the question of scheduling resolved, we turned to the problem of the discussion groups. This proved a much easier conundrum. First, we set every question in the task as optional, so respondents could skip past – or answer – any questions they needed to. Next, we added instruction text at the beginning of each discussion-group task briefly laying out the parameters and informing participants of their collective responsibility and individual options. This allowed participants to respond only to the questions they felt were relevant and skip past the ones that would make their group’s responses superfluous.
AI and the follow-up question
AI analysis proved a slightly trickier feat. While the Whycatcher team had been extremely excited to begin playing around with our very own Whycatcher AI in-the-moment analysis, we were still learning the best approach to question phrasing and data collection. The specific questions we were asked to analyse posed an interesting problem.
Most questions in a typical task act as a closed system. Take a question like “What is your favourite dog?”: so long as the reader understands what a dog is, they will be able to understand the answer. The question, in other words, contains all the information needed to interpret the response. This is not the case with all task questions. A follow-up question asking “Why is that your favourite dog?”, for example, requires the reader to already know what the participant’s favourite dog is in order to fully understand the response. For most people, this is hardly an issue – we know from context to read the response to the first question in order to interpret the second. For our AI, however, things are not so simple. The Whycatcher AI interprets each question as a closed system and does not know to reference a previous response in its analysis. In short, for AI, follow-ups are quite the problem.
We were asked to supply quantitative analysis of a multi-choice question, followed by qualitative analysis of a follow-up question asking participants to elaborate on their previous response. Further, we needed to use AI to produce this analysis within an hour of receiving the data. We wanted to make sure the information we were getting out of the AI was not only factual, but entirely relevant to the question at hand, and that it recognised the context of our participants’ responses.
We decided the best course of action would be to have respondents repeat their quantitative answer within the response text of their follow-up. We added a short instruction at the bottom of the follow-up question giving an example of how such an answer would incorporate this text. Our respondents did a fantastic job of following our example, which allowed us to unpick our findings from the AI quickly and effectively.
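As a rough illustration of the workaround described above (all names and data here are hypothetical, not part of the actual Whycatcher platform), the idea is that once each follow-up answer carries the participant’s multi-choice answer inside its own text, every response becomes a closed system that can be analysed on its own:

```python
# Illustrative sketch only: names and data are hypothetical,
# not the actual Whycatcher implementation.

def bundle_follow_up(choice: str, follow_up: str) -> str:
    """Fold a participant's multi-choice answer into their follow-up text,
    so the follow-up reads as a closed system for AI analysis."""
    return f"I chose '{choice}' because {follow_up}"

def follow_ups_for_top_option(responses: list[dict], top_option: str) -> list[str]:
    """Keep only follow-ups whose quantitative answer matched the
    highest-scoring option, each bundled with that answer for context."""
    return [
        bundle_follow_up(r["choice"], r["why"])
        for r in responses
        if r["choice"] == top_option
    ]

responses = [
    {"choice": "Labrador", "why": "they are friendly and easy to train"},
    {"choice": "Poodle", "why": "they don't shed"},
    {"choice": "Labrador", "why": "they are great with children"},
]

bundled = follow_ups_for_top_option(responses, "Labrador")
# Each string now carries its own context, ready for qualitative analysis.
```

In practice, our respondents did this bundling themselves by following the instruction text, which is why the AI could interpret each follow-up without referencing the earlier question.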
Word count
Not much of a problem! Whycatcher’s nifty coding team got right on it and added an up-to-date word count to our task dashboard page. By the end of the day, we’d totalled over 12,000 words – and awarded a funky prize!
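The running total itself is simple to picture. A minimal sketch (purely illustrative; the real dashboard feature is internal to Whycatcher) just sums whitespace-separated words across all response texts as they come in:

```python
# Illustrative sketch of a running word-count total across responses;
# not the actual Whycatcher dashboard code.

def total_word_count(responses: list[str]) -> int:
    """Sum whitespace-separated words across all response texts."""
    return sum(len(text.split()) for text in responses)

responses = [
    "I chose 'Labrador' because they are friendly",
    "STOP and await further instructions",
]
print(total_word_count(responses))  # → 12
```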
On the day of the conference, our team swung into action. Throughout the day we monitored participant response rates and the time spent answering questions, and ensured the AI was functioning as it should. At the close of the conference, we were able to present quantitative and AI analysis of our two key questions to the attendees. Further, we’d managed to get our AI to mock up a hip-hop rap about the key priority area and sing it to us.
One of the many great things about taking part in this conference was the chance to ask our participants for feedback, helping us continue to build a better and more user-friendly system. We reached out to friends and colleagues from the conference to ask about their favourite – and least favourite – parts of using Whycatcher. Here’s some of what we heard:
Generally very easy to use and worked well. The AI summary was produced very quickly and was interesting/useful.
I think it would’ve been good if I could’ve skipped past blocks of questions (e.g. when someone else entered the feedback for my team). I ended up way behind on the questions towards the end and ran out of time to complete everything!
It was quick and easy to do. Didn’t have to “login” or get timed out of the exercise at any point.
And one from a conference head:
Whycatcher allowed us to turn what would normally be a learning experience for our attendees into a learning opportunity for our senior conference team. Throughout the day, participants found it easy to respond to each speaker’s content via WhatsApp, providing us with rich in-the-moment commentary. Presenting the key findings back to participants on the same day had a really big impact.
Whycatcher is keen to find new ways to grow and develop our platform. We had so much fun getting to stretch our digital legs at the Brilliantly Connected Conference and are so excited to find new and unique ways to challenge our tools and do some good hard research.