From constant listener feedback to continuous content improvement in radio

Hundreds of station managers, content directors, programme directors and show hosts are starting a new radio season these days. It’s the time of the year for repositioning stations, launching new shows, trying new content, or replacing hosts. For a few weeks, or even months in some cases, there will be no way of assessing how those adjustments are working. Some will think that total freedom during this short time is good for creativity. Others would prefer not to go through such a long period of uncertainty.

Waiting for the next ratings?

When the next wave of ratings comes with bad news, the complaints will begin. Today I’m not going to list all the common criticisms of audience measurement systems. Instead, I will summarise what is probably the essence:

  • The frequency and granularity of both surveys and diaries are insufficient. Insights from these methodologies are not deep enough to conclude with certainty what is performing well or badly. On top of that, they arrive when it’s already too late.
  • Because they rely (directly or indirectly) on recall, they measure the strength of a brand, not engagement with the content.

Focus groups and auditorium tests are research methodologies more specifically aimed at studying content engagement. Neither of these solves the issue of insufficient periodicity, for cost reasons.

Continuous and granular insights

Markets with PPM (Portable People Meter) or equivalent systems got much closer to measuring listener engagement. By automating diary collection through an audio footprint, PPM measurement can deliver figures daily, with minute-by-minute depth.

Surprisingly, the effects of achieving that continuity in audience insights, broadly demanded by radio makers and broadcasters, have not been as beneficial as expected.

Fred Jacobs, a vastly experienced radio consultant, explains in the article PPM Turns 10 – Celebration Or Regret? the negative impact that overreacting to those “EKG-like lines that dipped for commercials, new music, and DJ talk” had on radio. A friend who was a programme director in Los Angeles when PPM was introduced told me how scary radio programming suddenly became. Professionals were not ready to see audience drops and gains, quarter by quarter, every day. Especially in those formats or stations where losing a few panellists implied a life-threatening audience loss.

A framework for continuous content improvement in radio

The startup I co-founded, Voizzup, helps radio stations evaluate on-air content daily through data analysis. Guess what? We also provide radio professionals with EKG-like lines that decline during commercial breaks or overlong DJ talk. The temptation to rush to conclusions and act promptly understandably appears with data analysis too, just as Fred Jacobs described for PPM. Approaching continuous insights with self-imposed urgency, no matter where they come from, can cause more damage than benefit. At Voizzup, we have learned that constant listener feedback alone is not enough to bring continuous content improvement to radio. It requires a framework that includes these elements:

Technology

Voizzup collects spontaneous reactions from thousands of listeners during natural listening on the mobile apps of radio stations, non-stop. Millions of events (play, pause, stop, volume change, etc.) are cross-referenced minute by minute with the content log, extracted from the station’s playout system and script editor. We deliver the processed results to the show team daily, turned into comprehensible performance indicators for evaluating on-air content through an intuitive dashboard.
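To make the idea of crossing events with the content log more concrete, here is a minimal sketch in Python with pandas. It is an illustration only, not Voizzup’s actual pipeline: the column names, event types and sample data are assumptions, and it simply counts active listeners per minute and attaches the on-air element running at that time.

```python
import pandas as pd

# Listener events from a station's mobile app (invented sample data).
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2023-09-04 07:00:10", "2023-09-04 07:01:05",
        "2023-09-04 07:02:30", "2023-09-04 07:03:15",
    ]),
    "listener_id": ["a", "b", "a", "c"],
    "event": ["play", "play", "stop", "play"],
})

# Content log from the playout system / script editor: start of each element.
content_log = pd.DataFrame({
    "start": pd.to_datetime(["2023-09-04 07:00:00", "2023-09-04 07:02:00"]),
    "element": ["News headlines", "Song: Example Title"],
})

# Count unique listeners with a "play" event per minute.
plays = events[events["event"] == "play"].copy()
plays["minute"] = plays["timestamp"].dt.floor("min")
per_minute = (plays.groupby("minute")["listener_id"]
              .nunique()
              .rename("active_listeners")
              .reset_index()
              .sort_values("minute"))

# Attach each minute to the on-air element running at that time.
content_log = content_log.sort_values("start")
indicators = pd.merge_asof(per_minute, content_log,
                           left_on="minute", right_on="start")

print(indicators[["minute", "element", "active_listeners"]])
```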

Methodology

Spontaneous and unconscious feedback from listeners is constant. This enables the team to introduce learning loops in their workflows. After every show, they see listeners’ reactions to the content, minute by minute, contextualised with the audio of the show. The best- and worst-performing elements of the day are identified and ranked. Instead of impulsively removing the “bad” ones or increasing the exposure of the “good” ones, the show team is able to continue the observation over time in order to find patterns, as sketched below. They can come up with theories after a few days, rather than rushing decisions pushed by a non-existent urgency.
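As a hedged illustration of that “observe over time” habit, the following sketch averages each element’s daily score over several days and only marks elements for discussion once they deviate consistently; the elements, scores and thresholds are invented for the example.

```python
import pandas as pd

# Daily performance scores per on-air element (1.0 = show average), invented.
daily_scores = pd.DataFrame({
    "date": pd.to_datetime(["2023-09-04"] * 3 + ["2023-09-05"] * 3 + ["2023-09-06"] * 3),
    "element": ["Phone-in", "Sports quiz", "Weather banter"] * 3,
    "score": [0.92, 1.10, 0.85, 0.95, 1.05, 0.88, 0.90, 1.12, 0.83],
})

# Average over the observed days instead of reacting to a single day's ranking.
summary = (daily_scores.groupby("element")["score"]
           .agg(mean_score="mean", days_observed="count")
           .sort_values("mean_score"))

# Only elements that deviate consistently get flagged for discussion.
summary["flag"] = "keep observing"
summary.loc[summary["mean_score"] < 0.90, "flag"] = "review"
summary.loc[summary["mean_score"] > 1.05, "flag"] = "consider expanding"

print(summary)
```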

Let’s stop for a second at a simple example. A music programmer or a music director finds out through the dashboard that a specific song is performing worse than average. What should she or he do? Remove the song from the playlist or decrease its rotation because it doesn’t test well? Wouldn’t that be the type of overreaction we mentioned above? It would also be an oversimplification.

There are multiple factors affecting the performance of a song: the genre, the tempo, the mood, the era, the language, the sequence (position in the clock), the duration, etc. These are only internal variables; there are external ones as well: seasons, holidays, weather, prime-time TV, events, etc. The same happens with spoken content, like segments in a morning show. Some internal factors are the topic, the host, the tone, the storytelling, the sound quality of a phone call, etc. You get it, right?
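One way to avoid that oversimplification, shown in the hypothetical sketch below with invented data, is to compare a song only against spins that share the same context, for instance the same daypart and tempo band, rather than against the station-wide average.

```python
import pandas as pd

# Invented spin-level data: each row is one airing of a song with its context.
spins = pd.DataFrame({
    "song": ["Song X", "Song X", "Song Y", "Song Z", "Song X", "Song Y"],
    "daypart": ["morning", "evening", "morning", "morning", "morning", "evening"],
    "tempo": ["up", "up", "up", "down", "up", "up"],
    "retention": [0.78, 0.91, 0.82, 0.74, 0.80, 0.88],  # share of listeners kept
})

# Average retention per song within each (daypart, tempo) context...
per_song_in_context = spins.groupby(["daypart", "tempo", "song"])["retention"].mean()

# ...and the context average to compare against, instead of the station-wide mean.
context_average = spins.groupby(["daypart", "tempo"])["retention"].mean()

print(per_song_in_context)
print(context_average)
```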

Our approach to continuous content improvement encourages radio professionals to first build assumptions rather than rush decisions.

Data science

Assumptions will only turn into actual learning when clearly formulated, tested and confirmed. Together with Voizzup, the show team defines the tests for confirming or rejecting these assumptions. Beyond the dashboard, our technology and data science skills are then applied to these specific exercises. Depending on the formulation of the hypothesis, we choose the most appropriate type of experiment: A/B test, cohort analysis, K-means clustering, etc.
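As one hedged example of what such an experiment could look like, the sketch below runs a simple two-proportion A/B test on listener retention for two placements of a segment; the counts are invented, and a real test would be designed case by case as described above.

```python
from math import sqrt
from statistics import NormalDist

# Invented counts: listeners who stayed through the segment in each variant.
retained_a, total_a = 412, 500   # variant A: current placement of the segment
retained_b, total_b = 455, 500   # variant B: new placement being tested

p_a, p_b = retained_a / total_a, retained_b / total_b
p_pool = (retained_a + retained_b) / (total_a + total_b)

# Two-proportion z-test (two-sided) on the difference in retention rates.
se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"retention A={p_a:.1%}, B={p_b:.1%}, z={z:.2f}, p-value={p_value:.4f}")
```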

Mindset

We have talked about constant listener feedback and experiments to validate or refute hypotheses as a learning cycle. Nothing new, really. By having listener feedback always at the centre, we are continuously corroborating that listeners care about our content. By working in short cycles, we invest less time and effort. If what we are working on fails, we have learned fast without risking much. If it works, we can progress and improve. This way we create a safe environment that helps us lose the fear of failing; there is no room for panicked decisions anymore. Having user feedback at the centre at all stages of creation and introducing short learning cycles are two common elements in frameworks for incremental improvement.

Through disciplines like Lean, Agile or Design Thinking, teams of developers, UX designers and product managers in some radio organisations are using similar frameworks for continuous improvement, also based on user-centricity and iterative learning. You can read more about this in my article Agile radio, continuous improvement ready.

______________________________________________________________________

If you believe your morning show and the content professionals in your radio organisation would benefit from a framework for continuous content improvement, please contact us.

Copyright © 2023 - Voizzup B.V. - team@voizzup.com