The Eurovision Song Contest 2022: A Look at the Most Complicated Live Broadcast Music Show Ever Staged
This article originally appeared in the June 2022 issue of Professional Sound magazine.
By Michael Raine with Luca Giaroli
On May 14th, after months of preparation by both the event’s crew and musical contestants, Ukrainian rap-folk band Kalush Orchestra were crowned the winners of the 2022 Eurovision Song Contest (ESC) during a spectacular finale that drew a worldwide TV audience of more than 10 million.
That the members of Kalush Orchestra needed special permission to leave their country, which is heroically fighting off a Russian invasion, just to appear at the grand finale speaks to the symbolic and inspiring importance of their victory. “I ask all of you, please help Ukraine, help Mariupol, help Azovstal right now,” lead singer Oleh Psiuk said from the stage as they finished their performance of the winning song, “Stefania.”
Part of the responsibility for making this moment possible – to ensure those words, plus every beat, lyric, and nuance, were heard clearly across the world – fell to Italian audio mastermind Luca Giaroli. The business development and product manager at DirectOut, he was tapped by Italy’s national public broadcaster, Radiotelevisione Italiana (RAI), to be the signal distribution manager and designer for Eurovision 2022.
Because the Italian group Måneskin won Eurovision 2021, this year’s semi-finals and finals were held at Turin’s Palasport Olimpico, the largest indoor arena in Italy. And because Italy was playing host, the European Broadcasting Union (EBU) entrusted the country’s broadcaster with the show.
It’s no small thing to be responsible for how the world hears this famed international musical contest. After all, it’s taken place annually since 1956 and earned a rabid and enthusiastic international following (and let’s not forget that Eurovision brought us ABBA). So, simply put, you don’t want to mess with this brand and legacy.
To Professional Sound’s delight and surprise, just two weeks before the finale and with rehearsals starting, Giaroli said he was more than happy to share his broadcast audio plans for this year’s event. It’s one that a mutual friend called “arguably the most complicated live broadcast music show ever staged,” and that’s a sentiment Giaroli certainly agreed with.
So, from here, we hand it over to Giaroli to tell the behind-the-scenes story of how Eurovision 2022 was brought to fans around the world and the incredible amount of redundancy built into the system to ensure nothing could go wrong…
PS: I know this will be a long answer, but can you take us through your broadcast audio design for Eurovision?
Luca Giaroli: So, RAI Television hired me as designer and signal distribution manager, asking me to satisfy the high level of resiliency required by the EBU. The most important requirement is the first level of redundancy, which must be triggered automatically. So, there must be something that automatically solves the problem without a human being doing anything. That's a challenging thing, especially because you have to do it for both front-of-house (PA, monitoring, and in-ear monitoring) and broadcast. It's a complicated task. Therefore, because RAI has known me for a long time, as we've been in contact for various reasons, they gave me the responsibility for the design and the programming.
In order to achieve that challenging goal of having the first level of redundancy be fully automated, I deployed 13 different DirectOut Prodigy.MP [multifunction audio processors], which offer plenty of tools to guarantee that automatic switching on several fronts. Basically, the Prodigy.MP offers a different level of redundancy for different aspects. For example, clocking — you can have a hierarchy or priority in terms of clocking. So, you can say, ‘If everything goes in this direction, I want to get the word clock from input number one.’ And then if something goes wrong, I can have word clock input number two. And then if that goes wrong, I can have the clock extracted from the incoming MADI or from the incoming network.
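For readers who want to picture the logic, the clocking hierarchy Giaroli describes boils down to a simple priority list with an internal oscillator as the last resort. The Python sketch below is purely illustrative; the source names and fallback order are our assumptions, not DirectOut's actual configuration:

```python
# Illustrative sketch (not DirectOut firmware): a clock-source priority
# list like the one Giaroli describes. Source names are hypothetical.
def select_clock_source(status):
    """Pick the highest-priority clock source that is currently locked.

    `status` maps a source name to True if a valid clock is present.
    """
    priority = ["wordclock_1", "wordclock_2", "madi_in", "network_in"]
    for source in priority:
        if status.get(source, False):
            return source
    return "internal"  # last resort: free-run on the internal oscillator

# If word clock input 1 dies, the device falls back to input 2, then to
# the clock recovered from the incoming MADI or network stream.
print(select_clock_source({"wordclock_1": False, "wordclock_2": True}))
```

The point of putting the hierarchy in configuration rather than in an operator's hands is exactly the EBU requirement above: the first level of redundancy fires with no human in the loop.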
That is an important thing because the design for the Eurovision Song Contest is completely synchronous.
So, NEP is in charge of the big broadcast vans, which are twinned for redundancy purposes. They are distributing the clock to RAI, which is doing the music mix, and then RAI is forwarding that clock to the stage, and then the stage is getting the clock for everything regarding stage racks, monitor console, and front-of-house consoles. So, if everything goes in the right direction, we are all synchronized to the same clock. But if something goes wrong and for any reason the clock should be disconnected somewhere, each portion should carry on, independently generating their own clock, and therefore, an automatic sample rate conversion, bi-directional, will be immediately inserted to prevent glitches or something like that.
So, that's another of the important aspects of the Prodigy.MP, used in all the crucial points of audio exchange, which offers several things. For example, every multi-channel digital I/O, such as MADI or network (meaning Dante or Ravenna or whatever), offers automatic sample rate conversion. So, if everything goes in the right direction, there is no need for sample rate conversion — the signal goes straight and synchronous. But the first time the received signal is out of sync, even if just one sample is out of sync, the Prodigy.MP will automatically insert the fast sample rate conversion to solve the issue. So, that's another important aspect.
Last but not least – of course on top of the fact that we’ve got redundant power supplies and those usual things – is a redundant connection. So, for example, MADI can be used in pairs to have main and redundant. But also, recently, we developed something called Automator, which is a service running inside of the unit that can offer several triggers. And those triggers, combined in Boolean algebra, can trigger different actions. So, if something is lost and the signal is changing (for example, if the fast sample rate converter is required or something), not only is the device reacting automatically, but it's also able to change the settings of external devices: for example, changing the position of a GPO, sending an OSC command, an HTTP command, or a TCP command. So, the Prodigy.MP is not only reacting to internal changes, but is also the means of changing other devices in order to overcome a failure, which can happen at any time.
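Conceptually, the Automator pairs boolean combinations of internal triggers with outbound actions. The sketch below is our own model of that idea; the trigger names and action strings are invented for illustration and do not reflect DirectOut's actual rule syntax:

```python
# Illustrative model of the "Automator" concept: boolean combinations
# of internal triggers fire outbound actions (GPO, OSC, HTTP, TCP).
# Trigger and action names here are hypothetical.
def evaluate_rules(triggers, rules):
    """Return the actions whose boolean condition is satisfied.

    `triggers` maps trigger name -> bool; `rules` is a list of
    (condition function, action name) pairs.
    """
    return [action for condition, action in rules if condition(triggers)]

rules = [
    # If the main MADI link is lost AND the SRC had to engage, flip a
    # GPO and tell an external router to re-patch via OSC.
    (lambda t: t["madi_main_lost"] and t["src_engaged"], "set_gpo_1_high"),
    (lambda t: t["madi_main_lost"], "send_osc:/router/select_backup"),
]
print(evaluate_rules({"madi_main_lost": True, "src_engaged": True}, rules))
```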
So, combining the word clock recovery, the fast sample rate converter, which is automatically enabled if needed, the Automator, and other redundant features that are built into the Prodigy.MP, I could design a very rock-solid system and the Prodigy.MPs are used in all the crucial points where the signals are either generated or redistributed.
For example, starting from the playback rig, we have three Pro Tools rigs. The first one is having a MADI interface and is considered our main. The second and the third ones are equipped with Dante Virtual Soundcard and are using Dante as a second backup and disaster recovery. Those three Pro Tools rigs are connected to two independent Prodigy.MPs, which are each receiving both main and backup and disaster recovery, and automatically select the one that is currently working. If main is working, fine; otherwise, there's the backup and then the disaster recovery.
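The three-tier selection he describes for the playback rigs is a straight priority cascade. A minimal sketch, with our own labels for the three rigs:

```python
# Minimal sketch of the three-tier playback failover described above:
# take main if alive, else backup, else disaster recovery. The labels
# are ours, matching the rigs Giaroli lists.
def select_playback_feed(main_ok: bool, backup_ok: bool, dr_ok: bool) -> str:
    if main_ok:
        return "main (MADI Pro Tools rig)"
    if backup_ok:
        return "backup (Dante Virtual Soundcard rig)"
    if dr_ok:
        return "disaster recovery (Dante Virtual Soundcard rig)"
    return "silence"  # all three rigs down: nothing left to select

# Main rig fails; the backup rig is picked automatically.
print(select_playback_feed(False, True, True))
```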
And why two Prodigy.MPs? Because, as I said, the EBU requires that no one device can represent a single point of failure. So, if you have one Prodigy.MP that is used to collect the three Pro Tools rigs, but it's the Prodigy.MP itself that fails, then it doesn't matter how many Pro Tools rigs you have, because you will lose the entire thing. So, everything is doubled, exchanging signals or changing roles inside of the design depending upon the current situation.
So, the three Pro Tools rigs are feeding two Prodigy.MPs. One of the Prodigy.MPs is considered the main one for the broadcast distribution, and the second device is considered the main one for the live performances, but either of them can act as a redundant or backup solution for the other. Even if the two Prodigy.MPs end up having different clock references, in the worst-case scenario the feed that is not in sync will go through a sample rate converter, which solves the issue.
From those two Prodigy.MPs, a main central patch room is fed for both broadcast and live. Once again, the patch room is divided into different souls (broadcast and live), because at any time the clock can be different, and each of the two souls is completely redundant, so the matrixes are doubled. So, there are two DirectOut M1K2 MADI routers, 16 ports each, main and backup, for broadcast, and another two for live. And then in the middle of those matrixes, there are two Prodigy.MPs, which guarantee the exchange of signals between broadcast and live with redundancy and with sample rate conversion, if needed.
Plus, media conversion, because at any point someone from the patch room can ask me for an AES feed of a signal or an analog feed, or say, “I want to give you an analog feed to be distributed towards live and broadcast.” So, the patch room is another central node of the entire architecture, which is made up of four MADI routers, because the main portion of the signals is exchanged via MADI, both with the live environment and the broadcast environment, yet with the possibility to exchange Dante signals, or analog and AES, with the conversion in that case provided by the two Prodigy.MPs in the patch room. And once again, the two Prodigy.MPs in the patch room – one attached to the live environment, one attached to the broadcast environment, and exchanging signals with each other – represent the patch room solution. Once again, fully redundant, ready to be divided in terms of clock, while perfectly synchronous if everything goes in the right direction.
So, I've explained the roles of two Prodigy.MPs for the playback rig and two Prodigy.MPs for the patch room. Now, there is another couple of Prodigy.MPs connected to the front-of-house consoles. There are two FOH consoles sitting on two different DiGiCo loops, completely independent. The DiGiCo loops are each made up of two stage boxes, one monitor console, one RF-check console where the artists go just before getting to the stage in order to test their mics and in-ear monitors, and then one front-of-house console — everything is doubled. So, imagine two different DiGiCo loops, two stage racks each with 56 channels of input, two monitor consoles, two RF-check consoles, and two front-of-house consoles.
The two FOH consoles are then connected to two Prodigy.MPs. Console One is feeding Prodigy.MP One and Two, and Console Two is feeding Prodigy.MP One and Two, both with MADI and analog backup.
The two Prodigy.MPs in front of house then create all the stems for a large number of clusters of L-Acoustics rigs hanging from the ceiling of the arena. The Prodigy.MPs in front of house detect which is the running console, automatically and seamlessly selecting the first, or the second in case of problems with the first console, without losing a single sample.
Those two Prodigy.MPs, main and mirror, take care of the alignment and EQ of the PA system. That's the typical front-of-house management system. Then those two Prodigy.MPs are connected via MADI to two Optocore MADI devices, which pick up the signals from front of house and distribute them through a redundant Optocore loop towards the L-Acoustics amplifiers. The main distribution is made with the digital Optocore infrastructure, with AES outputs up on the ceiling feeding the AES inputs of the several L-Acoustics amplifiers.
Then, from the two Prodigy.MPs in front of house, there is also a completely redundant analog distribution to every single cluster of the L-Acoustics rig. So, if everything goes really wrong and you lose both main and redundant of the Optocore distribution system, you still have the analog backup plan. And note that we are distributing 28 different feeds for 28 different clusters from the ceiling, and all of them are fully backed up in analog. So, the analog is not just a disaster recovery distributing a mono feed; they are discrete, fully-programmed, EQ'd, and delayed signals, a copy of the main digital design. Both Prodigy.MPs in FOH feed 28 analog channels to Radial switchers, which, thanks to GPI, would react to the sudden death of the main Prodigy.MP by immediately selecting the analog out of the second.
So, there's a third Prodigy.MP in front of house used as a watchdog, because one Prodigy.MP can fail and the other one will take over. But that would require a re-patch of the Optocore AES outputs. The same if the Main Optocore should fail. So, the watchdog Prodigy.MP is double checking the availability of the several MADI signals coming back, as a test, from the Optocore devices and depending upon the status of which MADI connection is the one that should be used to distribute the signal, thanks to the Automator, it sends a MIDI program change to the Optocore controller software in order to change the patch.
So, even if one of the Optocore devices should fail, or one of the feeds to an Optocore device should fail, the watchdog Prodigy.MP detects that situation and automatically triggers a macro inside the Optocore system in order to have an automatic switch between the main, the backup, and the backup of the backup connection.
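The watchdog's job, as described, is to check which test returns are still arriving and, when the active path dies, pick the next healthy one and fire a MIDI program change at the Optocore controller. Here is our simplified rendering of that decision; the preset numbers and path names are invented:

```python
# Sketch of the watchdog logic: test signals loop back from each
# Optocore path over MADI; when the currently patched path stops
# returning its test, the watchdog selects the next healthy path and
# emits a MIDI program change. Preset numbers are hypothetical.
PATCH_PRESETS = {"main": 1, "backup": 2, "backup_of_backup": 3}

def watchdog_decide(returns, current):
    """`returns` maps path name -> True if its MADI test came back.

    Returns the MIDI program change number to send, or None if no
    re-patch is needed (or no healthy path remains).
    """
    if returns.get(current, False):
        return None  # current path healthy: do nothing
    for path in ("main", "backup", "backup_of_backup"):
        if returns.get(path, False):
            return PATCH_PRESETS[path]
    return None

# Main path's test signal vanished; switch the patch to the backup.
print(watchdog_decide({"main": False, "backup": True}, "main"))
```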
So far, I’ve told you about two Prodigy.MPs for the playback rig, two Prodigy.MPs for the patch room, and two Prodigy.MPs in front of house, plus a third Prodigy.MP as the watchdog (plus Automix) in front of house. Then we have another six Prodigy.MPs; two of them are for the monitor rig. So, the problem is, in-ear monitor transmitters can only be fed with analog signals, while the digital outputs of the consoles are the ones we can back up. So, we have two independent DiGiCo SD7 consoles for monitoring. Each of them is sending 64 output channels for 32 stereo-paired in-ear monitors. And then we need to convert to analog, but we also have an automatic swap between the main console and the backup console if something goes wrong. So, the two consoles, once again, feed two independent Prodigy.MPs. The two Prodigy.MPs then convert the 32 plus 32 channels using additional [DirectOut] Andiamo A/D converters. The Prodigy.MPs are capable of understanding which console is really running by detecting the pilot tones on channel 64 of each console. The Prodigy.MPs plus the Andiamos convert the 64 channels to analog. The 64+64 channels then feed a bunch of Radial switchers, which are triggered by the GP outputs of the Prodigy.MPs, telling the Radials to pick the first batch or the second batch depending upon the health status of the main Prodigy.MP, thus surviving a potential disaster and preventing interruption of the in-ear monitor feeds.
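The pilot-tone trick is worth unpacking: a known tone on channel 64 acts as a heartbeat, and its presence decides which console's analog batch the Radial switchers take. The sketch below is our illustration of that selection; the detection threshold, names, and GPO polarity are all assumptions:

```python
# Hedged sketch of the monitor-rig failover: each SD7 sends a pilot
# tone on its channel 64; whichever console's tone is present wins,
# and a GP output tells the Radial switchers which analog batch to
# take. Threshold, names, and GPO polarity are illustrative.
def detect_pilot(samples, threshold=0.01):
    """Crude presence detector: is there energy on the pilot channel?"""
    return max(abs(s) for s in samples) > threshold

def select_console(pilot_a, pilot_b):
    if detect_pilot(pilot_a):
        return ("console_A", 0)  # GPO low: Radials take batch 1
    if detect_pilot(pilot_b):
        return ("console_B", 1)  # GPO high: Radials take batch 2
    return ("none", 0)           # neither heartbeat present

# Console A's pilot tone is gone; console B's batch is selected.
print(select_console([0.0, 0.0], [0.0, 0.5]))
```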
So, we have two consoles, two Prodigy.MPs, and two analog feeds — any of them can fail and immediately and automatically the signals are re-routed, granting uninterrupted services towards the in-ear monitors.
Now, we're just missing the last four Prodigy.MPs, which are in the broadcast compound. RAI's music mix has three different Studer consoles for main, backup, and disaster recovery. All three consoles get feeds from the patch room in a redundant way, but only one console is on-air at a time. Therefore, two Prodigy.MPs get the feeds from the three consoles in MADI and AES (why two? Once again, because one device cannot be a single point of failure, so everything is doubled) and decide which console goes on-air. If the main is okay, the main console will be forwarded to the NEP OB van. In case of failure of the main, the secondary will take over, and if the second one should also fail, the third, disaster recovery console will be routed on-air.
Then, for extra security, RAI wanted to have another two mirrored devices for, let's say, cold redundancy. Two Prodigy.MPs are already taking care of the automatic redundancy switching. In case, during rehearsal or during the day or even during the night, one of those two Prodigy.MPs should fail, the other one takes over; but in order to re-establish the proper level of redundancy, a mirrored unit, with exactly the same settings, is ready to be patched in. Once again, the first level of redundancy is automatically guaranteed by the presence of two independent devices. And two cold devices are right there, already programmed and kept mirrored in terms of settings. So, even if you have a last-minute change, that change is forwarded to the cold redundant device, which is ready to be plugged in, in place of the other, if disaster should happen.
So, in the end, the most important thing in this story is that the entire design does not offer a single point of failure. All the sources are doubled, starting from the microphone preamplifiers, because there's a passive splitter feeding two independent stacks of DiGiCo racks. Those DiGiCo racks get the feeds from the RF receivers for all the presenter and performer microphones. Then they are digitally split for monitors, front of house, broadcast, and the OB vans, since the presenters go straight to the OB van, which does the final mix with video, while the performers plus the playback tracks go through the RAI music mix, which creates the stereo feeds and sends them to the OB van, which then adds the ambient microphones to create the 5.1 mix, the final version of the broadcast audio.
PS: Wow! One of our mutual colleagues who connected us for this conversation said this is “arguably the most complicated live broadcast music show ever staged and engineered.” Do you agree with that assessment?
Giaroli: I agree. I've done a lot of gigantic projects in my career, including Olympic Games, but with this level of complexity in terms of the signal distribution and the level of redundancy required, and especially the automatic redundancy for the first level, it is really a challenge. And thanks to the flexibility and the powerful features on board the Prodigy.MP, it could be achieved relatively easily. I couldn't imagine how I could get to the same level by combining different products. Without the Prodigy.MP as it is now, it probably wouldn't even be possible to reach that level of redundancy. I would have had to use many more products, and the more you combine different products, the harder it is to get them communicating properly in order to have an automatic redundancy switch.
Adding the possibility to control all the devices as if they were just one is an incredible advantage, of course. The software that guarantees that is Globcon, the global control ecosystem I designed and started developing with my engineering team four years ago. All the DirectOut products deployed at Eurovision 2022 have Globcon plug-ins available.
A couple weeks after our conversation, and just two days after Eurovision 2022 wrapped up, I emailed Giaroli to find out how it went. For fans watching on TV, it was a resounding success with not a single hiccup seen or heard. But given the multiple layers of redundancy built into the system to avoid TV viewers witnessing any such problem, I was curious how smoothly it went behind the scenes.
"As usual, the more backups you have in place, the less likely something will actually happen,” he wrote back. “We didn’t experience any failure or problem at all and with this ESC edition, we have achieved a bunch of world records: the highest level of redundancy ever; first level of redundant actions totally automated; and not a single audio sample lost, nor passed through sample rate converters during the two semifinals and the grand finale. Monitors, FOH, PA, and broadcast worked perfectly in synch for the whole show. In one word: perfection!"
And with that, the bar has been raised for future worldwide live TV events.