Virtual Events on the Cloud with an Interactive Broadcast Pipeline

Ruslan Karimov
5 min read · Sep 1, 2021

The way live events are broadcast today is not the same as it was even a year ago. It has evolved, and experiences can now be highly customized. I decided to share one of the experiences we built for a client recently: we had the full setup running on the cloud and updated the viewers' experience in real time. Below I describe the architecture and how we approached the project.

The requirements were put together simply by the client, without any unnecessary extra details. We were asked to set up a public webpage with five streams and custom logic attached. Users would land on the first page, where they could watch a live stream for 15 minutes. When stream one was over, we would show four buttons leading viewers to the other four streams.

The client also wanted dynamic content in the stream, so that specific chart numbers could be updated during rehearsals and specific intros and videos replaced.

Let me divide the description below into three sections, since three different groups of people (real-time designers, cloud software developers, and the cloud AV production team) worked on the project. At the end, I will summarize some of the challenges we faced, the solutions we found for them, and the decisions we made.

Real-time content production

We use Ventuz for real-time graphics production. I will not dive deep into the software, but to summarize why we use it: it is a very fast tool for developing 3D content, it can connect to multiple data sources, and it lets us bring in 3D graphics from applications such as Maya and 3ds Max, work with their textures, and apply specific logic in real time.

When it comes to running the show live or prerecorded, Ventuz is flexible enough to let us create a control panel, connect it to a Streamdeck, and change the content on the go.

In addition to its CG flexibility, Ventuz lets us bring NDI feeds in, and since we were running the full event on the cloud, we could upstream the green-screen studio feed directly to our 3D stage.

Building a scalable event website

We built the website using ReactJS and NodeJS. That approach gave the event website a real-time component: it allowed us to keep users "connected", so at any point in time we could send them a specific message and update their UI. In our case, that was a trigger fired when the main-stage stream finished. You might ask why we didn't just automate the appearance of the buttons. That would have been possible with VOD, but with a livestream the video player doesn't know where the end of the video is, and tying the trigger to a specific time wasn't an option either, since a show's start is rarely defined down to the exact second and always shifts. Connected to our trigger, four buttons lead viewers to the four separate streams.
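The article doesn't name the real-time library behind that trigger, so here is a minimal sketch assuming socket.io on top of the NodeJS backend; the "mainStreamEnded" event name and the endpoint are hypothetical.

```ts
// server.ts — NodeJS side (socket.io assumed; the post only says ReactJS + NodeJS)
import { createServer } from 'http';
import { Server } from 'socket.io';

const httpServer = createServer();
const io = new Server(httpServer, { cors: { origin: '*' } });

// Called by the production team (e.g. from an admin endpoint) the moment
// the main-stage stream actually ends — pushed to every connected viewer.
export function announceMainStreamEnd(): void {
  io.emit('mainStreamEnded');
}

httpServer.listen(4000);
```

```tsx
// StagePicker.tsx — React side: the four buttons stay hidden until the trigger
import { useEffect, useState } from 'react';
import { io } from 'socket.io-client';

const socket = io('https://example-event.com'); // hypothetical endpoint

export function StagePicker() {
  const [mainStreamOver, setMainStreamOver] = useState(false);

  useEffect(() => {
    socket.on('mainStreamEnded', () => setMainStreamOver(true));
    return () => { socket.off('mainStreamEnded'); };
  }, []);

  if (!mainStreamOver) return null; // main stage still live

  // Four buttons leading viewers to the four separate streams
  return (
    <nav>
      {[1, 2, 3, 4].map((n) => (
        <a key={n} href={`/stage/${n}`}>Go to stage {n}</a>
      ))}
    </nav>
  );
}
```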

As a player we used THEOplayer.

It has multiple advantages: it works with different stream formats, it's customizable, it has analytics built in, and, most importantly, it exposes the parameter changes needed to achieve low latency.
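As a rough illustration, a THEOplayer Web SDK setup along those lines might look like the sketch below; the configuration keys and the target-buffer value are assumptions based on THEOplayer's public API, not the client's actual tuning.

```ts
// player.ts — minimal THEOplayer setup (a sketch, not the production config)
declare const THEOplayer: any; // provided by the THEOplayer SDK script tag

const element = document.querySelector('.theo-container') as HTMLElement;

const player = new THEOplayer.Player(element, {
  libraryLocation: '/vendor/theoplayer/', // where the SDK assets live
  license: 'YOUR_THEOPLAYER_LICENSE',     // placeholder
});

// Low-latency tuning: a small target buffer keeps playback near the live
// edge, trading robustness for latency. The value here is an assumption.
player.abr.targetBuffer = 3; // seconds

player.source = {
  sources: [{
    // hypothetical MediaPackage endpoint served through CloudFront
    src: 'https://cdn.example.com/live/main.m3u8',
    type: 'application/x-mpegurl',
  }],
};
```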

The website was delivered using CloudFront.

CloudFront is a CDN (Content Delivery Network) and part of AWS, and since our whole setup was on AWS, it connected easily to our production pipeline.

The site itself was hosted on EC2 behind load balancers, so it auto-scales with traffic.
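That hosting layout (an auto-scaling group of EC2 instances behind a load balancer, fronted by CloudFront) could be expressed as infrastructure-as-code roughly like this, using AWS CDK in TypeScript; instance types, capacities, and ports are assumed values, not the production configuration.

```ts
// event-site-stack.ts — sketch of the hosting layer with AWS CDK v2
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';

export class EventSiteStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, 'Vpc', { maxAzs: 2 });

    // Auto Scaling group of EC2 instances running the NodeJS site
    const asg = new autoscaling.AutoScalingGroup(this, 'SiteAsg', {
      vpc,
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MEDIUM),
      machineImage: new ec2.AmazonLinuxImage(),
      minCapacity: 2,  // assumed floor
      maxCapacity: 20, // assumed ceiling
    });
    asg.scaleOnCpuUtilization('CpuScaling', { targetUtilizationPercent: 60 });

    // Application Load Balancer spreading traffic across the group
    const alb = new elbv2.ApplicationLoadBalancer(this, 'Alb', {
      vpc,
      internetFacing: true,
    });
    alb.addListener('Http', { port: 80 })
      .addTargets('Site', { port: 3000, targets: [asg] }); // Node app port (assumed)

    // CloudFront distribution delivering the site worldwide
    new cloudfront.Distribution(this, 'Cdn', {
      defaultBehavior: {
        origin: new origins.LoadBalancerV2Origin(alb, {
          protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
        }),
        viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
      },
    });
  }
}
```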

[Image: Event stream as seen by the audience]
[Image: User flow of the website]

Cloud broadcast production

The stream to the web players was delivered from five EC2 instances on AWS:

one for the main stage and four for the follow-up stages.

Each instance had its own content timeline where we could replace specific media files, which helped the client with last-minute updates. From EC2, vMix streamed RTMP to MediaLive, on to MediaPackage, then to CloudFront, and finally to THEOplayer.
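The article doesn't show how that chain was provisioned; below is a minimal sketch, using the AWS SDK for JavaScript v3, of creating the RTMP_PUSH input on MediaLive that a vMix instance would stream to. The names, region, and security-group id are all hypothetical.

```ts
// create-input.ts — sketch: provision the RTMP ingest point that vMix pushes to
import { MediaLiveClient, CreateInputCommand } from '@aws-sdk/client-medialive';

const client = new MediaLiveClient({ region: 'us-east-1' }); // assumed region

async function createRtmpInput() {
  const { Input } = await client.send(new CreateInputCommand({
    Name: 'main-stage-rtmp',
    Type: 'RTMP_PUSH',
    Destinations: [{ StreamName: 'live/main' }], // stream key vMix will use
    InputSecurityGroups: ['1234567'],            // hypothetical SG id
  }));
  // The returned URLs are what you paste into vMix's RTMP output settings
  console.log(Input?.Destinations?.map((d) => d.Url));
  return Input;
}

createRtmpInput().catch(console.error);
```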

Running the event on EC2 was essential for the caliber of client we were dealing with, as it ensured we had easily accessible redundancy protocols. As an extra precaution, we also had spare units running in parallel.

Although everything ran on the cloud, we still had a physical setup: from an on-premise command-and-control event center we could see and control what was happening on the cloud.

Challenges and solutions

  1. The first challenge we encountered was a continuous buffering issue on our office network that lasted for a period of time. The issue presented itself only during testing, and the solution was pretty simple: clearing the cache.
  2. We had to change our approach to controlling content, because this show had a lot of 3D elements popping up on the stage, compared to other shows where it's mostly video content on the screen. The normal approach involves just a playback tool (we use vMix) that sends NDI streams over the network, which we then show on virtual screens within the 3D stage; when 3D elements need to be shown, we use Ventuz Director to control and trigger them. That approach requires three people to control the flow. We changed the flow so that one person has a single place to control everything. It worked for this specific job, since most of the content was 3D elements on stage.

Tech & software used in a project

Video Player : www.theoplayer.com

Real-time graphics engine : www.ventuz.com

Playback : Vmix : www.vmix.com

Cloud using AWS:

Cloud front, media live, media package, ec2, Load balancers, Route53

Code : React JS

Video protocol : NDI

Software Control : Streamdeck.
