Migrating a Livestreaming iOS App from Agora to Amazon IVS


We talk to a lot of customers that want to add livestreaming capability to the existing user experience they deliver.

We talk to a lot of OTHER customers that are already livestreaming but are in search of something that is more customizable and easier to operate.

There are several vendors that offer livestreaming SDKs with real-time latency, such as Agora and Mux, and AWS offers Amazon IVS, a fully managed livestreaming solution with newly released real-time capabilities detailed in this excellent blog post. In their words, it “handles the ingestion, transcoding, packaging, and delivery of your live content, using the same battle-tested technology that powers Twitch.”

Basically, IVS does the really hard parts of interactive livestreaming and exposes the functionality to developers through SDKs.

IVS, as a first party AWS service, also has interoperability and easier integration with other AWS services.

We’ve helped quite a few customers get from third-party video infrastructure to AWS video infrastructure, and below is an outline of how we recently migrated a native iOS app from the Agora Interactive Livestreaming SDK to the Amazon IVS SDKs. The code snippets and screenshots you'll see are from example code for ease of discussion and customer anonymity.


A lot of conceptual similarity exists across livestreaming SDKs, and unsurprisingly, the core concepts are similar between the Agora and IVS SDKs. The most zoomed out workflow is:

  1. Establish a stage.

  2. One or more hosts (on stage) broadcast to the channel.

  3. Viewer devices display the channel content.

In addition to the livestreaming capability, there are a few other necessary components to consider both within the application and adjacent to the application:

  • Custom User Interface that knows how to receive and display the content. In this case, the front-end was built with SwiftUI.

  • Other back-end AWS services that the use case needs. In this case, the customer was not already using AWS, so it was necessary to set up everything from Cognito user pools to data stores in DynamoDB and Neptune. Some fairly light processing with ECS on Fargate was also established for various business-related background tasks.

  • Loading the SDK into the application

  • Granting hardware permissions to the application

  • Nominating a delegate to handle state change notifications in the application


Here’s where the two approaches diverge slightly:

Agora has a single SDK, while IVS delineates its broadcast SDK from its player SDK.

IVS gives you control of the guest host functionality with the Stages feature. This takes a bit more work to set up, but we find the ability to customize the user experience makes the juice worth the squeeze.

When you look at the simplified steps next to each other, you can see they are very similar, with a few small conceptual differences, which makes life easier while migrating livestreaming to Amazon IVS.


Create Channel


In Agora, users join channels as either the .broadcaster or .audience role with a token and channel name.

Join a channel: Call methods to create and join a channel; apps that pass the same channel name join the same channel.
let result = agoraEngine.joinChannel(
    byToken: token, channelId: channelName, uid: 0, mediaOptions: option,
    joinSuccess: { (channel, uid, elapsed) in }
)


In IVS, channels are easily created with the console or CLI, but most customers prefer the programmatic route, especially at scale.

The request can be made with any AWS SDK, such as Boto3's create_channel; the CreateChannel API reference covers each option in detail.

The response returns useful info for later, like ingestEndpoint and streamKey.

Prepare Stages


The multi-host functionality in Agora does not require the configuration of a stage first; the second through Nth hosts simply join the channel and begin publishing to the channel.


The Stages functionality in IVS enables multi-host streams; for a single-host stream, it is not necessary, and you can broadcast directly.

Main Components

There are three functional components that create this multi-host functionality: stage, strategy, and renderer.

The stage is the main concept, representing the audio/video discussion between hosts that gets broadcast to the channel.

The IVSStage class is the main point of interaction between the host application and the SDK. The class represents the stage itself and is used to join and leave the stage. Creating or joining a stage requires a valid, unexpired token string from the control plane (represented as token). Joining and leaving a stage are simple.
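As a sketch (assuming `token` holds a valid participant token and `self` conforms to the strategy protocol), joining and leaving look like this:

```swift
// Create the stage with a token from CreateParticipantToken
// and an object conforming to IVSStageStrategy.
let stage = try IVSStage(token: token, strategy: self)

// Join the stage; the SDK negotiates publish/subscribe
// state by consulting the strategy.
try stage.join()

// Leave the stage when the session ends.
stage.leave()
```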

The strategy is how the host application communicates the desired state of the stage to the SDK.

The renderer communicates the current state of the stage to the host application.
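A minimal strategy and renderer pair might look like the following sketch, assuming a `ViewModel` class that owns a `publishStreams` array of local camera and microphone streams created during device discovery:

```swift
extension ViewModel: IVSStageStrategy {
    // Tell the SDK which local streams this participant publishes.
    func stage(_ stage: IVSStage, streamsToPublishForParticipant participant: IVSParticipantInfo) -> [IVSLocalStageStream] {
        return publishStreams
    }

    // Tell the SDK whether this participant should publish at all.
    func stage(_ stage: IVSStage, shouldPublishParticipant participant: IVSParticipantInfo) -> Bool {
        return true
    }

    // Subscribe to the other hosts' audio and video.
    func stage(_ stage: IVSStage, shouldSubscribeToParticipant participant: IVSParticipantInfo) -> IVSStageSubscribeType {
        return .audioVideo
    }
}

extension ViewModel: IVSStageRenderer {
    // The renderer reports current stage state back to the app.
    func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didAdd streams: [IVSStageStream]) {
        // Attach the new streams' views to the UI here.
    }
}
```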

Publish a Media Stream

Local devices such as built-in microphones and cameras are discovered via IVSDeviceDiscovery, which handles discovery of hardware resources (the built-in microphones plus the front and back cameras) and the creation of media streams for each. The streams are returned as IVSLocalStageStream objects so they can be published by the SDK. Below is an example that shows how to publish media streams for the front-facing camera and the default microphone.
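A sketch of that flow (treat the exact input-source helpers as illustrative of the real-time SDK's device APIs):

```swift
let devices = IVSDeviceDiscovery().listLocalDevices()
var publishStreams = [IVSLocalStageStream]()

// Wrap the camera in a stream the SDK can publish.
if let camera = devices.compactMap({ $0 as? IVSCamera }).first {
    // Prefer the front-facing input source if one exists.
    if let front = camera.listAvailableInputSources().first(where: { $0.position == .front }) {
        camera.setPreferredInputSource(front)
    }
    publishStreams.append(IVSLocalStageStream(device: camera))
}

// Wrap the default microphone the same way.
if let microphone = devices.compactMap({ $0 as? IVSMicrophone }).first {
    publishStreams.append(IVSLocalStageStream(device: microphone))
}
```

The strategy's streamsToPublishForParticipant callback can then return publishStreams to start publishing.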

Display and Remove Participants

Use the IVSDevice object to get audio-level stats about a participant, and the didRemoveStreams renderer callback to signal to the host application that it should re-order view priority.

Mute and Unmute Media Streams

The setMuted function for IVSLocalStageStream objects controls whether a stream is muted, and changes to mute state are easily reflected in the UI using the isMuted property of IVSStageStream.
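For example (a sketch; `micStream` is assumed to be the local microphone stream created during device discovery):

```swift
// Stop sending the local microphone's audio.
micStream.setMuted(true)

// Elsewhere, drive the UI from the stream's current state.
let showMutedBadge = micStream.isMuted
```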

Create a Stage Configuration

Don't confuse the stage configuration in IVSLocalStageStreamVideoConfiguration with the Stage Mixer that controls UI layout; more below on the Stage Mixer. The stage configuration lets you establish minimum and maximum bitrates, canvas size, and target framerate.
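A sketch of such a configuration (the specific limits are illustrative, not recommendations):

```swift
let videoConfig = IVSLocalStageStreamVideoConfiguration()
try videoConfig.setMaxBitrate(900_000)     // cap outgoing bitrate in bps
try videoConfig.setMinBitrate(100_000)     // floor under poor network conditions
try videoConfig.setTargetFramerate(30)     // frames per second
try videoConfig.setSize(CGSize(width: 720, height: 1280))  // canvas size
```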

Get WebRTC Statistics

Obtaining WebRTC statistics for both publishing and subscribing streams is achievable by using requestRTCStats on IVSStageStream.
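For example (a sketch; the stats arrive asynchronously through the stream's delegate):

```swift
// Ask the SDK to gather a stats snapshot for this stream.
try stream.requestRTCStats()

// The results come back via IVSStageStreamDelegate.
func stream(_ stream: IVSStageStream, didGenerateRTCStats stats: [String: [String: String]]) {
    // Inspect inbound/outbound RTP stats, candidate pairs, etc.
}
```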

Get Participant Attributes

Attributes that are specified in the CreateParticipantToken endpoint request are always viewable in the IVSParticipantInfo properties.

func stage(_ stage: IVSStage, participantDidJoin participant: IVSParticipantInfo) {
    print("ID: \(participant.participantId)")
    for attribute in participant.attributes {
        print("attribute: \(attribute.key)=\(attribute.value)")
    }
}
Continue Session in Background

While the app is in the background, a participant on the stage can continue hearing remote audio, but continuing to publish audio and video is not possible.

Stage Mixer


Since Agora handles the stage functionality behind the scenes, the ability to mix together audio feeds into a custom canvas is not exposed.


IVS on the other hand provides the ability to customize the layout or "canvas" of displayed feeds through the mixer. To access the mixer, we call IVSBroadcastSession.mixer.

The mixer is how you design the layout the viewer sees: the canvas is the full view, which you carve into slots.

The canvas is the display extent of the video, defined in your BroadcastSession configuration: it is equal in size to your video settings and runs at the frame rate specified in your configuration. Canvas properties are set from the BroadcastConfiguration you provide when creating the BroadcastSession.

You arrange slots by specifying sizes and origin points in relation to the canvas.

The slot configuration for this example layout is below.

let config = IVSBroadcastConfiguration()
try config.video.setSize(CGSize(width: 1280, height: 720))
config.video.enableTransparency = true

// Bottom Left
var cameraSlot = IVSMixerSlotConfiguration()
cameraSlot.size = CGSize(width: 320, height: 180) 
cameraSlot.position = CGPoint(x: 20, y: 720 - 200)
cameraSlot.preferredVideoInput = .camera
cameraSlot.preferredAudioInput = .microphone
cameraSlot.matchCanvasAspectMode = false
cameraSlot.zIndex = 2
try cameraSlot.setName("camera")

// Top Right
var streamSlot = IVSMixerSlotConfiguration()
streamSlot.size = CGSize(width: 640, height: 320) 
streamSlot.position = CGPoint(x: 1280 - 660, y: 20)
streamSlot.preferredVideoInput = .userImage
streamSlot.preferredAudioInput = .userAudio
streamSlot.matchCanvasAspectMode = false
streamSlot.zIndex = 1
try streamSlot.setName("stream")

// Bottom Right
var logoSlot = IVSMixerSlotConfiguration()
logoSlot.size = CGSize(width: 320, height: 180) 
logoSlot.position = CGPoint(x: 1280 - 340, y: 720 - 200)
logoSlot.preferredVideoInput = .userImage
logoSlot.preferredAudioInput = .unknown
logoSlot.matchCanvasAspectMode = false
logoSlot.zIndex = 3
try logoSlot.setTransparency(0.7)
try logoSlot.setName("logo")

config.mixer.slots = [ cameraSlot, streamSlot, logoSlot ]

Certain slot properties can be animated, such as fillColor, gain, position, size, transparency, and zIndex.

Start the Engine


In Agora, the engine gets initialized when the user opens the app, and video gets enabled when joining a channel.

Initiate the Video SDK engine:

agoraEngine = AgoraRtcEngineKit.sharedEngine(withAppId: appID, delegate: self)

Start video in the engine:

agoraEngine.enableVideo()

In IVS, this is the simple way to initialize the broadcast interface:

let broadcastSession = try IVSBroadcastSession(
    configuration: IVSPresets.configurations().standardLandscape(),
    descriptors: IVSPresets.devices().frontCamera(),
    delegate: self)

Enable local video / video preview


// If the session was just created, execute the following
// code in the callback of IVSBroadcastSession.awaitDeviceChanges
// to ensure all devices have been attached.
if let devicePreview = try broadcastSession.listAttachedDevices()
    .compactMap({ $0 as? IVSImageDevice })
    .first?
    .previewView() {
    previewView.addSubview(devicePreview)
}


func setupLocalVideo() {
    // Enable the video module
    agoraEngine.enableVideo()
    // Start the local video preview
    agoraEngine.startPreview()
    let videoCanvas = AgoraRtcVideoCanvas()
    videoCanvas.uid = 0
    videoCanvas.renderMode = .hidden
    videoCanvas.view = localView
    // Set the local video view
    agoraEngine.setupLocalVideo(videoCanvas)
}
Start a Broadcast


In Agora, the host-side flow is:

  1. Set the client role as .broadcaster.

  2. Start local video.

  3. Join the channel.



In IVS, you pass the ingest endpoint and stream key (which you should treat as a secret) that were provided when you created the IVS channel with the AWS CLI, console, or API.

The hostname that you receive in the ingestEndpoint response field of the GetChannel endpoint needs to have rtmps:// prepended and /app appended. The complete URL should be in this format: rtmps://{{ ingestEndpoint }}/app.
try broadcastSession.start(with: IVS_RTMPS_URL, streamKey: IVS_STREAMKEY)

Stop a Broadcast


override func viewDidDisappear(_ animated: Bool) {
    super.viewDidDisappear(animated)
    leaveChannel()
    DispatchQueue.global(qos: .userInitiated).async { AgoraRtcEngineKit.destroy() }
}
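In IVS, ending the stream is a single call on the broadcast session created earlier:

```swift
broadcastSession.stop()
```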



App Backgrounding

iOS apps are not permitted to use cameras in the background, partially because of limited hardware encoding resources, so the IVS broadcast SDK adjusts accordingly:

Because of this, the broadcast SDK [when in the background] automatically terminates its session and sets its isReady property to false. When your application is about to enter the foreground again, the broadcast SDK reattaches all the devices to their original IVSMixerSlotConfiguration entries. The broadcast SDK does this by responding to UIApplication.didEnterBackgroundNotification and UIApplication.willEnterForegroundNotification.


Moving the iOS application from the Agora Interactive Livestreaming SDK to the Amazon IVS SDKs was pretty straightforward. The Stages feature and the mixer functionality in IVS take a little figuring out but provide powerful customization of the viewer experience. For many customers coming from the media side, the biggest learning curve will not be IVS itself but native mobile development and building on AWS.

Get in touch for help designing, building, and running solutions that leverage IVS!




