LiteAVSDK
Tencent Cloud TRTC SDK is a high-availability component serving tens of thousands of enterprise customers, committed to helping you minimize your research and development costs.
TRTCCloudDelegate

Data Structures

protocol  <TRTCCloudDelegate>
 
protocol  <TRTCVideoRenderDelegate>
 
protocol  <TRTCVideoFrameDelegate>
 
protocol  <TRTCAudioFrameDelegate>
 
protocol  <TRTCLogDelegate>
 

Detailed Description

Tencent Cloud TRTC Event Notification Interface.

Module: TRTCCloudDelegate @ TXLiteAVSDK
Function: event callback APIs for TRTC's video call feature


Data Structure Documentation

◆ TRTCCloudDelegate-p

protocol TRTCCloudDelegate-p
+ Inheritance diagram for <TRTCCloudDelegate>:

Error and warning events

(void) - onError:errMsg:extInfo:
 
(void) - onWarning:warningMsg:extInfo:
 

Room event callback

(void) - onEnterRoom:
 
(void) - onExitRoom:
 
(void) - onSwitchRole:errMsg:
 
(void) - onSwitchRoom:errMsg:
 
(void) - onConnectOtherRoom:errCode:errMsg:
 
(void) - onDisconnectOtherRoom:errMsg:
 

User event callback

(void) - onRemoteUserEnterRoom:
 
(void) - onRemoteUserLeaveRoom:reason:
 
(void) - onUserVideoAvailable:available:
 
(void) - onUserSubStreamAvailable:available:
 
(void) - onUserAudioAvailable:available:
 
(void) - onFirstVideoFrame:streamType:width:height:
 
(void) - onFirstAudioFrame:
 
(void) - onSendFirstLocalVideoFrame:
 
(void) - onSendFirstLocalAudioFrame
 
(void) - onRemoteVideoStatusUpdated:streamType:streamStatus:reason:extrainfo:
 
(void) - onUserVideoSizeChanged:streamType:newWidth:newHeight:
 

Callback of statistics on network and technical metrics

(void) - onNetworkQuality:remoteQuality:
 
(void) - onStatistics:
 
(void) - onSpeedTestResult:
 

Callback of connection to the cloud

(void) - onConnectionLost
 
(void) - onTryToReconnect
 
(void) - onConnectionRecovery
 

Callback of hardware events

(void) - onCameraDidReady
 
(void) - onMicDidReady
 
(void) - onAudioRouteChanged:fromRoute:
 
(void) - onUserVoiceVolume:totalVolume:
 
(void) - onDevice:type:stateChanged:
 
(void) - onAudioDeviceCaptureVolumeChanged:muted:
 
(void) - onAudioDevicePlayoutVolumeChanged:muted:
 
(void) - onSystemAudioLoopbackError:
 

Callback of the receipt of a custom message

(void) - onRecvCustomCmdMsgUserId:cmdID:seq:message:
 
(void) - onMissCustomCmdMsgUserId:cmdID:errCode:missed:
 
(void) - onRecvSEIMsg:message:
 

CDN event callback

(void) - onStartPublishing:errMsg:
 
(void) - onStopPublishing:errMsg:
 
(void) - onStartPublishCDNStream:errMsg:
 
(void) - onStopPublishCDNStream:errMsg:
 
(void) - onSetMixTranscodingConfig:errMsg:
 

Screen sharing event callback

(void) - onScreenCaptureStarted
 
(void) - onScreenCapturePaused:
 
(void) - onScreenCaptureResumed:
 
(void) - onScreenCaptureStoped:
 

Callback of local recording and screenshot events

(void) - onLocalRecordBegin:storagePath:
 
(void) - onLocalRecording:storagePath:
 
(void) - onLocalRecordComplete:storagePath:
 

Disused callbacks (please use the new ones)

(void) - onUserEnter:
 
(void) - onUserExit:reason:
 
(void) - onAudioEffectFinished:code:
 

Method Documentation

◆ onAudioDeviceCaptureVolumeChanged:muted:()

- (void) onAudioDeviceCaptureVolumeChanged: (NSInteger)  volume
muted: (BOOL)  muted 

The capturing volume of the mic changed

On desktop OS such as macOS and Windows, users can set the capturing volume of the mic in the audio control panel. The higher volume a user sets, the higher the volume of raw audio captured by the mic. On some keyboards and laptops, users can also mute the mic by pressing a key (whose icon is a crossed out mic).

When users set the mic capturing volume via the UI or a keyboard shortcut, the SDK will return this callback.

Attention
You need to call enableAudioVolumeEvaluation and set the callback interval (interval > 0) to enable the callback. To disable the callback, set interval to 0.
Parameters
volume: System audio capturing volume, which users can set in the audio control panel. Value range: 0-100
muted: Whether the mic is muted. YES: muted; NO: unmuted
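As a sketch of the prerequisite described in the Attention note above, you would enable the volume callbacks once after creating the TRTCCloud instance. Older SDK versions expose `enableAudioVolumeEvaluation:` taking an interval in milliseconds (newer versions add a params variant); `self.trtcCloud` is assumed to be your TRTCCloud instance:

```objectivec
// Enable volume callbacks, reporting roughly every 300 ms.
// Pass 0 to disable the callbacks again.
[self.trtcCloud enableAudioVolumeEvaluation:300];
```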

◆ onAudioDevicePlayoutVolumeChanged:muted:()

- (void) onAudioDevicePlayoutVolumeChanged: (NSInteger)  volume
muted: (BOOL)  muted 

The playback volume changed

On desktop OS such as macOS and Windows, users can set the system’s playback volume in the audio control panel. On some keyboards and laptops, users can also mute the speaker by pressing a key (whose icon is a crossed out speaker).

When users set the system’s playback volume via the UI or a keyboard shortcut, the SDK will return this callback.

Attention
You need to call enableAudioVolumeEvaluation and set the callback interval (interval > 0) to enable the callback. To disable the callback, set interval to 0.
Parameters
volume: The system playback volume, which users can set in the audio control panel. Value range: 0-100
muted: Whether the speaker is muted. YES: muted; NO: unmuted

◆ onAudioEffectFinished:code:()

- (void) onAudioEffectFinished: (int)  effectId
code: (int)  code 

Audio effects ended (disused)

Deprecated:
This callback is not recommended in the new version. Please use ITXAudioEffectManager instead. Audio effects and background music can be started using the same API (startPlayMusic) now instead of separate ones.

◆ onAudioRouteChanged:fromRoute:()

- (void) onAudioRouteChanged: (TRTCAudioRoute)  route
fromRoute: (TRTCAudioRoute)  fromRoute 

The audio route changed (for mobile devices only)

Audio route is the route (speaker or receiver) through which audio is played.

  • When audio is played through the receiver, the volume is relatively low, and the sound can be heard only when the phone is put near the ear. This mode has a high level of privacy and is suitable for answering calls.
  • When audio is played through the speaker, the volume is relatively high, and there is no need to put the phone near the ear. This mode enables the "hands-free" feature.
Parameters
route: Audio route, i.e., the route (speaker or receiver) through which audio is played
fromRoute: The audio route used before the change

◆ onCameraDidReady()

- (void) onCameraDidReady

The camera is ready

After you call startLocalPreview, the SDK will try to start the camera and return this callback if the camera is started. If it fails to start the camera, it’s probably because the application does not have access to the camera or the camera is being used. You can capture the onError callback to learn about the exception and let users know via UI messages.

◆ onConnectionLost()

- (void) onConnectionLost

The SDK was disconnected from the cloud

The SDK returns this callback when it is disconnected from the cloud, which may be caused by network unavailability or change of network, for example, when the user walks into an elevator. After returning this callback, the SDK will attempt to reconnect to the cloud, and will return the onTryToReconnect callback. When it is reconnected, it will return the onConnectionRecovery callback. In other words, the SDK proceeds from one event to the next in the following order:

        [onConnectionLost] =====> [onTryToReconnect] =====> [onConnectionRecovery]
              /|\                                                     |
               |------------------------------------------------------|
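The three connection callbacks above can be surfaced directly to the user. A minimal sketch of a delegate doing so (`showToast:` is a hypothetical UI helper of your own, not part of the SDK):

```objectivec
// Mirror the SDK's connection state machine in the UI.
- (void)onConnectionLost {
    [self showToast:@"Connection lost, reconnecting..."];
}

- (void)onTryToReconnect {
    [self showToast:@"Reconnecting..."];
}

- (void)onConnectionRecovery {
    [self showToast:@"Connection restored"];
}
```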

◆ onConnectionRecovery()

- (void) onConnectionRecovery

The SDK is reconnected to the cloud

When the SDK is disconnected from the cloud, it returns the onConnectionLost callback. It then attempts to reconnect and returns the onTryToReconnect callback. After it is reconnected, it returns this callback (onConnectionRecovery).

◆ onConnectOtherRoom:errCode:errMsg:()

- (void) onConnectOtherRoom: (NSString *)  userId
errCode: (TXLiteAVError)  errCode
errMsg: (nullable NSString *)  errMsg 

Result of requesting cross-room call

You can call the connectOtherRoom() API in TRTCCloud to establish a video call with the anchor of another room. This is the “anchor competition” feature. The caller will receive the onConnectOtherRoom() callback, which can be used to determine whether the cross-room call is successful. If it is successful, all users in either room will receive the onUserVideoAvailable() callback from the anchor of the other room.

Parameters
userId: The user ID of the anchor (in another room) to be called
errCode: Error code. ERR_NULL indicates that cross-room connection is established successfully. For more information, please see Error Codes.
errMsg: Error message

◆ onDevice:type:stateChanged:()

- (void) onDevice: (NSString *)  deviceId
type: (TRTCMediaDeviceType)  deviceType
stateChanged: (NSInteger)  state 

The status of a local device changed (for desktop OS only)

The SDK returns this callback when a local device (camera, mic, or speaker) is connected or disconnected.

Parameters
deviceId: Device ID
deviceType: Device type
state: Device status. 0: disconnected; 1: connected

◆ onDisconnectOtherRoom:errMsg:()

- (void) onDisconnectOtherRoom: (TXLiteAVError)  errCode
errMsg: (nullable NSString *)  errMsg 

Result of ending cross-room call

◆ onEnterRoom:()

- (void) onEnterRoom: (NSInteger)  result

Whether room entry is successful

After calling the enterRoom() API in TRTCCloud to enter a room, you will receive the onEnterRoom(result) callback from TRTCCloudDelegate.

  • If room entry succeeded, result will be a positive number (result > 0), indicating the time in milliseconds (ms) the room entry takes.
  • If room entry failed, result will be a negative number (result < 0), indicating the error code for the failure. For more information on the error codes for room entry failure, see Error Codes.
Attention
  1. In TRTC versions below 6.6, the onEnterRoom(result) callback is returned only if room entry succeeds, and the onError() callback is returned if room entry fails.
  2. In TRTC 6.6 and above, the onEnterRoom(result) callback is returned regardless of whether room entry succeeds or fails, and the onError() callback is also returned if room entry fails.
Parameters
result: If result is greater than 0, it indicates the time (in ms) the room entry takes; if result is less than 0, it represents the error code for room entry.
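The sign convention above can be handled in a few lines. A minimal sketch of the delegate method (logging only; a real app would update the UI instead):

```objectivec
// result > 0: room entry succeeded, value is elapsed time in ms.
// result < 0: room entry failed, value is the error code.
- (void)onEnterRoom:(NSInteger)result {
    if (result > 0) {
        NSLog(@"Entered room in %ld ms", (long)result);
    } else {
        NSLog(@"Failed to enter room, error code: %ld", (long)result);
    }
}
```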

◆ onError:errMsg:extInfo:()

- (void) onError: (TXLiteAVError)  errCode
errMsg: (nullable NSString *)  errMsg
extInfo: (nullable NSDictionary *)  extInfo 

Error event callback

Error event, which indicates that the SDK threw an irrecoverable error, such as room entry failure or failure to start a device. For more information, see Error Codes.

Parameters
errCode: Error code
errMsg: Error message
extInfo: Extended field. Certain error codes may carry extra information for troubleshooting.

◆ onExitRoom:()

- (void) onExitRoom: (NSInteger)  reason

Room exit

Calling the exitRoom() API in TRTCCloud will trigger the execution of room exit-related logic, such as releasing resources of audio/video devices and codecs. After all resources occupied by the SDK are released, the SDK will return the onExitRoom() callback.

If you need to call enterRoom() again or switch to another audio/video SDK, please wait until you receive the onExitRoom() callback. Otherwise, you may encounter problems such as the camera or mic being occupied.

Parameters
reason: Reason for room exit. 0: the user called exitRoom to exit the room; 1: the user was removed from the room by the server; 2: the room was dismissed.

◆ onFirstAudioFrame:()

- (void) onFirstAudioFrame: (NSString *)  userId

The SDK started playing the first audio frame of a remote user

The SDK returns this callback when it plays the first audio frame of a remote user. The callback is not returned for the playing of the first audio frame of the local user.

Parameters
userId: User ID of the remote user

◆ onFirstVideoFrame:streamType:width:height:()

- (void) onFirstVideoFrame: (NSString *)  userId
streamType: (TRTCVideoStreamType)  streamType
width: (int)  width
height: (int)  height 

The SDK started rendering the first video frame of the local or a remote user

The SDK returns this event callback when it starts rendering your first video frame or that of a remote user. The userId in the callback can help you determine whether the frame is yours or a remote user’s.

  • If userId is empty, it indicates that the SDK has started rendering your first video frame. The precondition is that you have called startLocalPreview or startScreenCapture.
  • If userId is not empty, it indicates that the SDK has started rendering the first video frame of a remote user. The precondition is that you have called startRemoteView to subscribe to the user’s video.
Attention
  1. The callback of the first local video frame being rendered is triggered only after you call startLocalPreview or startScreenCapture.
  2. The callback of the first video frame of a remote user being rendered is triggered only after you call startRemoteView or startRemoteSubStreamView.
Parameters
userId: The user ID of the local or a remote user. If it is empty, it indicates that the first local video frame is available; if it is not empty, it indicates that the first video frame of a remote user is available.
streamType: Video stream type. The primary stream (Main) is usually used for camera images, and the substream (Sub) for screen sharing images.
width: Video width
height: Video height
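The empty-userId convention above makes it easy to branch between the local and remote cases in one method. A minimal logging sketch:

```objectivec
// An empty userId means the first *local* frame was rendered;
// otherwise it is the first frame of that remote user.
- (void)onFirstVideoFrame:(NSString *)userId
               streamType:(TRTCVideoStreamType)streamType
                    width:(int)width
                   height:(int)height {
    if (userId.length == 0) {
        NSLog(@"First local video frame rendered (%dx%d)", width, height);
    } else {
        NSLog(@"First frame of %@ rendered (%dx%d)", userId, width, height);
    }
}
```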

◆ onLocalRecordBegin:storagePath:()

- (void) onLocalRecordBegin: (NSInteger)  errCode
storagePath: (NSString *)  storagePath 

Local recording started

When you call startLocalRecording to start local recording, the SDK returns this callback to notify you whether recording is started successfully.

Parameters
errCode: Error code. 0: recording started successfully; -1: failed to start recording; -2: incorrect file extension
storagePath: Storage path of recording file

◆ onLocalRecordComplete:storagePath:()

- (void) onLocalRecordComplete: (NSInteger)  errCode
storagePath: (NSString *)  storagePath 

Local recording stopped

When you call stopLocalRecording to stop local recording, the SDK returns this callback to notify you of the recording result.

Parameters
errCode: Error code. 0: recording succeeded; -1: recording failed; -2: recording was ended due to change of resolution or switch between the landscape and portrait mode.
storagePath: Storage path of recording file

◆ onLocalRecording:storagePath:()

- (void) onLocalRecording: (NSInteger)  duration
storagePath: (NSString *)  storagePath 

Local media is being recorded

The SDK returns this callback regularly after local recording is started successfully via the calling of startLocalRecording. You can capture this callback to stay up to date with the status of the recording task. You can set the callback interval when calling startLocalRecording.

Parameters
duration: Cumulative duration of recording, in milliseconds
storagePath: Storage path of recording file

◆ onMicDidReady()

- (void) onMicDidReady

The mic is ready

After you call startLocalAudio, the SDK will try to start the mic and return this callback if the mic is started. If it fails to start the mic, it’s probably because the application does not have access to the mic or the mic is being used. You can capture the onError callback to learn about the exception and let users know via UI messages.

◆ onMissCustomCmdMsgUserId:cmdID:errCode:missed:()

- (void) onMissCustomCmdMsgUserId: (NSString *)  userId
cmdID: (NSInteger)  cmdID
errCode: (NSInteger)  errCode
missed: (NSInteger)  missed 

Loss of custom message

When you use sendCustomCmdMsg to send a custom UDP message, even if you enable reliable transfer (by setting reliable to YES), there is still a chance of message loss. Reliable transfer only helps maintain a low probability of message loss, which meets the reliability requirements in most cases. If the sender sets reliable to YES, the SDK will use this callback to notify the recipient of the number of custom messages lost during a specified time period (usually 5s) in the past.

Attention
The recipient receives this callback only if the sender sets reliable to YES.
Parameters
userId: User ID
cmdID: Command ID
errCode: Error code
missed: Number of lost messages

◆ onNetworkQuality:remoteQuality:()

- (void) onNetworkQuality: (TRTCQualityInfo *)  localQuality
remoteQuality: (NSArray< TRTCQualityInfo * > *)  remoteQuality 

Real-time network quality statistics

This callback is returned every 2 seconds and notifies you of the upstream and downstream network quality detected by the SDK. The SDK uses a built-in proprietary algorithm to assess the current latency, bandwidth, and stability of the network and returns a result. If the result is 1 (excellent), it means that the current network conditions are excellent; if it is 6 (down), it means that the current network conditions are too bad to support TRTC calls.

Attention
In the returned parameters localQuality and remoteQuality, if userId is empty, it indicates that the network quality statistics of the local user are returned. Otherwise, the network quality statistics of a remote user are returned.
Parameters
localQuality: Upstream network quality
remoteQuality: Downstream network quality
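As a sketch, the local quality value can drive a simple warning in the UI. This assumes the `TRTCQuality` enum ordering (1 = excellent through 6 = down, as described above) and a hypothetical `showNetworkWarning` helper of your own:

```objectivec
// Warn the user when the local uplink quality degrades.
// Assumption: higher TRTCQuality raw values mean worse quality,
// per the 1 (excellent) .. 6 (down) scale documented above.
- (void)onNetworkQuality:(TRTCQualityInfo *)localQuality
           remoteQuality:(NSArray<TRTCQualityInfo *> *)remoteQuality {
    if (localQuality.quality >= 4) {     // "bad" or worse on the 1-6 scale
        [self showNetworkWarning];       // hypothetical UI helper
    }
}
```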

◆ onRecvCustomCmdMsgUserId:cmdID:seq:message:()

- (void) onRecvCustomCmdMsgUserId: (NSString *)  userId
cmdID: (NSInteger)  cmdID
seq: (UInt32)  seq
message: (NSData *)  message 

Receipt of custom message

When a user in a room uses sendCustomCmdMsg to send a custom message, other users in the room can receive the message through the onRecvCustomCmdMsg callback.

Parameters
userId: User ID
cmdID: Command ID
seq: Message serial number
message: Message data
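Since the payload arrives as raw NSData, the recipient must decode it in whatever format the sender chose. A minimal sketch, assuming the sender encoded the payload as UTF-8 text:

```objectivec
- (void)onRecvCustomCmdMsgUserId:(NSString *)userId
                           cmdID:(NSInteger)cmdID
                             seq:(UInt32)seq
                         message:(NSData *)message {
    // Assumption: the sender used UTF-8 text; adjust to your own wire format.
    NSString *text = [[NSString alloc] initWithData:message
                                           encoding:NSUTF8StringEncoding];
    NSLog(@"cmd %ld from %@: %@", (long)cmdID, userId, text);
}
```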

◆ onRecvSEIMsg:message:()

- (void) onRecvSEIMsg: (NSString *)  userId
message: (NSData *)  message 

Receipt of SEI message

If a user in the room uses sendSEIMsg to send an SEI message via video frames, other users in the room can receive the message through the onRecvSEIMsg callback.

Parameters
userId: User ID
message: Data

◆ onRemoteUserEnterRoom:()

- (void) onRemoteUserEnterRoom: (NSString *)  userId

A user entered the room

Due to performance concerns, this callback works differently in different scenarios (i.e., AppScene, which you can specify by setting the second parameter when calling enterRoom).

  • Live streaming scenarios (TRTCAppSceneLIVE or TRTCAppSceneVoiceChatRoom): in live streaming scenarios, a user is either in the role of an anchor or audience. The callback is returned only when an anchor enters the room.
  • Call scenarios (TRTCAppSceneVideoCall or TRTCAppSceneAudioCall): in call scenarios, the concept of roles does not apply (all users can be considered as anchors), and the callback is returned when any user enters the room.
Attention
  1. The onRemoteUserEnterRoom callback indicates that a user entered the room, but it does not necessarily mean that the user enabled audio or video.
  2. If you want to know whether a user enabled video, we recommend you use the onUserVideoAvailable() callback.
Parameters
userId: User ID of the remote user

◆ onRemoteUserLeaveRoom:reason:()

- (void) onRemoteUserLeaveRoom: (NSString *)  userId
reason: (NSInteger)  reason 

A user exited the room

As with onRemoteUserEnterRoom, this callback works differently in different scenarios (i.e., AppScene, which you can specify by setting the second parameter when calling enterRoom).

  • Live streaming scenarios (TRTCAppSceneLIVE or TRTCAppSceneVoiceChatRoom): the callback is triggered only when an anchor exits the room.
  • Call scenarios (TRTCAppSceneVideoCall or TRTCAppSceneAudioCall): in call scenarios, the concept of roles does not apply, and the callback is returned when any user exits the room.
Parameters
userId: User ID of the remote user
reason: Reason for room exit. 0: the user exited the room voluntarily; 1: the user exited the room due to timeout; 2: the user was removed from the room.

◆ onRemoteVideoStatusUpdated:streamType:streamStatus:reason:extrainfo:()

- (void) onRemoteVideoStatusUpdated: (NSString *)  userId
streamType: (TRTCVideoStreamType)  streamType
streamStatus: (TRTCAVStatusType)  status
reason: (TRTCAVStatusChangeReason)  reason
extrainfo: (nullable NSDictionary *)  info 

Change of remote video status

You can use this callback to get the status (Playing, Loading, or Stopped) of the video of each remote user and display it on the UI.

Parameters
userId: User ID
streamType: Video stream type. The primary stream (Main) is usually used for camera images, and the substream (Sub) for screen sharing images.
status: Video status, which may be Playing, Loading, or Stopped
reason: Reason for the change of status
extraInfo: Extra information

◆ onScreenCapturePaused:()

- (void) onScreenCapturePaused: (int)  reason

Screen sharing was paused

The SDK returns this callback when you call pauseScreenCapture to pause screen sharing.

Parameters
reason: Reason for the pause.
  • 0: the user paused screen sharing.
  • 1: the shared window became invisible (macOS); or sharing parameters were being set (Windows).
  • 2: the shared window was minimized (Windows only).
  • 3: the shared window became invisible (Windows only).

◆ onScreenCaptureResumed:()

- (void) onScreenCaptureResumed: (int)  reason

Screen sharing was resumed

The SDK returns this callback when you call resumeScreenCapture to resume screen sharing.

Parameters
reason: Reason for the resumption.
  • 0: the user resumed screen sharing.
  • 1: screen sharing resumed automatically after the shared window became visible again (macOS); or after parameter setting completed (Windows).
  • 2: screen sharing resumed automatically after the shared window was restored from its minimized state (Windows only).
  • 3: screen sharing resumed automatically after the shared window became visible again (Windows only).

◆ onScreenCaptureStarted()

- (void) onScreenCaptureStarted

Screen sharing started

The SDK returns this callback when you call startScreenCapture and other APIs to start screen sharing.

◆ onScreenCaptureStoped:()

- (void) onScreenCaptureStoped: (int)  reason

Screen sharing stopped

The SDK returns this callback when you call stopScreenCapture to stop screen sharing.

Parameters
reason: Reason. 0: the user stopped screen sharing; 1: screen sharing stopped because the shared window was closed.

◆ onSendFirstLocalAudioFrame()

- (void) onSendFirstLocalAudioFrame

The first local audio frame was published

After you enter a room and call startLocalAudio to enable audio capturing (whichever happens first), the SDK will start audio encoding and publish the local audio data via its network module to the cloud. The SDK returns the onSendFirstLocalAudioFrame callback after sending the first local audio frame.

◆ onSendFirstLocalVideoFrame:()

- (void) onSendFirstLocalVideoFrame: (TRTCVideoStreamType)  streamType

The first local video frame was published

After you enter a room and call startLocalPreview or startScreenCapture to enable local video capturing (whichever happens first), the SDK will start video encoding and publish the local video data via its network module to the cloud. It returns the onSendFirstLocalVideoFrame callback after publishing the first local video frame.

Parameters
streamType: Video stream type. The primary stream (Main) is usually used for camera images, and the substream (Sub) for screen sharing images.

◆ onSetMixTranscodingConfig:errMsg:()

- (void) onSetMixTranscodingConfig: (int)  err
errMsg: (NSString *)  errMsg 

Result of setting the layout and transcoding parameters for On-Cloud MixTranscoding

When you call setMixTranscodingConfig to modify the layout and transcoding parameters for On-Cloud MixTranscoding, the SDK will sync the command to the CVM immediately. The SDK will then receive the execution result from the CVM and return the result to you via this callback.

Parameters
err: 0: successful; other values: failed
errMsg: Error message

◆ onSpeedTestResult:()

- (void) onSpeedTestResult: (TRTCSpeedTestResult *)  result

Callback of network speed test

The callback is triggered by startSpeedTest:.

Parameters
result: Speed test data, including the packet loss rate, round-trip time (RTT), and bandwidth. For details, please refer to TRTCSpeedTestResult.

◆ onStartPublishCDNStream:errMsg:()

- (void) onStartPublishCDNStream: (int)  err
errMsg: (NSString *)  errMsg 

Started publishing to non-Tencent Cloud’s live streaming CDN

When you call startPublishCDNStream to start publishing streams to a non-Tencent Cloud’s live streaming CDN, the SDK will sync the command to the CVM immediately. The SDK will then receive the execution result from the CVM and return the result to you via this callback.

Attention
If you receive a callback that the command is executed successfully, it only means that your command was sent to Tencent Cloud’s backend server. If the CDN vendor does not accept your streams, the publishing will still fail.
Parameters
err: 0: successful; other values: failed
errMsg: Error message

◆ onStartPublishing:errMsg:()

- (void) onStartPublishing: (int)  err
errMsg: (NSString *)  errMsg 

Started publishing to Tencent Cloud CSS CDN

When you call startPublishing to publish streams to Tencent Cloud CSS CDN, the SDK will sync the command to the CVM immediately. The SDK will then receive the execution result from the CVM and return the result to you via this callback.

Parameters
err: 0: successful; other values: failed
errMsg: Error message

◆ onStatistics:()

- (void) onStatistics: (TRTCStatistics *)  statistics

Real-time statistics on technical metrics

This callback is returned every 2 seconds and notifies you of the statistics on technical metrics related to video, audio, and network. The metrics are listed in TRTCStatistics:

  • Video statistics: video resolution (resolution), frame rate (FPS), bitrate (bitrate), etc.
  • Audio statistics: audio sample rate (samplerate), number of audio channels (channel), bitrate (bitrate), etc.
  • Network statistics: the round trip time (rtt) between the SDK and the cloud (SDK -> Cloud -> SDK), package loss rate (loss), upstream traffic (sentBytes), downstream traffic (receivedBytes), etc.
Attention
If you want to learn about only the current network quality and do not want to spend much time analyzing the statistics returned by this callback, we recommend you use onNetworkQuality.
Parameters
statistics: Statistics, including local statistics and the statistics of remote users. For details, please see TRTCStatistics.

◆ onStopPublishCDNStream:errMsg:()

- (void) onStopPublishCDNStream: (int)  err
errMsg: (NSString *)  errMsg 

Stopped publishing to non-Tencent Cloud’s live streaming CDN

When you call stopPublishCDNStream to stop publishing to a non-Tencent Cloud’s live streaming CDN, the SDK will sync the command to the CVM immediately. The SDK will then receive the execution result from the CVM and return the result to you via this callback.

Parameters
err: 0: successful; other values: failed
errMsg: Error message

◆ onStopPublishing:errMsg:()

- (void) onStopPublishing: (int)  err
errMsg: (NSString *)  errMsg 

Stopped publishing to Tencent Cloud CSS CDN

When you call stopPublishing to stop publishing streams to Tencent Cloud CSS CDN, the SDK will sync the command to the CVM immediately. The SDK will then receive the execution result from the CVM and return the result to you via this callback.

Parameters
err: 0: successful; other values: failed
errMsg: Error message

◆ onSwitchRole:errMsg:()

- (void) onSwitchRole: (TXLiteAVError)  errCode
errMsg: (nullable NSString *)  errMsg 

Role switching

You can call the switchRole() API in TRTCCloud to switch between the anchor and audience roles. This is accompanied by a line switching process. After the switching, the SDK will return the onSwitchRole() event callback.

Parameters
errCode: Error code. ERR_NULL indicates a successful switch. For more information, please see Error Codes.
errMsg: Error message

◆ onSwitchRoom:errMsg:()

- (void) onSwitchRoom: (TXLiteAVError)  errCode
errMsg: (nullable NSString *)  errMsg 

Result of room switching

You can call the switchRoom() API in TRTCCloud to switch from one room to another. After the switching, the SDK will return the onSwitchRoom() event callback.

Parameters
errCode: Error code. ERR_NULL indicates a successful switch. For more information, please see Error Codes.
errMsg: Error message

◆ onSystemAudioLoopbackError:()

- (void) onSystemAudioLoopbackError: (TXLiteAVError)  err

Whether system audio capturing is enabled successfully (for macOS only)

On macOS, you can call startSystemAudioLoopback to install an audio driver and have the SDK capture the audio played back by the system. In use cases such as video teaching and music live streaming, the teacher can use this feature to let the SDK capture the sound of the video played by his or her computer, so that students in the room can hear the sound too. The SDK returns this callback after trying to enable system audio capturing. To determine whether it is actually enabled, pay attention to the error parameter in the callback.

Parameters
err: If it is ERR_NULL, system audio capturing is enabled successfully. Otherwise, it is not.

◆ onTryToReconnect()

- (void) onTryToReconnect

The SDK is reconnecting to the cloud

When the SDK is disconnected from the cloud, it returns the onConnectionLost callback. It then attempts to reconnect and returns this callback (onTryToReconnect). After it is reconnected, it returns the onConnectionRecovery callback.

◆ onUserAudioAvailable:available:()

- (void) onUserAudioAvailable: (NSString *)  userId
available: (BOOL)  available 

A remote user published/unpublished audio

If you receive the onUserAudioAvailable(userId, YES) callback, it indicates that the user published audio.

  • In auto-subscription mode, the SDK will play the user’s audio automatically.
  • In manual subscription mode, you can call muteRemoteAudio(userid, NO) to play the user’s audio.
Attention
The auto-subscription mode is used by default. You can switch to the manual subscription mode by calling setDefaultStreamRecvMode, but it must be called before room entry for the switch to take effect.
Parameters
userId: User ID of the remote user
available: Whether the user published (or unpublished) audio. YES: published; NO: unpublished

◆ onUserEnter:()

- (void) onUserEnter: (NSString *)  userId

An anchor entered the room (disused)

Deprecated:
This callback is not recommended in the new version. Please use onRemoteUserEnterRoom instead.

◆ onUserExit:reason:()

- (void) onUserExit: (NSString *)  userId
reason: (NSInteger)  reason 

An anchor left the room (disused)

Deprecated:
This callback is not recommended in the new version. Please use onRemoteUserLeaveRoom instead.

◆ onUserSubStreamAvailable:available:()

- (void) onUserSubStreamAvailable: (NSString *)  userId
available: (BOOL)  available 

A remote user published/unpublished substream video

The substream is usually used for screen sharing images. If you receive the onUserSubStreamAvailable(userId, YES) callback, it indicates that the user has available substream video. You can then call startRemoteSubStreamView to subscribe to the remote user’s video. If the subscription is successful, you will receive the onFirstVideoFrame(userid) callback, which indicates that the first frame of the user is rendered.

Attention
The API used to display substream images is startRemoteSubStreamView, not startRemoteView.
Parameters
userId: User ID of the remote user
available: Whether the user published (or unpublished) substream video. YES: published; NO: unpublished

◆ onUserVideoAvailable:available:()

- (void) onUserVideoAvailable: (NSString *)  userId
available: (BOOL)  available 

A remote user published/unpublished primary stream video

The primary stream is usually used for camera images. If you receive the onUserVideoAvailable(userId, YES) callback, it indicates that the user has available primary stream video. You can then call startRemoteView to subscribe to the remote user’s video. If the subscription is successful, you will receive the onFirstVideoFrame(userid) callback, which indicates that the first video frame of the user is rendered.

If you receive the onUserVideoAvailable(userId, NO) callback, it indicates that the video of the remote user is disabled, which may be because the user called muteLocalVideo or stopLocalPreview.

Parameters
userIdUser ID of the remote user
availableWhether the user published (or unpublished) primary stream video. YES: published; NO: unpublished

◆ onUserVideoSizeChanged:streamType:newWidth:newHeight:()

- (void) onUserVideoSizeChanged: (NSString *)  userId
streamType: (TRTCVideoStreamType)  streamType
newWidth: (int)  newWidth
newHeight: (int)  newHeight 

Change of remote video size

If you receive the onUserVideoSizeChanged(userId, streamType, newWidth, newHeight) callback, it indicates that the user changed the video size. It may be triggered by setVideoEncoderParam or setSubStreamEncoderParam.

Parameters
userIdUser ID
streamTypeVideo stream type. The primary stream (Main) is usually used for camera images, and the substream (Sub) for screen sharing images.
newWidthVideo width
newHeightVideo height

◆ onUserVoiceVolume:totalVolume:()

- (void) onUserVoiceVolume: (NSArray< TRTCVolumeInfo * > *)  userVolumes
totalVolume: (NSInteger)  totalVolume 

Volume

The SDK can assess the volume of each channel and return this callback on a regular basis. You can display, for example, a waveform or volume bar on the UI based on the statistics returned. You need to first call enableAudioVolumeEvaluation to enable the feature and set the interval for the callback. Note that the SDK returns this callback at the specified interval regardless of whether someone is speaking in the room. When no one is speaking in the room, userVolumes is empty, and totalVolume is 0.

Attention
userVolumes is an array. If userId is empty, the elements in the array represent the volume of the local user’s audio. Otherwise, they represent the volume of a remote user’s audio.
Parameters
userVolumesAn array that represents the volume of all users who are speaking in the room. Value range: 0-100
totalVolumeThe total volume of all remote users. Value range: 0-100
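For example, a volume bar driven from these values only needs a proportional mapping of the documented 0-100 range to a pixel width (a minimal sketch in plain C; the 200 px maximum width in the comment is an arbitrary example):

```c
/* Map a TRTC volume value (documented range 0-100) to a bar width
 * in pixels, clamping defensively against out-of-range input. */
static int volume_to_bar_width(int volume, int max_width_px) {
    if (volume < 0)   volume = 0;
    if (volume > 100) volume = 100;
    return volume * max_width_px / 100;
}
/* volume_to_bar_width(75, 200) -> 150 */
```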

◆ onWarning:warningMsg:extInfo:()

- (void) onWarning: (TXLiteAVWarning)  warningCode
warningMsg: (nullable NSString *)  warningMsg
extInfo: (nullable NSDictionary *)  extInfo 

Warning event callback

Warning event, which indicates that the SDK threw a warning requiring attention, such as video lag or high CPU usage. For more information, please see Error Codes.

Parameters
warningCodeWarning code
warningMsgWarning message
extInfoExtended field. Certain warning codes may carry extra information for troubleshooting.

◆ TRTCVideoRenderDelegate-p

protocol TRTCVideoRenderDelegate-p
+ Inheritance diagram for <TRTCVideoRenderDelegate>:

Instance Methods

(void) - onRenderVideoFrame:userId:streamType:
 

Method Documentation

◆ onRenderVideoFrame:userId:streamType:()

- (void) onRenderVideoFrame: (TRTCVideoFrame *_Nonnull)  frame
userId: (NSString *__nullable)  userId
streamType: (TRTCVideoStreamType)  streamType
optional

Custom video rendering

If you have configured the callback of custom rendering for local or remote video, the SDK will return to you via this callback video frames that are otherwise sent to the rendering control, so that you can customize rendering.

Parameters
frameVideo frames to be rendered
userIduserId of the video source. This parameter can be ignored if the callback is for local video (setLocalVideoRenderDelegate).
streamTypeStream type. The primary stream (Main) is usually used for camera images, and the substream (Sub) for screen sharing images.

◆ TRTCVideoFrameDelegate-p

protocol TRTCVideoFrameDelegate-p
+ Inheritance diagram for <TRTCVideoFrameDelegate>:

Instance Methods

(uint32_t) - onProcessVideoFrame:dstFrame:
 
(void) - onGLContextDestory
 

Method Documentation

◆ onGLContextDestory()

- (void) onGLContextDestory
optional

The OpenGL context in the SDK was destroyed

◆ onProcessVideoFrame:dstFrame:()

- (uint32_t) onProcessVideoFrame: (TRTCVideoFrame *_Nonnull)  srcFrame
dstFrame: (TRTCVideoFrame *_Nonnull)  dstFrame 
optional

Video processing by third-party beauty filters

If you use a third-party beauty filter component, you need to configure this callback in TRTCCloud to have the SDK return to you video frames that are otherwise pre-processed by TRTC. You can then send the video frames to the third-party beauty filter component for processing. As the data returned can be read and modified, the result of processing can be synced to TRTC for subsequent encoding and publishing.

Parameters
srcFrameUsed to carry images captured by TRTC via the camera
dstFrameUsed to receive video images processed by third-party beauty filters
Attention
Currently, only the OpenGL texture scheme is supported (on PC, only the TRTCVideoBufferType_Buffer format is supported).

Case 1: the beauty filter component generates new textures
If the beauty filter component you use generates a new texture (carrying the processed image) during image processing, please set dstFrame.textureId to the ID of the new texture in the callback function.

- (uint32_t)onProcessVideoFrame:(TRTCVideoFrame *_Nonnull)srcFrame dstFrame:(TRTCVideoFrame *_Nonnull)dstFrame {
    self.frameID += 1;
    dstFrame.pixelBuffer = [[FURenderer shareRenderer] renderPixelBuffer:srcFrame.pixelBuffer
                                                             withFrameId:self.frameID
                                                                   items:self.renderItems
                                                               itemCount:self.renderItems.count];
    return 0;
}

Case 2: you need to provide target textures to the beauty filter component
If the third-party beauty filter component you use does not generate new textures, and you need to manually set an input texture and an output texture for the component, you can consider the following scheme:

- (uint32_t)onProcessVideoFrame:(TRTCVideoFrame *_Nonnull)srcFrame dstFrame:(TRTCVideoFrame *_Nonnull)dstFrame {
    thirdparty_process(srcFrame.textureId, srcFrame.width, srcFrame.height, dstFrame.textureId);
    return 0;
}

◆ TRTCAudioFrameDelegate-p

protocol TRTCAudioFrameDelegate-p
+ Inheritance diagram for <TRTCAudioFrameDelegate>:

Instance Methods

(void) - onCapturedRawAudioFrame:
 
(void) - onLocalProcessedAudioFrame:
 
(void) - onRemoteUserAudioFrame:userId:
 
(void) - onMixedPlayAudioFrame:
 
(void) - onMixedAllAudioFrame:
 

Method Documentation

◆ onCapturedRawAudioFrame:()

- (void) onCapturedRawAudioFrame: (TRTCAudioFrame *)  frame
optional

Audio data captured by the local mic and pre-processed by the audio module

After you configure the callback of custom audio processing, the SDK will return via this callback the data captured and pre-processed (ANS, AEC, and AGC) in PCM format.

  • The audio returned is in PCM format and has a fixed frame length (time) of 0.02s.
  • The formula to convert a frame length in seconds to one in bytes is sample rate * frame length in seconds * number of sound channels * audio bit depth.
  • Assume that the audio is recorded on a single channel with a sample rate of 48,000 Hz and an audio bit depth of 16 bits, which are the default settings of TRTC. The frame length in bytes will be 48000 * 0.02s * 1 * 16 bits = 15360 bits = 1920 bytes.
Parameters
frameAudio frames in PCM format
Attention
  1. Please avoid time-consuming operations in this callback function. The SDK processes an audio frame every 20 ms, so if your operation takes more than 20 ms, it will cause audio exceptions.
  2. The audio data returned via this callback can be read and modified, but please keep the duration of your operation short.
  3. The audio data is returned via this callback after ANS, AEC, and AGC, but it does not include effects such as background music, audio effects, or reverb, and therefore has a short delay.
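The frame-length conversion above is easy to verify with a few lines of plain C arithmetic (the 48 kHz / mono / 16-bit values are the TRTC defaults stated above):

```c
/* Bytes per PCM audio frame:
 * sample rate (Hz) * frame length (s) * channels * bit depth / 8. */
static int pcm_frame_bytes(int sample_rate_hz, double frame_len_s,
                           int channels, int bit_depth_bits) {
    return (int)(sample_rate_hz * frame_len_s) * channels * (bit_depth_bits / 8);
}
/* pcm_frame_bytes(48000, 0.02, 1, 16) -> 1920, matching the 0.02 s
 * frame length described for this callback. */
```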

◆ onLocalProcessedAudioFrame:()

- (void) onLocalProcessedAudioFrame: (TRTCAudioFrame *)  frame
optional

Audio data captured by the local mic, pre-processed by the audio module, effect-processed and BGM-mixed

After you configure the callback of custom audio processing, the SDK will return via this callback the data captured, pre-processed (ANS, AEC, and AGC), effect-processed and BGM-mixed in PCM format, before it is submitted to the network module for encoding.

  • The audio data returned via this callback is in PCM format and has a fixed frame length (time) of 0.02s.
  • The formula to convert a frame length in seconds to one in bytes is sample rate * frame length in seconds * number of sound channels * audio bit depth.
  • Assume that the audio is recorded on a single channel with a sample rate of 48,000 Hz and an audio bit depth of 16 bits, which are the default settings of TRTC. The frame length in bytes will be 48000 * 0.02s * 1 * 16 bits = 15360 bits = 1920 bytes.

Instructions: you can write data to the TRTCAudioFrame.extraData field to transmit signaling messages. Because the data block of the audio frame header cannot be too large, we recommend limiting the size of the signaling data to a few bytes when using this API; if the extra data exceeds 100 bytes, it will not be sent. Other users in the room can receive the message through TRTCAudioFrame.extraData in the onRemoteUserAudioFrame callback of TRTCAudioFrameDelegate.

Parameters
frameAudio frames in PCM format
Attention
  1. Please avoid time-consuming operations in this callback function. The SDK processes an audio frame every 20 ms, so if your operation takes more than 20 ms, it will cause audio exceptions.
  2. The audio data returned via this callback can be read and modified, but please keep the duration of your operation short.
  3. Audio data is returned via this callback after ANS, AEC, AGC, effect-processing and BGM-mixing, and therefore the delay is longer than that with onCapturedRawAudioFrame.

◆ onMixedAllAudioFrame:()

- (void) onMixedAllAudioFrame: (TRTCAudioFrame *)  frame
optional

Data mixed from all the captured and to-be-played audio in the SDK

After you configure the callback of custom audio processing, the SDK will return via this callback the data (PCM format) mixed from all captured and to-be-played audio in the SDK, so that you can customize recording.

  • The audio data returned via this callback is in PCM format and has a fixed frame length (time) of 0.02s.
  • The formula to convert a frame length in seconds to one in bytes is sample rate * frame length in seconds * number of sound channels * audio bit depth.
  • Assume that the audio is recorded on a single channel with a sample rate of 48,000 Hz and an audio bit depth of 16 bits, which are the default settings of TRTC. The frame length in bytes will be 48000 * 0.02s * 1 * 16 bits = 15360 bits = 1920 bytes.
Parameters
frameAudio frames in PCM format
Attention
  1. The data returned via this callback is mixed from all audio in the SDK, including local audio after pre-processing (ANS, AEC, and AGC), effect processing, and music mixing, as well as all remote audio, but it does not include the in-ear monitoring data.
  2. The audio data returned via this callback cannot be modified.

◆ onMixedPlayAudioFrame:()

- (void) onMixedPlayAudioFrame: (TRTCAudioFrame *)  frame
optional

Data mixed from each channel before being submitted to the system for playback

After you configure the callback of custom audio processing, the SDK will return to you via this callback the data (PCM format) mixed from each channel before it is submitted to the system for playback.

  • The audio data returned via this callback is in PCM format and has a fixed frame length (time) of 0.02s.
  • The formula to convert a frame length in seconds to one in bytes is sample rate * frame length in seconds * number of sound channels * audio bit depth.
  • Assume that the audio is recorded on a single channel with a sample rate of 48,000 Hz and an audio bit depth of 16 bits, which are the default settings of TRTC. The frame length in bytes will be 48000 * 0.02s * 1 * 16 bits = 15360 bits = 1920 bytes.
Parameters
frameAudio frames in PCM format
Attention
  1. Please avoid time-consuming operations in this callback function. The SDK processes an audio frame every 20 ms, so if your operation takes more than 20 ms, it will cause audio exceptions.
  2. The audio data returned via this callback can be read and modified, but please keep the duration of your operation short.
  3. The audio data returned via this callback is the audio data mixed from each channel before it is played. It does not include the in-ear monitoring data.
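Because the data in this callback is writable, a typical use is a last-step adjustment such as applying a playback gain. A minimal sketch of such a gain on 16-bit PCM samples, with clipping (plain C; the fixed-point num/den pair is an illustrative choice):

```c
#include <stdint.h>
#include <stddef.h>

/* Apply a fixed-point gain (e.g. 1.5x -> num=3, den=2) to 16-bit PCM
 * samples in place, clipping to the int16 range. Keep this fast: the
 * SDK delivers a frame every 20 ms. */
static void apply_gain_i16(int16_t *samples, size_t count, int num, int den) {
    for (size_t i = 0; i < count; i++) {
        int32_t v = (int32_t)samples[i] * num / den;
        if (v >  32767) v =  32767;
        if (v < -32768) v = -32768;
        samples[i] = (int16_t)v;
    }
}
```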

◆ onRemoteUserAudioFrame:userId:()

- (void) onRemoteUserAudioFrame: (TRTCAudioFrame *)  frame
userId: (NSString *)  userId 
optional

Audio data of each remote user before audio mixing

After you configure the callback of custom audio processing, the SDK will return via this callback the raw audio data (PCM format) of each remote user before mixing.

  • The audio data returned via this callback is in PCM format and has a fixed frame length (time) of 0.02s.
  • The formula to convert a frame length in seconds to one in bytes is sample rate * frame length in seconds * number of sound channels * audio bit depth.
  • Assume that the audio is recorded on a single channel with a sample rate of 48,000 Hz and an audio bit depth of 16 bits, which are the default settings of TRTC. The frame length in bytes will be 48000 * 0.02s * 1 * 16 bits = 15360 bits = 1920 bytes.
Parameters
frameAudio frames in PCM format
userIdUser ID
Attention
The audio data returned via this callback can be read but not modified.

◆ TRTCLogDelegate-p

protocol TRTCLogDelegate-p
+ Inheritance diagram for <TRTCLogDelegate>:

Instance Methods

(void) - onLog:LogLevel:WhichModule:
 

Method Documentation

◆ onLog:LogLevel:WhichModule:()

- (void) onLog: (nullable NSString *)  log
LogLevel: (TRTCLogLevel)  level
WhichModule: (nullable NSString *)  module 
optional

Printing of local log

If you want to capture the local log printing event, you can configure the log callback to have the SDK return to you via this callback all logs that are to be printed.

Parameters
logLog content
levelLog level. For more information, please see TRTC_LOG_LEVEL.
moduleReserved field, which is not defined at the moment and has a fixed value of TXLiteAVSDK.