LiteAVSDK
Tencent Cloud TRTC SDK is a high-availability component serving tens of thousands of enterprise customers, committed to helping you minimize your research and development costs.
TRTCCloudListener

Detailed Description

Tencent Cloud TRTC Event Notification Interface.

Module: TRTCCloudDelegate @ TXLiteAVSDK
Function: event callback APIs for TRTC’s video call feature


Data Structure Documentation

◆ com::tencent::trtc::TRTCCloudListener::TRTCVideoRenderListener

interface com::tencent::trtc::TRTCCloudListener::TRTCVideoRenderListener

Public Member Functions

void onRenderVideoFrame (String userId, int streamType, TRTCCloudDef.TRTCVideoFrame frame)
 

Member Function Documentation

◆ onRenderVideoFrame()

void onRenderVideoFrame ( String  userId,
int  streamType,
TRTCCloudDef.TRTCVideoFrame  frame 
)

Custom video rendering

If you have configured the callback of custom rendering for local or remote video, the SDK will return to you via this callback video frames that are otherwise sent to the rendering control, so that you can customize rendering.

Parameters
frame: Video frames to be rendered
userId: userId of the video source. This parameter can be ignored if the callback is for local video (setLocalVideoRenderDelegate).
streamType: Stream type. The primary stream (Main) is usually used for camera images, and the substream (Sub) for screen sharing images.
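
As a sketch of how this callback might be consumed, the snippet below wires up a listener. Note that VideoFrame and VideoRenderListener here are simplified stand-ins for the real TRTCCloudDef.TRTCVideoFrame and TRTCVideoRenderListener types, so the example is self-contained; the describe helper is ours, not part of the SDK.

```java
// Sketch only: "VideoFrame" and "VideoRenderListener" are simplified
// stand-ins for TRTCCloudDef.TRTCVideoFrame and TRTCVideoRenderListener.
public class RenderListenerSketch {
    static class VideoFrame { int width; int height; }

    interface VideoRenderListener {
        void onRenderVideoFrame(String userId, int streamType, VideoFrame frame);
    }

    // Builds a human-readable description of a frame handed to the listener.
    static String describe(String userId, VideoFrame frame) {
        // Per the parameter notes above, userId can be ignored for local video.
        String source = (userId == null || userId.isEmpty()) ? "local" : userId;
        return source + ": " + frame.width + "x" + frame.height;
    }

    public static void main(String[] args) {
        VideoRenderListener listener = (userId, streamType, frame) ->
                System.out.println(describe(userId, frame));
        VideoFrame f = new VideoFrame();
        f.width = 1280; f.height = 720;
        listener.onRenderVideoFrame("", 0, f);      // local video
        listener.onRenderVideoFrame("alice", 0, f); // remote video
    }
}
```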

◆ com::tencent::trtc::TRTCCloudListener::TRTCVideoFrameListener

interface com::tencent::trtc::TRTCCloudListener::TRTCVideoFrameListener

Public Member Functions

void onGLContextCreated ()
 
int onProcessVideoFrame (TRTCCloudDef.TRTCVideoFrame srcFrame, TRTCCloudDef.TRTCVideoFrame dstFrame)
 
void onGLContextDestory ()
 

Member Function Documentation

◆ onGLContextCreated()

void onGLContextCreated ( )

An OpenGL context was created in the SDK.

◆ onGLContextDestory()

void onGLContextDestory ( )

The OpenGL context in the SDK was destroyed

◆ onProcessVideoFrame()

int onProcessVideoFrame ( TRTCCloudDef.TRTCVideoFrame  srcFrame,
TRTCCloudDef.TRTCVideoFrame  dstFrame 
)

Video processing by third-party beauty filters

If you use a third-party beauty filter component, you need to configure this callback in TRTCCloud to have the SDK return to you video frames that are otherwise pre-processed by TRTC. You can then send the video frames to the third-party beauty filter component for processing. As the data returned can be read and modified, the result of processing can be synced to TRTC for subsequent encoding and publishing.

Parameters
srcFrameUsed to carry images captured by TRTC via the camera
dstFrameUsed to receive video images processed by third-party beauty filters
Attention
Currently, only the OpenGL texture scheme is supported (on PC, only the TRTCVideoBufferType_Buffer format is supported).

Case 1: the beauty filter component generates new textures
If the beauty filter component you use generates a new texture (carrying the processed image) during image processing, please set dstFrame.textureId to the ID of the new texture in the callback function.

private final TRTCVideoFrameListener mVideoFrameListener = new TRTCVideoFrameListener() {
    @Override
    public void onGLContextCreated() {
        mFURenderer.onSurfaceCreated();
        mFURenderer.setUseTexAsync(true);
    }
    @Override
    public int onProcessVideoFrame(TRTCVideoFrame srcFrame, TRTCVideoFrame dstFrame) {
        dstFrame.texture.textureId = mFURenderer.onDrawFrameSingleInput(srcFrame.texture.textureId, srcFrame.width, srcFrame.height);
        return 0;
    }
    @Override
    public void onGLContextDestory() {
        mFURenderer.onSurfaceDestroyed();
    }
};

Case 2: you need to provide target textures to the beauty filter component
If the third-party beauty filter component you use does not generate new textures and you need to manually set an input texture and an output texture for the component, you can consider the following scheme:

int onProcessVideoFrame(TRTCCloudDef.TRTCVideoFrame srcFrame, TRTCCloudDef.TRTCVideoFrame dstFrame) {
    thirdparty_process(srcFrame.texture.textureId, srcFrame.width, srcFrame.height, dstFrame.texture.textureId);
    return 0;
}

◆ com::tencent::trtc::TRTCCloudListener::TRTCAudioFrameListener

interface com::tencent::trtc::TRTCCloudListener::TRTCAudioFrameListener

Public Member Functions

void onCapturedRawAudioFrame (TRTCCloudDef.TRTCAudioFrame frame)
 
void onLocalProcessedAudioFrame (TRTCCloudDef.TRTCAudioFrame frame)
 
void onRemoteUserAudioFrame (TRTCCloudDef.TRTCAudioFrame frame, String userId)
 
void onMixedPlayAudioFrame (TRTCCloudDef.TRTCAudioFrame frame)
 
void onMixedAllAudioFrame (TRTCCloudDef.TRTCAudioFrame frame)
 

Member Function Documentation

◆ onCapturedRawAudioFrame()

void onCapturedRawAudioFrame ( TRTCCloudDef.TRTCAudioFrame  frame)

Raw audio data captured locally

After you configure the callback of custom audio processing, the SDK will return to you via this callback the raw audio data (PCM format) captured by the mic.

  • The audio returned is in PCM format and has a fixed frame length (time) of 0.02s.
  • The formula to convert a frame length in seconds to one in bytes is sample rate * frame length in seconds * number of sound channels * audio bit depth.
  • Assume that the audio is recorded on a single channel with a sample rate of 48,000 Hz and an audio bit depth of 16 bits, which are the default settings of TRTC. The frame length in bytes will be 48000 * 0.02s * 1 * 16 bits = 15360 bits = 1920 bytes.
Parameters
frame: Audio frames in PCM format
Attention
  1. Please avoid time-consuming operations in this callback function. The SDK processes an audio frame every 20 ms, so if your operation takes more than 20 ms, it will cause audio exceptions.
  2. The audio data returned via this callback can be read and modified, but please keep the duration of your operation short.
  3. The audio data returned via this callback does not have effects such as background music, audio effects, or reverb mixed in, and therefore has a very short delay.
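
The frame-size arithmetic above can be checked with a small helper (a sketch; the method name is ours, not part of the SDK):

```java
// Computes the PCM frame length in bytes from the formula above:
// sample rate * frame length in seconds * channels * bit depth (then bits -> bytes).
public class PcmFrameSize {
    static int frameSizeInBytes(int sampleRate, double frameLenSec, int channels, int bitDepth) {
        return (int) Math.round(sampleRate * frameLenSec * channels * bitDepth / 8);
    }

    public static void main(String[] args) {
        // TRTC defaults: 48,000 Hz, 0.02 s frames, mono, 16-bit -> 1920 bytes
        System.out.println(frameSizeInBytes(48000, 0.02, 1, 16));
    }
}
```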

◆ onLocalProcessedAudioFrame()

void onLocalProcessedAudioFrame ( TRTCCloudDef.TRTCAudioFrame  frame)

Audio data captured by the local mic and pre-processed by the audio module

After you configure the callback of custom audio processing, the SDK will return via this callback the data captured and pre-processed (ANS, AEC, and AGC) in PCM format.

  • The audio data returned via this callback is in PCM format and has a fixed frame length (time) of 0.02s.
  • The formula to convert a frame length in seconds to one in bytes is sample rate * frame length in seconds * number of sound channels * audio bit depth.
  • Assume that the audio is recorded on a single channel with a sample rate of 48,000 Hz and an audio bit depth of 16 bits, which are the default settings of TRTC. The frame length in bytes will be 48000 * 0.02s * 1 * 16 bits = 15360 bits = 1920 bytes.

Instructions: You can write data to the TRTCAudioFrame.extraData field to transmit signaling. Because the data block of the audio frame header cannot be too large, we recommend you limit the size of the signaling data to a few bytes when using this API. If the extra data exceeds 100 bytes, it will not be sent. Other users in the room can receive the message through TRTCAudioFrame.extraData in the onRemoteUserAudioFrame callback of TRTCAudioFrameListener.

Parameters
frame: Audio frames in PCM format
Attention
  1. Please avoid time-consuming operations in this callback function. The SDK processes an audio frame every 20 ms, so if your operation takes more than 20 ms, it will cause audio exceptions.
  2. The audio data returned via this callback can be read and modified, but please keep the duration of your operation short.
  3. Audio data is returned via this callback after AEC, but the delay is longer than that with onCapturedRawAudioFrame.
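
A minimal guard for the 100-byte extraData limit described above might look like this (a sketch; the helper name and the UTF-8 encoding choice are ours, not part of the SDK):

```java
import java.nio.charset.StandardCharsets;

// Validates signaling payloads against the ~100-byte extraData limit noted above.
public class ExtraDataGuard {
    static final int MAX_EXTRA_DATA_BYTES = 100; // limit stated in the docs above

    static boolean fitsExtraData(String signaling) {
        return signaling.getBytes(StandardCharsets.UTF_8).length <= MAX_EXTRA_DATA_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(fitsExtraData("seq=42"));        // small payload: ok
        System.out.println(fitsExtraData("x".repeat(101))); // too large: would be dropped
    }
}
```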

◆ onMixedAllAudioFrame()

void onMixedAllAudioFrame ( TRTCCloudDef.TRTCAudioFrame  frame)

Data mixed from all the captured and to-be-played audio in the SDK

After you configure the callback of custom audio processing, the SDK will return via this callback the data (PCM format) mixed from all captured and to-be-played audio in the SDK, so that you can customize recording.

  • The audio data returned via this callback is in PCM format and has a fixed frame length (time) of 0.02s.
  • The formula to convert a frame length in seconds to one in bytes is sample rate * frame length in seconds * number of sound channels * audio bit depth.
  • Assume that the audio is recorded on a single channel with a sample rate of 48,000 Hz and an audio bit depth of 16 bits, which are the default settings of TRTC. The frame length in bytes will be 48000 * 0.02s * 1 * 16 bits = 15360 bits = 1920 bytes.
Parameters
frame: Audio frames in PCM format
Attention
  1. The data returned via this callback is mixed from all audio in the SDK, including local audio after pre-processing (ANS, AEC, and AGC), special effects application, and music mixing, as well as all remote audio, but it does not include the in-ear monitoring data.
  2. The audio data returned via this callback cannot be modified.

◆ onMixedPlayAudioFrame()

void onMixedPlayAudioFrame ( TRTCCloudDef.TRTCAudioFrame  frame)

Data mixed from each channel before being submitted to the system for playback

After you configure the callback of custom audio processing, the SDK will return to you via this callback the data (PCM format) mixed from each channel before it is submitted to the system for playback.

  • The audio data returned via this callback is in PCM format and has a fixed frame length (time) of 0.02s.
  • The formula to convert a frame length in seconds to one in bytes is sample rate * frame length in seconds * number of sound channels * audio bit depth.
  • Assume that the audio is recorded on a single channel with a sample rate of 48,000 Hz and an audio bit depth of 16 bits, which are the default settings of TRTC. The frame length in bytes will be 48000 * 0.02s * 1 * 16 bits = 15360 bits = 1920 bytes.
Parameters
frame: Audio frames in PCM format
Attention
  1. Please avoid time-consuming operations in this callback function. The SDK processes an audio frame every 20 ms, so if your operation takes more than 20 ms, it will cause audio exceptions.
  2. The audio data returned via this callback can be read and modified, but please keep the duration of your operation short.
  3. The audio data returned via this callback is the audio data mixed from each channel before it is played. It does not include the in-ear monitoring data.

◆ onRemoteUserAudioFrame()

void onRemoteUserAudioFrame ( TRTCCloudDef.TRTCAudioFrame  frame,
String  userId 
)

Audio data of each remote user before audio mixing

After you configure the callback of custom audio processing, the SDK will return via this callback the raw audio data (PCM format) of each remote user before mixing.

  • The audio data returned via this callback is in PCM format and has a fixed frame length (time) of 0.02s.
  • The formula to convert a frame length in seconds to one in bytes is sample rate * frame length in seconds * number of sound channels * audio bit depth.
  • Assume that the audio is recorded on a single channel with a sample rate of 48,000 Hz and an audio bit depth of 16 bits, which are the default settings of TRTC. The frame length in bytes will be 48000 * 0.02s * 1 * 16 bits = 15360 bits = 1920 bytes.
Parameters
frame: Audio frames in PCM format
userId: User ID
Attention
The audio data returned via this callback can be read but not modified.

◆ com::tencent::trtc::TRTCCloudListener::TRTCLogListener

class com::tencent::trtc::TRTCCloudListener::TRTCLogListener

Public Member Functions

abstract void onLog (String log, int level, String module)
 

Member Function Documentation

◆ onLog()

abstract void onLog ( String  log,
int  level,
String  module 
)
abstract

Printing of local log

If you want to capture the local log printing event, you can configure the log callback to have the SDK return to you via this callback all logs that are to be printed.

Parameters
log: Log content
level: Log level. For more information, please see TRTC_LOG_LEVEL.
module: Reserved field, which is not defined at the moment and has a fixed value of TXLiteAVSDK.

◆ com::tencent::trtc::TRTCCloudListener::TRTCSnapshotListener

interface com::tencent::trtc::TRTCCloudListener::TRTCSnapshotListener

Public Member Functions

void onSnapshotComplete (Bitmap bmp)
 

Member Function Documentation

◆ onSnapshotComplete()

void onSnapshotComplete ( Bitmap  bmp)

Finished taking a screenshot

Parameters
bmp: Screenshot result. If it is null, the screenshot failed to be taken.

◆ com::tencent::trtc::TRTCCloudListener

class com::tencent::trtc::TRTCCloudListener

Error and warning events

void onError (int errCode, String errMsg, Bundle extraInfo)
 
void onWarning (int warningCode, String warningMsg, Bundle extraInfo)
 

Room event callback

void onEnterRoom (long result)
 
void onExitRoom (int reason)
 
void onSwitchRole (final int errCode, final String errMsg)
 
void onSwitchRoom (final int errCode, final String errMsg)
 
void onConnectOtherRoom (final String userId, final int errCode, final String errMsg)
 
void onDisConnectOtherRoom (final int errCode, final String errMsg)
 

User event callback

void onRemoteUserEnterRoom (String userId)
 
void onRemoteUserLeaveRoom (String userId, int reason)
 
void onUserVideoAvailable (String userId, boolean available)
 
void onUserSubStreamAvailable (String userId, boolean available)
 
void onUserAudioAvailable (String userId, boolean available)
 
void onFirstVideoFrame (String userId, int streamType, int width, int height)
 
void onFirstAudioFrame (String userId)
 
void onSendFirstLocalVideoFrame (int streamType)
 
void onSendFirstLocalAudioFrame ()
 
void onRemoteVideoStatusUpdated (String userId, int streamType, int status, int reason, Bundle extraInfo)
 

Callback of statistics on network and technical metrics

void onNetworkQuality (TRTCCloudDef.TRTCQuality localQuality, ArrayList< TRTCCloudDef.TRTCQuality > remoteQuality)
 
void onStatistics (TRTCStatistics statistics)
 

Callback of connection to the cloud

void onConnectionLost ()
 
void onTryToReconnect ()
 
void onConnectionRecovery ()
 
void onSpeedTest (TRTCCloudDef.TRTCSpeedTestResult currentResult, int finishedCount, int totalCount)
 

Callback of hardware events

void onCameraDidReady ()
 
void onMicDidReady ()
 
void onAudioRouteChanged (int newRoute, int oldRoute)
 
void onUserVoiceVolume (ArrayList< TRTCCloudDef.TRTCVolumeInfo > userVolumes, int totalVolume)
 

Callback of the receipt of a custom message

void onRecvCustomCmdMsg (String userId, int cmdID, int seq, byte[] message)
 
void onMissCustomCmdMsg (String userId, int cmdID, int errCode, int missed)
 
void onRecvSEIMsg (String userId, byte[] data)
 

CDN event callback

void onStartPublishing (int err, String errMsg)
 
void onStopPublishing (int err, String errMsg)
 
void onStartPublishCDNStream (int err, String errMsg)
 
void onStopPublishCDNStream (int err, String errMsg)
 
void onSetMixTranscodingConfig (int err, String errMsg)
 

Screen sharing event callback

void onScreenCaptureStarted ()
 
void onScreenCapturePaused ()
 
void onScreenCaptureResumed ()
 
void onScreenCaptureStopped (int reason)
 

Callback of local recording and screenshot events

void onLocalRecordBegin (int errCode, String storagePath)
 
void onLocalRecording (long duration, String storagePath)
 
void onLocalRecordComplete (int errCode, String storagePath)
 

Disused callbacks (please use the new ones)

void onUserEnter (String userId)
 
void onUserExit (String userId, int reason)
 
void onAudioEffectFinished (int effectId, int code)
 

Member Function Documentation

◆ onAudioEffectFinished()

void onAudioEffectFinished ( int  effectId,
int  code 
)
inline

Audio effects ended (disused)

Deprecated:
This callback is not recommended in the new version. Please use ITXAudioEffectManager instead. Audio effects and background music can be started using the same API (startPlayMusic) now instead of separate ones.

◆ onAudioRouteChanged()

void onAudioRouteChanged ( int  newRoute,
int  oldRoute 
)
inline

The audio route changed (for mobile devices only)

Audio route is the route (speaker or receiver) through which audio is played.

  • When audio is played through the receiver, the volume is relatively low, and the sound can be heard only when the phone is put near the ear. This mode has a high level of privacy and is suitable for answering calls.
  • When audio is played through the speaker, the volume is relatively high, and there is no need to put the phone near the ear. This mode enables the "hands-free" feature.
Parameters
newRoute: The new audio route, i.e., the route (speaker or receiver) through which audio is played
oldRoute: The audio route used before the change

◆ onCameraDidReady()

void onCameraDidReady ( )
inline

The camera is ready

After you call startLocalPreview, the SDK will try to start the camera and return this callback if the camera is started. If it fails to start the camera, it’s probably because the application does not have access to the camera or the camera is in use. You can capture the onError callback to learn about the exception and let users know via UI messages.

◆ onConnectionLost()

void onConnectionLost ( )
inline

The SDK was disconnected from the cloud

The SDK returns this callback when it is disconnected from the cloud, which may be caused by network unavailability or change of network, for example, when the user walks into an elevator. After returning this callback, the SDK will attempt to reconnect to the cloud, and will return the onTryToReconnect callback. When it is reconnected, it will return the onConnectionRecovery callback. In other words, the SDK proceeds from one event to the next in the following order:

        [onConnectionLost] =====> [onTryToReconnect] =====> [onConnectionRecovery]
              /|\                                                     |
               |------------------------------------------------------|
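
The callback sequence in the diagram above can be tracked with a tiny state machine (a sketch; the class and state names are ours, not part of the SDK):

```java
// Mirrors the callback order: onConnectionLost -> onTryToReconnect -> onConnectionRecovery.
public class ConnectionTracker {
    enum State { CONNECTED, LOST, RECONNECTING }

    private State state = State.CONNECTED;

    void onConnectionLost()     { state = State.LOST; }
    void onTryToReconnect()     { if (state == State.LOST) state = State.RECONNECTING; }
    void onConnectionRecovery() { state = State.CONNECTED; }

    State state() { return state; }

    public static void main(String[] args) {
        ConnectionTracker t = new ConnectionTracker();
        t.onConnectionLost();     // e.g. the user walks into an elevator
        t.onTryToReconnect();     // the SDK retries
        t.onConnectionRecovery(); // back online
        System.out.println(t.state());
    }
}
```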

◆ onConnectionRecovery()

void onConnectionRecovery ( )
inline

The SDK is reconnected to the cloud

When the SDK is disconnected from the cloud, it returns the onConnectionLost callback. It then attempts to reconnect and returns the onTryToReconnect callback. After it is reconnected, it returns this callback (onConnectionRecovery).

◆ onConnectOtherRoom()

void onConnectOtherRoom ( final String  userId,
final int  errCode,
final String  errMsg 
)
inline

Result of requesting cross-room call

You can call the connectOtherRoom() API in TRTCCloud to establish a video call with the anchor of another room. This is the “anchor competition” feature. The caller will receive the onConnectOtherRoom() callback, which can be used to determine whether the cross-room call is successful. If it is successful, all users in either room will receive the onUserVideoAvailable() callback from the anchor of the other room.

Parameters
userId: The user ID of the anchor (in another room) to be called
errCode: Error code. ERR_NULL indicates that the cross-room connection is established successfully. For more information, please see Error Codes.
errMsg: Error message

◆ onDisConnectOtherRoom()

void onDisConnectOtherRoom ( final int  errCode,
final String  errMsg 
)
inline

Result of ending cross-room call

◆ onEnterRoom()

void onEnterRoom ( long  result)
inline

Whether room entry is successful

After calling the enterRoom() API in TRTCCloud to enter a room, you will receive the onEnterRoom(result) callback from TRTCCloudDelegate.

  • If room entry succeeded, result will be a positive number (result > 0), indicating the time in milliseconds (ms) the room entry takes.
  • If room entry failed, result will be a negative number (result < 0), indicating the error code for the failure. For more information on the error codes for room entry failure, see Error Codes.
Attention
  1. In TRTC versions below 6.6, the onEnterRoom(result) callback is returned only if room entry succeeds, and the onError() callback is returned if room entry fails.
  2. In TRTC 6.6 and above, the onEnterRoom(result) callback is returned regardless of whether room entry succeeds or fails, and the onError() callback is also returned if room entry fails.
Parameters
result: If result is greater than 0, it indicates the time (in ms) the room entry takes; if result is less than 0, it represents the error code for room entry.
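
Interpreting the result value per the rules above might look like this (a sketch; the helper is ours, not part of the SDK, and the error code shown is illustrative):

```java
// Positive result = elapsed milliseconds; negative result = error code (see Error Codes).
public class EnterRoomResult {
    static String describe(long result) {
        return result > 0
                ? "entered room in " + result + " ms"
                : "room entry failed, error code " + result;
    }

    public static void main(String[] args) {
        System.out.println(describe(350));
        System.out.println(describe(-3301));
    }
}
```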

◆ onError()

void onError ( int  errCode,
String  errMsg,
Bundle  extraInfo 
)
inline

Error event callback

Error event, which indicates that the SDK threw an irrecoverable error, such as room entry failure or failure to start a device. For more information, see Error Codes.

Parameters
errCode: Error code
errMsg: Error message
extraInfo: Extended field. Certain error codes may carry extra information for troubleshooting.

◆ onExitRoom()

void onExitRoom ( int  reason)
inline

Room exit

Calling the exitRoom() API in TRTCCloud will trigger the execution of room exit-related logic, such as releasing resources of audio/video devices and codecs. After all resources occupied by the SDK are released, the SDK will return the onExitRoom() callback.

If you need to call enterRoom() again or switch to another audio/video SDK, please wait until you receive the onExitRoom() callback. Otherwise, you may encounter problems such as the camera or mic being occupied.

Parameters
reason: Reason for room exit. 0: the user called exitRoom to exit the room; 1: the user was removed from the room by the server; 2: the room was dismissed.
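
Mapping the reason codes above to readable messages might look like this (a sketch; the helper and message strings are ours, not part of the SDK):

```java
// Maps the onExitRoom reason codes documented above to readable messages.
public class ExitReason {
    static String describe(int reason) {
        switch (reason) {
            case 0:  return "user called exitRoom";
            case 1:  return "removed from the room by the server";
            case 2:  return "room was dismissed";
            default: return "unknown reason (" + reason + ")";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(0));
        System.out.println(describe(2));
    }
}
```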

◆ onFirstAudioFrame()

void onFirstAudioFrame ( String  userId)
inline

The SDK started playing the first audio frame of a remote user

The SDK returns this callback when it plays the first audio frame of a remote user. The callback is not returned for the playing of the first audio frame of the local user.

Parameters
userId: User ID of the remote user

◆ onFirstVideoFrame()

void onFirstVideoFrame ( String  userId,
int  streamType,
int  width,
int  height 
)
inline

The SDK started rendering the first video frame of the local or a remote user

The SDK returns this event callback when it starts rendering your first video frame or that of a remote user. The userId in the callback can help you determine whether the frame is yours or a remote user’s.

  • If userId is empty, it indicates that the SDK has started rendering your first video frame. The precondition is that you have called startLocalPreview or startScreenCapture.
  • If userId is not empty, it indicates that the SDK has started rendering the first video frame of a remote user. The precondition is that you have called startRemoteView to subscribe to the user’s video.
Attention
  1. The callback of the first local video frame being rendered is triggered only after you call startLocalPreview or startScreenCapture.
  2. The callback of the first video frame of a remote user being rendered is triggered only after you call startRemoteView or startRemoteSubStreamView.
Parameters
userId: The user ID of the local or a remote user. If it is empty, it indicates that the first local video frame is available; if it is not empty, it indicates that the first video frame of a remote user is available.
streamType: Video stream type. The primary stream (Main) is usually used for camera images, and the substream (Sub) for screen sharing images.
width: Video width
height: Video height

◆ onLocalRecordBegin()

void onLocalRecordBegin ( int  errCode,
String  storagePath 
)
inline

Local recording started

When you call startLocalRecording to start local recording, the SDK returns this callback to notify you whether recording is started successfully.

Parameters
errCode: Error code. 0: recording started successfully; -1: failed to start recording; -2: incorrect file extension
storagePath: Storage path of recording file

◆ onLocalRecordComplete()

void onLocalRecordComplete ( int  errCode,
String  storagePath 
)
inline

Local recording stopped

When you call stopLocalRecording to stop local recording, the SDK returns this callback to notify you of the recording result.

Parameters
errCode: Error code. 0: recording succeeded; -1: recording failed; -2: recording was ended due to change of resolution or switch between the landscape and portrait mode.
storagePath: Storage path of recording file

◆ onLocalRecording()

void onLocalRecording ( long  duration,
String  storagePath 
)
inline

Local media is being recorded

The SDK returns this callback regularly after local recording is started successfully via the calling of startLocalRecording. You can capture this callback to stay up to date with the status of the recording task. You can set the callback interval when calling startLocalRecording.

Parameters
duration: Cumulative duration of recording, in milliseconds
storagePath: Storage path of recording file

◆ onMicDidReady()

void onMicDidReady ( )
inline

The mic is ready

After you call startLocalAudio, the SDK will try to start the mic and return this callback if the mic is started. If it fails to start the mic, it’s probably because the application does not have access to the mic or the mic is being used. You can capture the onError callback to learn about the exception and let users know via UI messages.

◆ onMissCustomCmdMsg()

void onMissCustomCmdMsg ( String  userId,
int  cmdID,
int  errCode,
int  missed 
)
inline

Loss of custom message

When you use sendCustomCmdMsg to send a custom UDP message, even if you enable reliable transfer (by setting reliable to true), there is still a chance of message loss. Reliable transfer only helps maintain a low probability of message loss, which meets the reliability requirements in most cases. If the sender sets reliable to true, the SDK will use this callback to notify the recipient of the number of custom messages lost during a specified time period (usually 5s) in the past.

Attention
The recipient receives this callback only if the sender sets reliable to true.
Parameters
userId: User ID
cmdID: Command ID
errCode: Error code
missed: Number of lost messages

◆ onNetworkQuality()

void onNetworkQuality ( TRTCCloudDef.TRTCQuality  localQuality,
ArrayList< TRTCCloudDef.TRTCQuality remoteQuality 
)
inline

Real-time network quality statistics

This callback is returned every 2 seconds and notifies you of the upstream and downstream network quality detected by the SDK. The SDK uses a built-in proprietary algorithm to assess the current latency, bandwidth, and stability of the network and returns a result. If the result is 1 (excellent), it means that the current network conditions are excellent; if it is 6 (down), it means that the current network conditions are too bad to support TRTC calls.

Attention
In the returned parameters localQuality and remoteQuality, if userId is empty, it indicates that the network quality statistics of the local user are returned. Otherwise, the network quality statistics of a remote user are returned.
Parameters
localQuality: Upstream network quality
remoteQuality: Downstream network quality

◆ onRecvCustomCmdMsg()

void onRecvCustomCmdMsg ( String  userId,
int  cmdID,
int  seq,
byte[]  message 
)
inline

Receipt of custom message

When a user in a room uses sendCustomCmdMsg to send a custom message, other users in the room can receive the message through the onRecvCustomCmdMsg callback.

Parameters
userId: User ID
cmdID: Command ID
seq: Message serial number
message: Message data
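
The message parameter is a raw byte array. If your application sends UTF-8 text via sendCustomCmdMsg, decoding on the receiving side might look like this (a sketch; using UTF-8 is an application-level convention, not something the SDK mandates):

```java
import java.nio.charset.StandardCharsets;

// Encode/decode helpers for text payloads carried in custom command messages.
public class CustomCmdCodec {
    static byte[] encode(String msg)   { return msg.getBytes(StandardCharsets.UTF_8); }
    static String decode(byte[] bytes) { return new String(bytes, StandardCharsets.UTF_8); }

    public static void main(String[] args) {
        byte[] wire = encode("hello room"); // what the sender would pass to sendCustomCmdMsg
        System.out.println(decode(wire));   // what the recipient recovers in the callback
    }
}
```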

◆ onRecvSEIMsg()

void onRecvSEIMsg ( String  userId,
byte[]  data 
)
inline

Receipt of SEI message

If a user in the room uses sendSEIMsg to send an SEI message via video frames, other users in the room can receive the message through the onRecvSEIMsg callback.

Parameters
userId: User ID
data: Message data

◆ onRemoteUserEnterRoom()

void onRemoteUserEnterRoom ( String  userId)
inline

A user entered the room

Due to performance concerns, this callback works differently in different scenarios (i.e., AppScene, which you can specify by setting the second parameter when calling enterRoom).

  • Live streaming scenarios (TRTCAppSceneLIVE or TRTCAppSceneVoiceChatRoom): in live streaming scenarios, a user is either in the role of an anchor or audience. The callback is returned only when an anchor enters the room.
  • Call scenarios (TRTCAppSceneVideoCall or TRTCAppSceneAudioCall): in call scenarios, the concept of roles does not apply (all users can be considered as anchors), and the callback is returned when any user enters the room.
Attention
  1. The onRemoteUserEnterRoom callback indicates that a user entered the room, but it does not necessarily mean that the user enabled audio or video.
  2. If you want to know whether a user enabled video, we recommend you use the onUserVideoAvailable() callback.
Parameters
userId: User ID of the remote user

◆ onRemoteUserLeaveRoom()

void onRemoteUserLeaveRoom ( String  userId,
int  reason 
)
inline

A user exited the room

As with onRemoteUserEnterRoom, this callback works differently in different scenarios (i.e., AppScene, which you can specify by setting the second parameter when calling enterRoom).

  • Live streaming scenarios (TRTCAppSceneLIVE or TRTCAppSceneVoiceChatRoom): the callback is triggered only when an anchor exits the room.
  • Call scenarios (TRTCAppSceneVideoCall or TRTCAppSceneAudioCall): in call scenarios, the concept of roles does not apply, and the callback is returned when any user exits the room.
Parameters
userId: User ID of the remote user
reason: Reason for room exit. 0: the user exited the room voluntarily; 1: the user exited the room due to timeout; 2: the user was removed from the room.
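
A common use of the enter/leave callbacks above is to maintain a roster of remote users, sketched below (the Roster class is ours, not part of the SDK):

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Maintains the set of remote users from onRemoteUserEnterRoom/onRemoteUserLeaveRoom.
public class Roster {
    private final Set<String> users = new LinkedHashSet<>();

    void onRemoteUserEnterRoom(String userId)             { users.add(userId); }
    void onRemoteUserLeaveRoom(String userId, int reason) { users.remove(userId); }

    int size() { return users.size(); }
    boolean contains(String userId) { return users.contains(userId); }

    public static void main(String[] args) {
        Roster r = new Roster();
        r.onRemoteUserEnterRoom("alice");
        r.onRemoteUserEnterRoom("bob");
        r.onRemoteUserLeaveRoom("alice", 0); // alice exited voluntarily
        System.out.println(r.size());
    }
}
```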

◆ onRemoteVideoStatusUpdated()

void onRemoteVideoStatusUpdated ( String  userId,
int  streamType,
int  status,
int  reason,
Bundle  extraInfo 
)
inline

Change of remote video status

You can use this callback to get the status (Playing, Loading, or Stopped) of the video of each remote user and display it on the UI.

Parameters
userId: User ID
streamType: Video stream type. The primary stream (Main) is usually used for camera images, and the substream (Sub) for screen sharing images.
status: Video status, which may be Playing, Loading, or Stopped
reason: Reason for the change of status
extraInfo: Extra information

◆ onScreenCapturePaused()

void onScreenCapturePaused ( )
inline

Screen sharing was paused

The SDK returns this callback when you call pauseScreenCapture to pause screen sharing.

◆ onScreenCaptureResumed()

void onScreenCaptureResumed ( )
inline

Screen sharing was resumed

The SDK returns this callback when you call resumeScreenCapture to resume screen sharing.

◆ onScreenCaptureStarted()

void onScreenCaptureStarted ( )
inline

Screen sharing started

The SDK returns this callback when you call startScreenCapture and other APIs to start screen sharing.

◆ onScreenCaptureStopped()

void onScreenCaptureStopped ( int  reason)
inline

Screen sharing stopped

The SDK returns this callback when you call stopScreenCapture to stop screen sharing.

Parameters
reason: Reason. 0: the user stopped screen sharing; 1: screen sharing stopped because the shared window was closed.

◆ onSendFirstLocalAudioFrame()

void onSendFirstLocalAudioFrame ( )
inline

The first local audio frame was published

After you enter a room and call startLocalAudio to enable audio capturing (whichever happens first), the SDK will start audio encoding and publish the local audio data via its network module to the cloud. The SDK returns the onSendFirstLocalAudioFrame callback after sending the first local audio frame.

◆ onSendFirstLocalVideoFrame()

void onSendFirstLocalVideoFrame ( int  streamType)
inline

The first local video frame was published

After you enter a room and call startLocalPreview or startScreenCapture to enable local video capturing (whichever happens first), the SDK will start video encoding and publish the local video data via its network module to the cloud. It returns the onSendFirstLocalVideoFrame callback after publishing the first local video frame.

Parameters
streamType: Video stream type. The primary stream (Main) is usually used for camera images, and the substream (Sub) for screen sharing images.

◆ onSetMixTranscodingConfig()

void onSetMixTranscodingConfig ( int  err,
String  errMsg 
)
inline

Set the layout and transcoding parameters for On-Cloud MixTranscoding

When you call setMixTranscodingConfig to modify the layout and transcoding parameters for On-Cloud MixTranscoding, the SDK will sync the command to the CVM immediately. The SDK will then receive the execution result from the CVM and return the result to you via this callback.

Parameters
err: 0: successful; other values: failed
errMsg: Error message

◆ onSpeedTest()

void onSpeedTest ( TRTCCloudDef.TRTCSpeedTestResult  currentResult,
int  finishedCount,
int  totalCount 
)
inline

Result of server speed testing

After you call startSpeedTest to start server speed testing, the SDK will return the testing results multiple times. The SDK tests the speed of multiple servers and returns the result for each IP via this callback.

Parameters
currentResult: Result of the current test
finishedCount: Number of servers that have been tested
totalCount: Total number of servers to test
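Because this callback fires once per tested server, the finishedCount/totalCount pair can drive a progress indicator directly. A minimal sketch (SpeedTestProgress is a hypothetical helper, not an SDK type, and TRTCSpeedTestResult handling is left to the caller):

```java
// Accumulates the progress reported by repeated onSpeedTest callbacks.
// Only the finishedCount/totalCount parameters are used here; the
// class name is illustrative, not an SDK type.
public class SpeedTestProgress {
    private int finished;
    private int total;

    // Call from onSpeedTest with its finishedCount/totalCount parameters.
    public void update(int finishedCount, int totalCount) {
        finished = finishedCount;
        total = totalCount;
    }

    public boolean isDone() {
        return total > 0 && finished >= total;
    }

    // Percentage completed, 0-100.
    public int percent() {
        return total == 0 ? 0 : finished * 100 / total;
    }

    public static void main(String[] args) {
        SpeedTestProgress p = new SpeedTestProgress();
        p.update(2, 5);
        System.out.println(p.percent() + "% of servers tested");
    }
}
```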

◆ onStartPublishCDNStream()

void onStartPublishCDNStream ( int  err,
String  errMsg 
)
inline

Started publishing to non-Tencent Cloud’s live streaming CDN

When you call startPublishCDNStream to start publishing streams to a non-Tencent Cloud live streaming CDN, the SDK will sync the command to the CVM immediately. The SDK will then receive the execution result from the CVM and return the result to you via this callback.

Attention
If you receive a callback that the command is executed successfully, it only means that your command was sent to Tencent Cloud’s backend server. If the CDN vendor does not accept your streams, the publishing will still fail.
Parameters
err: 0: successful; other values: failed
errMsg: Error message

◆ onStartPublishing()

void onStartPublishing ( int  err,
String  errMsg 
)
inline

Started publishing to Tencent Cloud CSS CDN

When you call startPublishing to publish streams to Tencent Cloud CSS CDN, the SDK will sync the command to the CVM immediately. The SDK will then receive the execution result from the CVM and return the result to you via this callback.

Parameters
err: 0: successful; other values: failed
errMsg: Error message

◆ onStatistics()

void onStatistics ( TRTCStatistics  statistics)
inline

Real-time statistics on technical metrics

This callback is returned every 2 seconds and notifies you of the statistics on technical metrics related to video, audio, and network. The metrics are listed in TRTCStatistics:

  • Video statistics: video resolution (resolution), frame rate (FPS), bitrate (bitrate), etc.
  • Audio statistics: audio sample rate (samplerate), number of audio channels (channel), bitrate (bitrate), etc.
  • Network statistics: the round trip time (rtt) between the SDK and the cloud (SDK -> Cloud -> SDK), packet loss rate (loss), upstream traffic (sentBytes), downstream traffic (receivedBytes), etc.
Attention
If you want to learn about only the current network quality and do not want to spend much time analyzing the statistics returned by this callback, we recommend you use onNetworkQuality.
Parameters
statistics: Statistics, including local statistics and the statistics of remote users. For details, please see TRTCStatistics.
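Since the callback arrives every 2 seconds, a small moving-average helper can smooth noisy rtt readings before displaying them. A sketch assuming the caller extracts the rtt value from TRTCStatistics (the RttSmoother class is illustrative, not an SDK type):

```java
import java.util.ArrayDeque;

// Keeps a moving average over the last few rtt values reported by
// onStatistics (delivered every 2 seconds). Reading the rtt field from
// TRTCStatistics is left to the caller; only the smoothing is shown.
public class RttSmoother {
    private final ArrayDeque<Integer> window = new ArrayDeque<>();
    private final int capacity;
    private int sum;

    public RttSmoother(int capacity) {
        this.capacity = capacity;
    }

    // Call once per onStatistics callback with the reported rtt (ms).
    public void add(int rttMs) {
        window.addLast(rttMs);
        sum += rttMs;
        if (window.size() > capacity) {
            sum -= window.removeFirst(); // evict the oldest sample
        }
    }

    public int average() {
        return window.isEmpty() ? 0 : sum / window.size();
    }

    public static void main(String[] args) {
        RttSmoother s = new RttSmoother(3);
        s.add(40); s.add(60); s.add(50);
        System.out.println("avg rtt: " + s.average() + " ms");
    }
}
```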

◆ onStopPublishCDNStream()

void onStopPublishCDNStream ( int  err,
String  errMsg 
)
inline

Stopped publishing to non-Tencent Cloud’s live streaming CDN

When you call stopPublishCDNStream to stop publishing to a non-Tencent Cloud live streaming CDN, the SDK will sync the command to the CVM immediately. The SDK will then receive the execution result from the CVM and return the result to you via this callback.

Parameters
err: 0: successful; other values: failed
errMsg: Error message

◆ onStopPublishing()

void onStopPublishing ( int  err,
String  errMsg 
)
inline

Stopped publishing to Tencent Cloud CSS CDN

When you call stopPublishing to stop publishing streams to Tencent Cloud CSS CDN, the SDK will sync the command to the CVM immediately. The SDK will then receive the execution result from the CVM and return the result to you via this callback.

Parameters
err: 0: successful; other values: failed
errMsg: Error message

◆ onSwitchRole()

void onSwitchRole ( final int  errCode,
final String  errMsg 
)
inline

Role switching

You can call the switchRole() API in TRTCCloud to switch between the anchor and audience roles. This is accompanied by a line switching process. After the switching, the SDK will return the onSwitchRole() event callback.

Parameters
errCode: Error code. ERR_NULL indicates a successful switch. For more information, please see Error Codes.
errMsg: Error message

◆ onSwitchRoom()

void onSwitchRoom ( final int  errCode,
final String  errMsg 
)
inline

Result of room switching

You can call the switchRoom() API in TRTCCloud to switch from one room to another. After the switching, the SDK will return the onSwitchRoom() event callback.

Parameters
errCode: Error code. ERR_NULL indicates a successful switch. For more information, please see Error Codes.
errMsg: Error message

◆ onTryToReconnect()

void onTryToReconnect ( )
inline

The SDK is reconnecting to the cloud

When the SDK is disconnected from the cloud, it returns the onConnectionLost callback. It then attempts to reconnect and returns this callback (onTryToReconnect). After it is reconnected, it returns the onConnectionRecovery callback.
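The three connectivity callbacks form a simple lifecycle: lost, reconnecting, recovered. A tiny state machine mirroring that flow (the enum and class names are illustrative, and the recovery callback is assumed here to be onConnectionRecovery):

```java
// Tracks the connection lifecycle described above:
// onConnectionLost -> onTryToReconnect (possibly repeated) -> recovered.
// ConnectionTracker and its State enum are illustrative, not SDK types.
public class ConnectionTracker {
    public enum State { CONNECTED, LOST, RECONNECTING }

    private State state = State.CONNECTED;

    // Forward the corresponding TRTCCloudListener callbacks here.
    public void onConnectionLost()     { state = State.LOST; }
    public void onTryToReconnect()     { state = State.RECONNECTING; }
    public void onConnectionRecovery() { state = State.CONNECTED; }

    public State state() { return state; }

    public static void main(String[] args) {
        ConnectionTracker t = new ConnectionTracker();
        t.onConnectionLost();
        t.onTryToReconnect();
        t.onConnectionRecovery();
        System.out.println("final state: " + t.state());
    }
}
```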

◆ onUserAudioAvailable()

void onUserAudioAvailable ( String  userId,
boolean  available 
)
inline

A remote user published/unpublished audio

If you receive the onUserAudioAvailable(userId, true) callback, it indicates that the user published audio.

  • In auto-subscription mode, the SDK will play the user’s audio automatically.
  • In manual subscription mode, you can call muteRemoteAudio(userid, false) to play the user’s audio.
Attention
The auto-subscription mode is used by default. You can switch to the manual subscription mode by calling setDefaultStreamRecvMode, but it must be called before room entry for the switch to take effect.
Parameters
userId: User ID of the remote user
available: Whether the user published (or unpublished) audio. true: published; false: unpublished
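In manual subscription mode, the app must remember which remote users currently have audio available before deciding whether to call muteRemoteAudio(userId, false). A minimal bookkeeping sketch (RemoteAudioRoster is a hypothetical helper, not an SDK class; the actual SDK calls are left to the caller):

```java
import java.util.HashSet;
import java.util.Set;

// Tracks which remote users currently have published audio, as reported
// by onUserAudioAvailable. The class name is illustrative, not an SDK type.
public class RemoteAudioRoster {
    private final Set<String> audibleUsers = new HashSet<>();

    // Call from onUserAudioAvailable(userId, available).
    public void onUserAudioAvailable(String userId, boolean available) {
        if (available) {
            // In manual subscription mode, this user is now a candidate
            // for muteRemoteAudio(userId, false).
            audibleUsers.add(userId);
        } else {
            audibleUsers.remove(userId);
        }
    }

    public boolean hasAudio(String userId) {
        return audibleUsers.contains(userId);
    }

    public int count() { return audibleUsers.size(); }

    public static void main(String[] args) {
        RemoteAudioRoster roster = new RemoteAudioRoster();
        roster.onUserAudioAvailable("alice", true);
        roster.onUserAudioAvailable("bob", true);
        roster.onUserAudioAvailable("alice", false);
        System.out.println("users with audio: " + roster.count());
    }
}
```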

◆ onUserEnter()

void onUserEnter ( String  userId)
inline

An anchor entered the room (deprecated)

Deprecated:
This callback is not recommended in the new version. Please use onRemoteUserEnterRoom instead.

◆ onUserExit()

void onUserExit ( String  userId,
int  reason 
)
inline

An anchor left the room (deprecated)

Deprecated:
This callback is not recommended in the new version. Please use onRemoteUserLeaveRoom instead.

◆ onUserSubStreamAvailable()

void onUserSubStreamAvailable ( String  userId,
boolean  available 
)
inline

A remote user published/unpublished substream video

The substream is usually used for screen sharing images. If you receive the onUserSubStreamAvailable(userId, true) callback, it indicates that the user has available substream video. You can then call startRemoteSubStreamView to subscribe to the remote user’s video. If the subscription is successful, you will receive the onFirstVideoFrame(userid) callback, which indicates that the first frame of the user is rendered.

Attention
The API used to display substream images is startRemoteSubStreamView, not startRemoteView.
Parameters
userId: User ID of the remote user
available: Whether the user published (or unpublished) substream video. true: published; false: unpublished

◆ onUserVideoAvailable()

void onUserVideoAvailable ( String  userId,
boolean  available 
)
inline

A remote user published/unpublished primary stream video

The primary stream is usually used for camera images. If you receive the onUserVideoAvailable(userId, true) callback, it indicates that the user has available primary stream video. You can then call startRemoteView to subscribe to the remote user’s video. If the subscription is successful, you will receive the onFirstVideoFrame(userid) callback, which indicates that the first video frame of the user is rendered.

If you receive the onUserVideoAvailable(userId, false) callback, it indicates that the video of the remote user is disabled, which may be because the user called muteLocalVideo or stopLocalPreview.

Parameters
userId: User ID of the remote user
available: Whether the user published (or unpublished) primary stream video. true: published; false: unpublished

◆ onUserVoiceVolume()

void onUserVoiceVolume ( ArrayList< TRTCCloudDef.TRTCVolumeInfo >  userVolumes,
int  totalVolume 
)
inline

Volume

The SDK can assess the volume of each channel and return this callback on a regular basis. You can display, for example, a waveform or volume bar on the UI based on the statistics returned. You need to first call enableAudioVolumeEvaluation to enable the feature and set the interval for the callback. Note that the SDK returns this callback at the specified interval regardless of whether someone is speaking in the room. When no one is speaking in the room, userVolumes is empty, and totalVolume is 0.

Attention
userVolumes is an array. If userId is empty, the elements in the array represent the volume of the local user’s audio. Otherwise, they represent the volume of a remote user’s audio.
Parameters
userVolumes: An array that represents the volume of all users who are speaking in the room. Value range: 0-100
totalVolume: The total volume of all remote users. Value range: 0-100
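Since the reported volumes are in the 0-100 range, driving a discrete volume bar on the UI is a simple mapping. A sketch (the 5-level scale and the VolumeMeter class are illustrative choices, not SDK behavior):

```java
// Maps the 0-100 volume values from onUserVoiceVolume to a discrete
// number of bars for a simple UI meter. The class name and the 5-level
// scale are illustrative, not SDK types.
public class VolumeMeter {
    public static final int MAX_BARS = 5;

    // volume is expected in [0, 100], as documented for onUserVoiceVolume.
    public static int bars(int volume) {
        if (volume <= 0) return 0;
        if (volume >= 100) return MAX_BARS;
        // Round up so quiet speech still lights at least one bar.
        return (volume * MAX_BARS + 99) / 100;
    }

    public static void main(String[] args) {
        System.out.println("volume 37 -> " + bars(37) + " bars");
    }
}
```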

◆ onWarning()

void onWarning ( int  warningCode,
String  warningMsg,
Bundle  extraInfo 
)
inline

Warning event callback

Warning event, which indicates that the SDK threw an error requiring attention, such as video lag or high CPU usage. For more information, see Error Codes.

Parameters
warningCode: Warning code
warningMsg: Warning message
extraInfo: Extended field. Certain warning codes may carry extra information for troubleshooting.