LiteAVSDK
Tencent Cloud TRTC SDK is a high-availability component serving tens of thousands of enterprise customers, committed to helping you minimize your research and development costs.
TRTCCloud

Detailed Description

Tencent Cloud TRTC Core Function Interface.


Data Structure Documentation

◆ com::tencent::trtc::TRTCCloud::TRTCViewMargin

class com::tencent::trtc::TRTCCloud::TRTCViewMargin

Public Member Functions

 TRTCViewMargin (float leftMargin, float rightMargin, float topMargin, float bottomMargin)
 

Data Fields

float leftMargin = 0.0f
 
float topMargin = 0.0f
 
float rightMargin = 0.0f
 
float bottomMargin = 0.0f
 

Constructor & Destructor Documentation

◆ TRTCViewMargin()

TRTCViewMargin ( float  leftMargin,
float  rightMargin,
float  topMargin,
float  bottomMargin 
)
inline

Field Documentation

◆ bottomMargin

float bottomMargin = 0.0f

◆ leftMargin

float leftMargin = 0.0f

◆ rightMargin

float rightMargin = 0.0f

◆ topMargin

float topMargin = 0.0f

◆ com::tencent::trtc::TRTCCloud::BGMNotify

interface com::tencent::trtc::TRTCCloud::BGMNotify

Public Member Functions

void onBGMStart (int errCode)
 
void onBGMProgress (long progress, long duration)
 
void onBGMComplete (int err)
 

Member Function Documentation

◆ onBGMComplete()

void onBGMComplete ( int  err)

◆ onBGMProgress()

void onBGMProgress ( long  progress,
long  duration 
)

◆ onBGMStart()

void onBGMStart ( int  errCode)

◆ com::tencent::trtc::TRTCCloud

class com::tencent::trtc::TRTCCloud

Create Instance And Event Callback

static TRTCCloud sharedInstance (Context context)
 
static void destroySharedInstance ()
 
abstract void setListener (TRTCCloudListener listener)
 
abstract void setListenerHandler (Handler listenerHandler)
 

Room APIs

abstract void enterRoom (TRTCCloudDef.TRTCParams param, int scene)
 
abstract void exitRoom ()
 
abstract void switchRole (int role)
 
abstract void switchRoom (final TRTCCloudDef.TRTCSwitchRoomConfig config)
 
abstract void ConnectOtherRoom (String param)
 
abstract void DisconnectOtherRoom ()
 
abstract void setDefaultStreamRecvMode (boolean autoRecvAudio, boolean autoRecvVideo)
 
abstract TRTCCloud createSubCloud ()
 
abstract void destroySubCloud (final TRTCCloud subCloud)
 
abstract void startPublishing (final String streamId, final int streamType)
 
abstract void stopPublishing ()
 
abstract void startPublishCDNStream (TRTCCloudDef.TRTCPublishCDNParam param)
 
abstract void stopPublishCDNStream ()
 
abstract void setMixTranscodingConfig (TRTCCloudDef.TRTCTranscodingConfig config)
 

Video APIs

abstract void startLocalPreview (boolean frontCamera, TXCloudVideoView view)
 
abstract void updateLocalView (TXCloudVideoView view)
 
abstract void stopLocalPreview ()
 
abstract void muteLocalVideo (int streamType, boolean mute)
 
abstract void setVideoMuteImage (Bitmap image, int fps)
 
abstract void startRemoteView (String userId, int streamType, TXCloudVideoView view)
 
abstract void updateRemoteView (String userId, int streamType, TXCloudVideoView view)
 
abstract void stopRemoteView (String userId, int streamType)
 
abstract void stopAllRemoteView ()
 
abstract void muteRemoteVideoStream (String userId, int streamType, boolean mute)
 
abstract void muteAllRemoteVideoStreams (boolean mute)
 
abstract void setVideoEncoderParam (TRTCCloudDef.TRTCVideoEncParam param)
 
abstract void setNetworkQosParam (TRTCCloudDef.TRTCNetworkQosParam param)
 
abstract void setLocalRenderParams (TRTCCloudDef.TRTCRenderParams renderParams)
 
abstract void setRemoteRenderParams (String userId, int streamType, TRTCCloudDef.TRTCRenderParams renderParams)
 
abstract void setVideoEncoderRotation (int rotation)
 
abstract void setVideoEncoderMirror (boolean mirror)
 
abstract void setGSensorMode (int mode)
 
abstract int enableEncSmallVideoStream (boolean enable, TRTCCloudDef.TRTCVideoEncParam smallVideoEncParam)
 
abstract int setRemoteVideoStreamType (String userId, int streamType)
 
abstract void snapshotVideo (String userId, int streamType, TRTCCloudListener.TRTCSnapshotListener listener)
 

Audio APIs

abstract void startLocalAudio (int quality)
 
abstract void stopLocalAudio ()
 
abstract void muteLocalAudio (boolean mute)
 
abstract void muteRemoteAudio (String userId, boolean mute)
 
abstract void muteAllRemoteAudio (boolean mute)
 
abstract void setAudioRoute (int route)
 
abstract void setRemoteAudioVolume (String userId, int volume)
 
abstract void setAudioCaptureVolume (int volume)
 
abstract int getAudioCaptureVolume ()
 
abstract void setAudioPlayoutVolume (int volume)
 
abstract int getAudioPlayoutVolume ()
 
abstract void enableAudioVolumeEvaluation (int interval)
 
abstract int startAudioRecording (TRTCCloudDef.TRTCAudioRecordingParams param)
 
abstract void stopAudioRecording ()
 
abstract void startLocalRecording (TRTCCloudDef.TRTCLocalRecordingParams params)
 
abstract void stopLocalRecording ()
 
abstract int checkAudioCapabilitySupport (int capabilityType)
 

Device management APIs

abstract TXDeviceManager getDeviceManager ()
 

Beauty filter and watermark APIs

abstract TXBeautyManager getBeautyManager ()
 
abstract void setWatermark (Bitmap image, int streamType, float x, float y, float width)
 

Background music and sound effect APIs

abstract TXAudioEffectManager getAudioEffectManager ()
 

Screen sharing APIs

abstract void startScreenCapture (int streamType, TRTCCloudDef.TRTCVideoEncParam encParams, TRTCCloudDef.TRTCScreenShareParams shareParams)
 
abstract void stopScreenCapture ()
 
abstract void pauseScreenCapture ()
 
abstract void resumeScreenCapture ()
 
abstract void setSubStreamEncoderParam (TRTCCloudDef.TRTCVideoEncParam param)
 

Custom capturing and rendering APIs

abstract void enableCustomVideoCapture (int streamType, boolean enable)
 
abstract void sendCustomVideoData (int streamType, TRTCCloudDef.TRTCVideoFrame frame)
 
abstract void enableCustomAudioCapture (boolean enable)
 
abstract void sendCustomAudioData (TRTCCloudDef.TRTCAudioFrame frame)
 
abstract void enableMixExternalAudioFrame (boolean enablePublish, boolean enablePlayout)
 
abstract int mixExternalAudioFrame (TRTCCloudDef.TRTCAudioFrame frame)
 
abstract void setMixExternalAudioVolume (int publishVolume, int playoutVolume)
 
abstract long generateCustomPTS ()
 
abstract int setLocalVideoProcessListener (int pixelFormat, int bufferType, TRTCCloudListener.TRTCVideoFrameListener listener)
 
abstract int setLocalVideoRenderListener (int pixelFormat, int bufferType, TRTCCloudListener.TRTCVideoRenderListener listener)
 
abstract int setRemoteVideoRenderListener (String userId, int pixelFormat, int bufferType, TRTCCloudListener.TRTCVideoRenderListener listener)
 
abstract void setAudioFrameListener (TRTCCloudListener.TRTCAudioFrameListener listener)
 
abstract int setCapturedRawAudioFrameCallbackFormat (TRTCCloudDef.TRTCAudioFrameCallbackFormat format)
 
abstract int setLocalProcessedAudioFrameCallbackFormat (TRTCCloudDef.TRTCAudioFrameCallbackFormat format)
 
abstract int setMixedPlayAudioFrameCallbackFormat (TRTCCloudDef.TRTCAudioFrameCallbackFormat format)
 
abstract void enableCustomAudioRendering (boolean enable)
 
abstract void getCustomAudioRenderingFrame (final TRTCCloudDef.TRTCAudioFrame audioFrame)
 

Custom message sending APIs

abstract boolean sendCustomCmdMsg (int cmdID, byte[] data, boolean reliable, boolean ordered)
 
abstract boolean sendSEIMsg (byte[] data, int repeatCount)
 

Network test APIs

abstract void startSpeedTest (int sdkAppId, String userId, String userSig)
 
abstract void stopSpeedTest ()
 

Debugging APIs

static String getSDKVersion ()
 
static void setLogLevel (int level)
 
static void setConsoleEnabled (boolean enabled)
 
static void setLogCompressEnabled (boolean enabled)
 
static void setLogDirPath (String path)
 
static void setLogListener (final TRTCCloudListener.TRTCLogListener logListener)
 
static native void setNetEnv (int env)
 
abstract void showDebugView (int showType)
 
abstract void setDebugViewMargin (String userId, TRTCViewMargin margin)
 
abstract void callExperimentalAPI (String jsonStr)
 

Disused APIs (the corresponding new APIs are recommended)

abstract void setMicVolumeOnMixing (int volume)
 
abstract void setBeautyStyle (int beautyStyle, int beautyLevel, int whitenessLevel, int ruddinessLevel)
 
abstract void setEyeScaleLevel (int eyeScaleLevel)
 
abstract void setFaceSlimLevel (int faceScaleLevel)
 
abstract void setFaceVLevel (int faceVLevel)
 
abstract void setChinLevel (int chinLevel)
 
abstract void setFaceShortLevel (int faceShortlevel)
 
abstract void setNoseSlimLevel (int noseSlimLevel)
 
abstract void selectMotionTmpl (String motionPath)
 
abstract void setMotionMute (boolean motionMute)
 
abstract void setFilter (Bitmap image)
 
abstract void setFilterConcentration (float concentration)
 
abstract boolean setGreenScreenFile (String file)
 
abstract void playBGM (String path, BGMNotify notify)
 
abstract void stopBGM ()
 
abstract void pauseBGM ()
 
abstract void resumeBGM ()
 
abstract int getBGMDuration (String path)
 
abstract int setBGMPosition (int pos)
 
abstract void setBGMVolume (int volume)
 
abstract void setBGMPlayoutVolume (int volume)
 
abstract void setBGMPublishVolume (int volume)
 
abstract void setReverbType (int reverbType)
 
abstract boolean setVoiceChangerType (int voiceChangerType)
 
abstract void playAudioEffect (TRTCCloudDef.TRTCAudioEffectParam effect)
 
abstract void setAudioEffectVolume (int effectId, int volume)
 
abstract void stopAudioEffect (int effectId)
 
abstract void stopAllAudioEffects ()
 
abstract void setAllAudioEffectsVolume (int volume)
 
abstract void pauseAudioEffect (int effectId)
 
abstract void resumeAudioEffect (int effectId)
 
abstract void enableAudioEarMonitoring (boolean enable)
 
abstract void startRemoteView (String userId, TXCloudVideoView view)
 
abstract void stopRemoteView (String userId)
 
abstract void setRemoteViewFillMode (String userId, int mode)
 
abstract void setRemoteViewRotation (String userId, int rotation)
 
abstract void setLocalViewFillMode (int mode)
 
abstract void setLocalViewRotation (int rotation)
 
abstract void setLocalViewMirror (int mirrorType)
 
abstract void startRemoteSubStreamView (String userId, TXCloudVideoView view)
 
abstract void stopRemoteSubStreamView (String userId)
 
abstract void setRemoteSubStreamViewFillMode (String userId, int mode)
 
abstract void setRemoteSubStreamViewRotation (final String userId, final int rotation)
 
abstract int setPriorRemoteVideoStreamType (int streamType)
 
abstract void setAudioQuality (int quality)
 
abstract void startLocalAudio ()
 
abstract void switchCamera ()
 
abstract boolean isCameraZoomSupported ()
 
abstract void setZoom (int distance)
 
abstract boolean isCameraTorchSupported ()
 
abstract boolean enableTorch (boolean enable)
 
abstract boolean isCameraFocusPositionInPreviewSupported ()
 
abstract void setFocusPosition (int x, int y)
 
abstract boolean isCameraAutoFocusFaceModeSupported ()
 
abstract void setSystemVolumeType (int type)
 
abstract void enableCustomVideoCapture (boolean enable)
 
abstract void sendCustomVideoData (TRTCCloudDef.TRTCVideoFrame frame)
 
abstract void startScreenCapture (TRTCCloudDef.TRTCVideoEncParam encParams, TRTCCloudDef.TRTCScreenShareParams shareParams)
 
abstract void muteLocalVideo (boolean mute)
 
abstract void muteRemoteVideoStream (String userId, boolean mute)
 

Member Function Documentation

◆ callExperimentalAPI()

abstract void callExperimentalAPI ( String  jsonStr)
abstract

Call experimental APIs

◆ checkAudioCapabilitySupport()

abstract int checkAudioCapabilitySupport ( int  capabilityType)
abstract

Query whether a certain audio capability is supported (only for Android)

Parameters
capabilityType: Audio capability type.
Returns
0: supported; 1: not supported.

◆ ConnectOtherRoom()

abstract void ConnectOtherRoom ( String  param)
abstract

Request cross-room call

By default, only users in the same room can make audio/video calls with each other, and the audio/video streams in different rooms are isolated from each other. However, you can publish the audio/video streams of an anchor in another room to the current room by calling this API. At the same time, this API will also publish the local audio/video streams to the target anchor's room. In other words, you can use this API to share the audio/video streams of two anchors in two different rooms, so that the audience in each room can watch the streams of both anchors. This feature can be used to implement anchor competition. The result of requesting a cross-room call will be returned through the onConnectOtherRoom() callback in TRTCCloudListener. For example, after anchor A in room "101" uses ConnectOtherRoom() to successfully call anchor B in room "102":

  • All users in room "101" will receive the onRemoteUserEnterRoom(B) and onUserVideoAvailable(B,YES) event callbacks of anchor B; that is, all users in room "101" can subscribe to the audio/video streams of anchor B.
  • All users in room "102" will receive the onRemoteUserEnterRoom(A) and onUserVideoAvailable(A,YES) event callbacks of anchor A; that is, all users in room "102" can subscribe to the audio/video streams of anchor A.

                                      Room 101                          Room 102
                                ---------------------               ---------------------
     Before cross-room call:   | Anchor:     A       |             | Anchor:     B       |
                               | Users :   U, V, W   |             | Users:   X, Y, Z    |
                                ---------------------               ---------------------
                                      Room 101                           Room 102
                                ---------------------               ---------------------
     After cross-room call:    | Anchors: A and B    |             | Anchors: B and A    |
                               | Users  : U, V, W    |             | Users  : X, Y, Z    |
                                ---------------------               ---------------------
    

    For compatibility with subsequent extended fields for cross-room calls, parameters in JSON format are used currently.

    Case 1: Numeric room ID. If anchor A in room "101" wants to co-anchor with anchor B in room "102", then anchor A needs to pass in {"roomId": 102, "userId": "userB"} when calling this API. Below is the sample code:

      JSONObject jsonObj = new JSONObject();
      jsonObj.put("roomId", 102);
      jsonObj.put("userId", "userB");
      trtc.ConnectOtherRoom(jsonObj.toString());
    

Case 2: String room ID. If you use a string room ID, please be sure to replace roomId in the JSON with strRoomId, e.g., {"strRoomId": "102", "userId": "userB"}. Below is the sample code:

  JSONObject jsonObj = new JSONObject();
  jsonObj.put("strRoomId", "102");
  jsonObj.put("userId", "userB");
  trtc.ConnectOtherRoom(jsonObj.toString());
Parameters
param: You need to pass in a string parameter in JSON format: roomId represents the room ID in numeric format, strRoomId represents the room ID in string format, and userId represents the user ID of the target anchor.

◆ createSubCloud()

abstract TRTCCloud createSubCloud ( )
abstract

Create room subinstance (for concurrent multi-room listen/watch)

TRTCCloud was originally designed to work in singleton mode, which limited the ability to watch streams concurrently in multiple rooms. By calling this API, you can create multiple TRTCCloud instances, so that you can enter multiple different rooms at the same time to listen/watch audio/video streams.

However, note that because there is still only one camera and one mic available, you can exist as an "anchor" in only one TRTCCloud instance at any time; that is, you can only publish your audio/video streams in one TRTCCloud instance at any time. This feature is mainly used in the "super small class" use case in the online education scenario to break the limit that "only up to 50 users can publish their audio/video streams simultaneously in one TRTC room". Below is the sample code:

    TRTCCloud mainCloud = TRTCCloud.sharedInstance(mContext);
    mainCloud.enterRoom(params1, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
    //...
    //Switch the role from "anchor" to "audience" in your own room
    mainCloud.switchRole(TRTCCloudDef.TRTCRoleAudience);
    mainCloud.muteLocalVideo(true);
    mainCloud.muteLocalAudio(true);
    //...
    //Use subcloud to enter another room and switch the role from "audience" to "anchor"
    TRTCCloud subCloud = mainCloud.createSubCloud();
    subCloud.enterRoom(params2, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
    subCloud.switchRole(TRTCCloudDef.TRTCRoleAnchor);
    subCloud.muteLocalVideo(false);
    subCloud.muteLocalAudio(false);
    //...
    //Exit from new room and release it.
    subCloud.exitRoom();
    mainCloud.destroySubCloud(subCloud);
Attention
  • The same user can enter multiple rooms with different roomId values by using the same userId.
  • Two devices cannot use the same userId to enter the same room with a specified roomId.
  • The same user can push a stream in only one TRTCCloud instance at any time. If streams are pushed simultaneously in different rooms, the stream states in the cloud will become inconsistent, leading to various bugs.
  • A TRTCCloud subinstance created by the createSubCloud API cannot call local audio/video APIs, except switchRole, muteLocalVideo, and muteLocalAudio. To use APIs such as the beauty filter, please use the original TRTCCloud instance object.
Returns
TRTCCloud subinstance

◆ destroySharedInstance()

static void destroySharedInstance ( )
inlinestatic

Terminate TRTCCloud instance (singleton mode)

◆ destroySubCloud()

abstract void destroySubCloud ( final TRTCCloud  subCloud)
abstract

Terminate room subinstance

Parameters
subCloud

◆ DisconnectOtherRoom()

abstract void DisconnectOtherRoom ( )
abstract

Exit cross-room call

The result will be returned through the onDisconnectOtherRoom() callback in TRTCCloudListener.

◆ enableAudioEarMonitoring()

abstract void enableAudioEarMonitoring ( boolean  enable)
abstract

Enable or disable in-ear monitoring

Deprecated:
This API is not recommended after v7.3. Please use TXAudioEffectManager#setVoiceEarMonitor instead.

◆ enableAudioVolumeEvaluation()

abstract void enableAudioVolumeEvaluation ( int  interval)
abstract

Enable volume reminder

After this feature is enabled, the SDK will return the remote audio volume in the onUserVoiceVolume callback of TRTCCloudListener.

Attention
To enable this feature, call this API before calling startLocalAudio.
Parameters
interval: Set the interval in ms for triggering the onUserVoiceVolume callback. The minimum interval is 100 ms. If the value is smaller than or equal to 0, the callback will be disabled. We recommend you set this parameter to 300 ms.

◆ enableCustomAudioCapture()

abstract void enableCustomAudioCapture ( boolean  enable)
abstract

Enable custom audio capturing mode

After this mode is enabled, the SDK will not run the original audio capturing process (i.e., stopping mic data capturing) and will retain only the audio encoding and sending capabilities. You need to use sendCustomAudioData to continuously insert the captured audio data into the SDK.

Parameters
enable: Whether to enable. Default value: false
Attention
As acoustic echo cancellation (AEC) requires strict control over the audio capturing and playback time, after custom audio capturing is enabled, AEC may fail.
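As a rough sketch of the flow above, the snippet below enables custom capturing and then feeds one PCM frame into the SDK. The pcmBuffer variable and the 48 kHz/mono format are illustrative assumptions; the TRTCAudioFrame fields (sampleRate, channel, data, timestamp) follow their descriptions elsewhere in this reference.

```java
// Sketch only: enable custom audio capturing, then feed PCM frames yourself.
trtc.enableCustomAudioCapture(true);

// Build one audio frame (48 kHz mono is an assumed format; pcmBuffer holds
// 20 ms of PCM data captured by your own pipeline).
TRTCCloudDef.TRTCAudioFrame frame = new TRTCCloudDef.TRTCAudioFrame();
frame.sampleRate = 48000;
frame.channel = 1;
frame.data = pcmBuffer;
frame.timestamp = trtc.generateCustomPTS(); // PTS taken at capture time
trtc.sendCustomAudioData(frame);
```

Call sendCustomAudioData continuously (about once per frame duration) so that the SDK always has audio data to encode and send.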

◆ enableCustomAudioRendering()

abstract void enableCustomAudioRendering ( boolean  enable)
abstract

Enable custom audio playback

You can use this API to enable custom audio playback if you want to connect to an external audio device or control the audio playback logic by yourself. After you enable custom audio playback, the SDK will stop using its audio API to play back audio. You need to call getCustomAudioRenderingFrame to get audio frames and play them by yourself.

Parameters
enable: Whether to enable custom audio playback. It is disabled by default.
Attention
The parameter must be set before room entry to take effect.

◆ enableCustomVideoCapture() [1/2]

abstract void enableCustomVideoCapture ( boolean  enable)
abstract

Enable custom video capturing mode

Deprecated:
This API is not recommended after v8.5. Please use enableCustomVideoCapture(streamType,enable) instead.

◆ enableCustomVideoCapture() [2/2]

abstract void enableCustomVideoCapture ( int  streamType,
boolean  enable 
)
abstract

Enable/Disable custom video capturing mode

After this mode is enabled, the SDK will not run the original video capturing process (i.e., stopping camera data capturing and beauty filter operations) and will retain only the video encoding and sending capabilities. You need to use sendCustomVideoData to continuously insert the captured video image into the SDK.

Parameters
streamType: Specify the video stream type (TRTCVideoStreamTypeBig: HD big image; TRTCVideoStreamTypeSub: substream image).
enable: Whether to enable. Default value: false

◆ enableEncSmallVideoStream()

abstract int enableEncSmallVideoStream ( boolean  enable,
TRTCCloudDef.TRTCVideoEncParam  smallVideoEncParam 
)
abstract

Enable dual-channel encoding mode with big and small images

In this mode, the current user's encoder will output two channels of video streams, i.e., HD big image and Smooth small image, at the same time (only one channel of audio stream will be output though). In this way, other users in the room can choose to subscribe to the HD big image or Smooth small image according to their own network conditions or screen size.

Attention
Dual-channel encoding will consume more CPU resources and network bandwidth; therefore, this feature can be enabled on macOS, Windows, or high-spec tablets, but is not recommended for phones.
Parameters
enable: Whether to enable small image encoding. Default value: false
smallVideoEncParam: Video parameters of the small image stream
Returns
0: success; -1: the current big image has been set to a lower quality, and it is not necessary to enable dual-channel encoding
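The dual-channel setup described above might look like the sketch below, with a viewer on a poor network switching to the small image via setRemoteVideoStreamType. The TRTCVideoEncParam field names, the constant names, and the resolution/fps/bitrate values are assumptions for illustration, not taken from this page.

```java
// Publisher side: also encode a low-quality small image (values are illustrative).
TRTCCloudDef.TRTCVideoEncParam smallParam = new TRTCCloudDef.TRTCVideoEncParam();
smallParam.videoResolution = TRTCCloudDef.TRTC_VIDEO_RESOLUTION_320_180; // assumed constant
smallParam.videoFps = 15;
smallParam.videoBitrate = 100; // kbps, illustrative
trtc.enableEncSmallVideoStream(true, smallParam);

// Viewer side: subscribe to the small image of a remote user instead of the big one.
trtc.setRemoteVideoStreamType("userB", TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_SMALL);
```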

◆ enableMixExternalAudioFrame()

abstract void enableMixExternalAudioFrame ( boolean  enablePublish,
boolean  enablePlayout 
)
abstract

Enable/Disable custom audio track

After this feature is enabled, you can mix a custom audio track into the SDK through this API. With two boolean parameters, you can control whether to play back this track remotely or locally.

Parameters
enablePublish: Whether the mixed audio track should be played back remotely. Default value: false
enablePlayout: Whether the mixed audio track should be played back locally. Default value: false
Attention
If you specify both enablePublish and enablePlayout as false, the custom audio track will be completely closed.

◆ enableTorch()

abstract boolean enableTorch ( boolean  enable)
abstract

Enable/Disable flash

Deprecated:
This API is not recommended after v8.0. Please use the enableCameraTorch API in TXDeviceManager instead.

◆ enterRoom()

abstract void enterRoom ( TRTCCloudDef.TRTCParams  param,
int  scene 
)
abstract

Enter room

All TRTC users need to enter a room before they can "publish" or "subscribe to" audio/video streams. "Publishing" refers to pushing their own streams to the cloud, and "subscribing to" refers to pulling the streams of other users in the room from the cloud. When calling this API, you need to specify your application scenario (TRTCAppScene) to get the best audio/video transfer experience. We provide the following four scenarios for your choice:

  • TRTCAppSceneVideoCall: Video call scenario. Use cases: [one-to-one video call], [video conferencing with up to 300 participants], [online medical diagnosis], [small class], [video interview], etc. In this scenario, each room supports up to 300 concurrent online users, and up to 50 of them can speak simultaneously.
  • TRTCAppSceneAudioCall: Audio call scenario. Use cases: [one-to-one audio call], [audio conferencing with up to 300 participants], [audio chat], [online Werewolf], etc. In this scenario, each room supports up to 300 concurrent online users, and up to 50 of them can speak simultaneously.
  • TRTCAppSceneLIVE: Live streaming scenario. Use cases: [low-latency video live streaming], [interactive classroom for up to 100,000 participants], [live video competition], [video dating room], [remote training], [large-scale conferencing], etc. In this scenario, each room supports up to 100,000 concurrent online users, but you should specify the user roles: anchor (TRTCRoleAnchor) or audience (TRTCRoleAudience).
  • TRTCAppSceneVoiceChatRoom: Audio chat room scenario. Use cases: [Clubhouse], [online karaoke room], [music live room], [FM radio], etc. In this scenario, each room supports up to 100,000 concurrent online users, but you should specify the user roles: anchor (TRTCRoleAnchor) or audience (TRTCRoleAudience).

After calling this API, you will receive the onEnterRoom(result) callback from TRTCCloudListener:

  • If room entry succeeded, the result parameter will be a positive number (result > 0), indicating the time in milliseconds (ms) between the function call and room entry.
  • If room entry failed, the result parameter will be a negative number (result < 0), indicating the error code for the room entry failure.

Parameters
param: Room entry parameter, which is used to specify the user's identity, role, authentication credentials, and other information. For more information, please see TRTCParams.
scene: Application scenario, which is used to specify the use case. The same TRTCAppScene should be configured for all users in the same room.
Attention
  1. If scene is specified as TRTCAppSceneLIVE or TRTCAppSceneVoiceChatRoom, you must use the role field in TRTCParams to specify the role of the current user in the room.
  2. The same scene should be configured for all users in the same room.
  3. Please try to ensure that enterRoom and exitRoom are used in pairs; that is, make sure that the previous room is exited before the next room is entered; otherwise, many issues may occur.
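A minimal room-entry sketch based on the description above. The sdkAppId and userSig values are placeholders you obtain from your own account and signature server, and the TRTCParams field names are assumptions consistent with common TRTC usage; the scene and role constants appear in the sample code elsewhere on this page.

```java
// Sketch: enter a live-streaming room as an anchor.
TRTCCloudDef.TRTCParams params = new TRTCCloudDef.TRTCParams();
params.sdkAppId = 1400000000;                    // placeholder application ID
params.userId = "userA";
params.userSig = "<signature-from-your-server>"; // authentication credential
params.roomId = 101;
params.role = TRTCCloudDef.TRTCRoleAnchor;       // required for the LIVE scene

TRTCCloud trtc = TRTCCloud.sharedInstance(context);
trtc.setListener(listener);                      // onEnterRoom(result) arrives here
trtc.enterRoom(params, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
```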

◆ exitRoom()

abstract void exitRoom ( )
abstract

Exit room

Calling this API will allow the user to leave the current audio or video room and release the camera, mic, speaker, and other device resources. After the resources are released, the SDK will use the onExitRoom() callback in TRTCCloudListener to notify you. If you need to call enterRoom again or switch to the SDK of another provider, we recommend you wait until you receive the onExitRoom() callback, so as to avoid the problem of the camera or mic being occupied.

◆ generateCustomPTS()

abstract long generateCustomPTS ( )
abstract

Generate custom capturing timestamp

This API is only suitable for the custom capturing mode and is used to solve the problem of out-of-sync audio/video caused by the inconsistency between the capturing time and delivery time of audio/video frames. When you call APIs such as sendCustomVideoData or sendCustomAudioData for custom video or audio capturing, please use this API as instructed below:

  1. First, when a video or audio frame is captured, call this API to get the corresponding PTS timestamp.
  2. Then, send the video or audio frame to the preprocessing module you use (such as a third-party beauty filter or sound effect component).
  3. When you actually call sendCustomVideoData or sendCustomAudioData for delivery, assign the PTS timestamp recorded when the frame was captured to the timestamp field in TRTCVideoFrame or TRTCAudioFrame.
Returns
Timestamp in ms
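The three steps above could be sketched as follows for custom video capture. Here beautyProcessor is a hypothetical third-party preprocessing component, and TRTC_VIDEO_STREAM_TYPE_BIG is the assumed Java constant for the HD big image stream.

```java
// 1. Record the PTS at the moment the frame is captured.
long pts = trtc.generateCustomPTS();

// 2. Run the frame through your own preprocessing (hypothetical component).
TRTCCloudDef.TRTCVideoFrame frame = beautyProcessor.process(capturedFrame);

// 3. Assign the captured PTS before delivering the frame to the SDK.
frame.timestamp = pts;
trtc.sendCustomVideoData(TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG, frame);
```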

◆ getAudioCaptureVolume()

abstract int getAudioCaptureVolume ( )
abstract

Get the capturing volume of local audio

◆ getAudioEffectManager()

abstract TXAudioEffectManager getAudioEffectManager ( )
abstract

Get sound effect management class (TXAudioEffectManager)

TXAudioEffectManager is a sound effect management API, through which you can implement the following features:

  • Background music: both online music and local music can be played back with various features such as speed adjustment, pitch adjustment, original voice, accompaniment, and loop.
  • In-ear monitoring: the sound captured by the mic is played back in the headphones in real time, which is generally used for music live streaming.
  • Reverb effect: karaoke room, small room, big hall, deep, resonant, and other effects.
  • Voice changing effect: young girl, middle-aged man, heavy metal, and other effects.
  • Short sound effect: short sound effect files such as applause and laughter are supported (for files less than 10 seconds in length, please set the isShortFile parameter to true).

◆ getAudioPlayoutVolume()

abstract int getAudioPlayoutVolume ( )
abstract

Get the playback volume of remote audio

◆ getBeautyManager()

abstract TXBeautyManager getBeautyManager ( )
abstract

Get beauty filter management class (TXBeautyManager)

You can use the following features with beauty filter management:

  • Set beauty effects such as "skin smoothing", "brightening", and "rosy skin".
  • Set face adjustment effects such as "eye enlarging", "face slimming", "chin slimming", "chin lengthening/shortening", "face shortening", "nose narrowing", "eye brightening", "teeth whitening", "eye bag removal", "wrinkle removal", and "smile line removal".
  • Set face adjustment effects such as "hairline", "eye distance", "eye corners", "mouth shape", "nose wing", "nose position", "lip thickness", and "face shape".
  • Set makeup effects such as "eye shadow" and "blush".
  • Set animated effects such as animated sticker and facial pendant.

◆ getBGMDuration()

abstract int getBGMDuration ( String  path)
abstract

Get the total length of background music in ms

Deprecated:
This API is not recommended after v7.3. Please use TXAudioEffectManager#getMusicDurationInMS instead.

◆ getCustomAudioRenderingFrame()

abstract void getCustomAudioRenderingFrame ( final TRTCCloudDef.TRTCAudioFrame  audioFrame)
abstract

Get playable audio data

Before calling this API, you need to first enable custom audio playback using enableCustomAudioRendering. Fill the fields in TRTCAudioFrame as follows (other fields are not required):

  • sampleRate: sample rate (required). Valid values: 16000, 24000, 32000, 44100, 48000
  • channel: number of sound channels (required). 1: mono-channel; 2: dual-channel; if dual-channel is used, data is interleaved.
  • data: the buffer used to get audio data. You need to allocate memory for the buffer based on the duration of an audio frame. The PCM data obtained can have a frame duration of 10 ms or 20 ms; 20 ms is recommended. Assume a sample rate of 48000 and a mono channel: the buffer size for a 20 ms audio frame would be 48000 x 0.02s x 1 x 16 bit = 15360 bit = 1920 bytes.
Parameters
audioFrame: Audio frame
Attention
  1. You must set sampleRate and channel in audioFrame, and allocate memory for one frame of audio in advance.
  2. The SDK will fill the data automatically based on sampleRate and channel.
  3. We recommend that you use the system’s audio playback thread to drive the calling of this API, so that it is called each time the playback of an audio frame is complete.
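The buffer-size arithmetic above can be captured in a small helper. This class is not part of the SDK; it just reproduces the 16-bit PCM math so you can size the data buffer before calling getCustomAudioRenderingFrame.

```java
// Helper for sizing the TRTCAudioFrame data buffer (not part of the SDK).
public final class PcmBufferSize {
    /** Bytes needed for one 16-bit PCM frame of the given duration. */
    public static int bytesPerFrame(int sampleRate, int channels, int frameMs) {
        // samples per frame * channels * 2 bytes per 16-bit sample
        return sampleRate * frameMs / 1000 * channels * 2;
    }

    public static void main(String[] args) {
        // 20 ms at 48000 Hz, mono -> 1920 bytes, matching the worked example above.
        System.out.println(PcmBufferSize.bytesPerFrame(48000, 1, 20));
    }
}
```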

◆ getDeviceManager()

abstract TXDeviceManager getDeviceManager ( )
abstract

Get device management class (TXDeviceManager)

◆ getSDKVersion()

static String getSDKVersion ( )
inlinestatic

Get SDK version information

◆ isCameraAutoFocusFaceModeSupported()

abstract boolean isCameraAutoFocusFaceModeSupported ( )
abstract

Query whether the device supports the automatic recognition of face position

Deprecated:
This API is not recommended after v8.0. Please use the isAutoFocusEnabled API in TXDeviceManager instead.

◆ isCameraFocusPositionInPreviewSupported()

abstract boolean isCameraFocusPositionInPreviewSupported ( )
abstract

Query whether the camera supports setting focus

Deprecated:
This API is not recommended after v8.0.

◆ isCameraTorchSupported()

abstract boolean isCameraTorchSupported ( )
abstract

Query whether the device supports flash

Deprecated:
This API is not recommended after v8.0. Please use the isCameraTorchSupported API in TXDeviceManager instead.

◆ isCameraZoomSupported()

abstract boolean isCameraZoomSupported ( )
abstract

Query whether the current camera supports zoom

Deprecated:
This API is not recommended after v8.0. Please use the isCameraZoomSupported API in TXDeviceManager instead.

◆ mixExternalAudioFrame()

abstract int mixExternalAudioFrame ( TRTCCloudDef.TRTCAudioFrame  frame)
abstract

Mix custom audio track into SDK

Before you use this API to mix custom PCM audio into the SDK, you need to first enable custom audio tracks through enableMixExternalAudioFrame. You are expected to feed audio data into the SDK at an even pace, but we understand that it can be challenging to call an API at absolutely regular intervals. Given this, we have provided a buffer pool in the SDK, which can cache the audio data you pass in to reduce the fluctuations in intervals between API calls. The value returned by this API indicates the size (ms) of the buffer pool. For example, if 50 is returned, it indicates that the buffer pool has 50 ms of audio data. As long as you call this API again within 50 ms, the SDK can make sure that continuous audio data is mixed. If the value returned is 100 or greater, you can wait after an audio frame is played to call the API again. If the value returned is smaller than 100, then there isn’t enough data in the buffer pool, and you should feed more audio data into the SDK until the data in the buffer pool is above the safety level. Fill the fields in TRTCAudioFrame as follows (other fields are not required).

  • data: audio frame buffer. Audio frames must be in PCM format. Each frame can be 5-100 ms (20 ms is recommended) in duration. Assume that the sample rate is 48000 and the audio is mono-channel. Then the size of a 20 ms frame would be 48000 x 0.02s x 1 x 16 bit = 15360 bit = 1920 bytes.
  • sampleRate: sample rate. Valid values: 16000, 24000, 32000, 44100, 48000
  • channel: number of sound channels (if dual-channel is used, data is interleaved). Valid values: 1 (mono-channel); 2 (dual channel)
  • timestamp: timestamp (ms). Set it to the timestamp when audio frames are captured, which you can obtain by calling generateCustomPTS after getting an audio frame.
Parameters
frame    Audio data
Returns
If the value returned is 0 or greater, the value represents the current size of the buffer pool; if the value returned is smaller than 0, it means that an error occurred. -1 indicates that you didn’t call enableMixExternalAudioFrame to enable custom audio tracks.
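The return-value handling described above can be sketched as follows; the threshold constant and helper names are illustrative, not SDK API:

```java
// Illustrative helper (not SDK API): interpret the value returned by
// mixExternalAudioFrame, following the description above.
class MixPacingHelper {
    // The documentation treats 100 ms of buffered audio as the safety level.
    static final int SAFETY_LEVEL_MS = 100;

    // A negative return value signals an error (e.g. -1: custom audio
    // tracks were not enabled via enableMixExternalAudioFrame).
    static boolean isError(int result) {
        return result < 0;
    }

    // true: the buffer pool holds enough data to wait until the current
    // frame finishes playing; false: feed more audio data immediately.
    static boolean canWaitOneFrame(int bufferedMs) {
        return bufferedMs >= SAFETY_LEVEL_MS;
    }
}
```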

◆ muteAllRemoteAudio()

abstract void muteAllRemoteAudio ( boolean  mute)
abstract

Pause/Resume playing back all remote users' audio streams

When you mute the audio of all remote users, the SDK will stop playing back all their audio streams and pulling all their audio data.

Parameters
mute    true: mute; false: unmute
Attention
This API works when called either before or after room entry (enterRoom), and the mute status will be reset to false after room exit (exitRoom).

◆ muteAllRemoteVideoStreams()

abstract void muteAllRemoteVideoStreams ( boolean  mute)
abstract

Pause/Resume subscribing to all remote users' video streams

This API only pauses/resumes receiving all users' video streams but does not release displaying resources; therefore, the video image will freeze at the last frame before it is called.

Parameters
mute    Whether to pause receiving
Attention
This API can be called before room entry (enterRoom), and the pause status will be reset after room exit (exitRoom).

◆ muteLocalAudio()

abstract void muteLocalAudio ( boolean  mute)
abstract

Pause/Resume publishing local audio stream

After local audio publishing is paused, other users in the room will receive the onUserAudioAvailable(userId, false) notification. After local audio publishing is resumed, other users in the room will receive the onUserAudioAvailable(userId, true) notification. Different from stopLocalAudio, muteLocalAudio(true) does not release the mic permission; instead, it continues to send mute packets with extremely low bitrate. This is very suitable for scenarios that require on-cloud recording, as video file formats such as MP4 have a high requirement for audio continuity, while an MP4 recording file cannot be played back smoothly if stopLocalAudio is used. Therefore, muteLocalAudio instead of stopLocalAudio is recommended in scenarios where the requirement for recording file quality is high.

Parameters
mute    true: mute; false: unmute

◆ muteLocalVideo() [1/2]

abstract void muteLocalVideo ( boolean  mute)
abstract

Pause/Resume publishing local video stream

Deprecated:
This API is not recommended after v8.9. Please use muteLocalVideo(streamType, mute) instead.

◆ muteLocalVideo() [2/2]

abstract void muteLocalVideo ( int  streamType,
boolean  mute 
)
abstract

Pause/Resume publishing local video stream

This API can pause (or resume) publishing the local video image. After the pause, other users in the same room will not be able to see the local image. This API is equivalent to the two APIs of startLocalPreview/stopLocalPreview when TRTCVideoStreamTypeBig is specified, but has higher performance and response speed. The startLocalPreview/stopLocalPreview APIs need to enable/disable the camera, which are hardware device-related operations, so they are very time-consuming. In contrast, muteLocalVideo only needs to pause or allow the data stream at the software level, so it is more efficient and more suitable for scenarios where frequent enabling/disabling are needed. After local video publishing is paused, other members in the same room will receive the onUserVideoAvailable(userId, false) callback notification. After local video publishing is resumed, other members in the same room will receive the onUserVideoAvailable(userId, true) callback notification.

Parameters
streamType    Specify the video stream to pause (or resume). Only TRTCVideoStreamTypeBig and TRTCVideoStreamTypeSub are supported
mute    true: pause; false: resume

◆ muteRemoteAudio()

abstract void muteRemoteAudio ( String  userId,
boolean  mute 
)
abstract

Pause/Resume playing back remote audio stream

When you mute the remote audio of a specified user, the SDK will stop playing back the user's audio and pulling the user's audio data.

Parameters
userId    ID of the specified remote user
mute    true: mute; false: unmute
Attention
This API works when called either before or after room entry (enterRoom), and the mute status will be reset to false after room exit (exitRoom).

◆ muteRemoteVideoStream() [1/2]

abstract void muteRemoteVideoStream ( String  userId,
boolean  mute 
)
abstract

Pause/Resume subscribing to remote user's video stream

Deprecated:
This API is not recommended after v8.9. Please use muteRemoteVideoStream(userId, streamType, mute) instead.

◆ muteRemoteVideoStream() [2/2]

abstract void muteRemoteVideoStream ( String  userId,
int  streamType,
boolean  mute 
)
abstract

Pause/Resume subscribing to remote user's video stream

This API only pauses/resumes receiving the specified user's video stream but does not release displaying resources; therefore, the video image will freeze at the last frame before it is called.

Parameters
userId    ID of the specified remote user
streamType    Specify the video stream to pause (or resume). Only TRTCVideoStreamTypeBig and TRTCVideoStreamTypeSub are supported
mute    Whether to pause receiving
Attention
This API can be called before room entry (enterRoom), and the pause status will be reset after room exit (exitRoom).

◆ pauseAudioEffect()

abstract void pauseAudioEffect ( int  effectId)
abstract

Pause sound effect

Deprecated:
This API is not recommended after v7.3. Please use TXAudioEffectManager#pauseAudioEffect instead.

◆ pauseBGM()

abstract void pauseBGM ( )
abstract

Pause background music

Deprecated:
This API is not recommended after v7.3. Please use getAudioEffectManager instead.

◆ pauseScreenCapture()

abstract void pauseScreenCapture ( )
abstract

Pause screen sharing

◆ playAudioEffect()

abstract void playAudioEffect ( TRTCCloudDef.TRTCAudioEffectParam  effect)
abstract

Play sound effect

Deprecated:
This API is not recommended after v7.3. Please use TXAudioEffectManager#startPlayMusic instead.

◆ playBGM()

abstract void playBGM ( String  path,
BGMNotify  notify 
)
abstract

Start background music

Deprecated:
This API is not recommended after v7.3. Please use getAudioEffectManager instead.

◆ resumeAudioEffect()

abstract void resumeAudioEffect ( int  effectId)
abstract

Resume sound effect

Deprecated:
This API is not recommended after v7.3. Please use TXAudioEffectManager#resumePlayMusic instead.

◆ resumeBGM()

abstract void resumeBGM ( )
abstract

Resume background music

Deprecated:
This API is not recommended after v7.3. Please use getAudioEffectManager instead.

◆ resumeScreenCapture()

abstract void resumeScreenCapture ( )
abstract

Resume screen sharing

◆ selectMotionTmpl()

abstract void selectMotionTmpl ( String  motionPath)
abstract

Set animated sticker

Deprecated:
This API is not recommended after v6.9. Please use getBeautyManager instead.

◆ sendCustomAudioData()

abstract void sendCustomAudioData ( TRTCCloudDef.TRTCAudioFrame  frame)
abstract

Deliver captured audio data to SDK

We recommend you enter the following information for the TRTCAudioFrame parameter (other fields can be left empty):

  • audioFormat: audio data format, which can only be TRTCAudioFrameFormatPCM.
  • data: audio frame buffer. Audio frame data must be in PCM format, and it supports a frame length of 5–100 ms (20 ms is recommended). Length calculation method: for example, if the sample rate is 48000, then the length of a 20 ms mono-channel frame will be 48000 * 0.02s * 1 * 16 bit = 15360 bit = 1920 bytes.
  • sampleRate: sample rate. Valid values: 16000, 24000, 32000, 44100, 48000.
  • channel: number of channels (if stereo is used, data is interwoven). Valid values: 1: mono channel; 2: dual channel.
  • timestamp: timestamp (ms). Set it to the timestamp when audio frames are captured, which you can obtain by calling generateCustomPTS after getting an audio frame.

For more information, please see Custom Capturing and Rendering.

Parameters
frame    Audio data
Attention
Please call this API accurately at intervals of the frame length; otherwise, sound lag may occur due to uneven data delivery intervals.

◆ sendCustomCmdMsg()

abstract boolean sendCustomCmdMsg ( int  cmdID,
byte[]  data,
boolean  reliable,
boolean  ordered 
)
abstract

Use UDP channel to send custom message to all users in room

This API allows you to use TRTC's UDP channel to broadcast custom data to other users in the current room for signaling transfer. The UDP channel in TRTC was originally designed to transfer audio/video data. This API works by disguising the signaling data you want to send as audio/video data packets and sending them together with the audio/video data. Other users in the room can receive the message through the onRecvCustomCmdMsg callback in TRTCCloudListener.

Parameters
cmdID    Message ID. Value range: 1–10
data    Message to be sent. The maximum length of one single message is 1 KB.
reliable    Whether reliable sending is enabled. Reliable sending can achieve a higher success rate but with a longer reception delay than unreliable sending.
ordered    Whether orderly sending is enabled, i.e., whether the data packets should be received in the same order in which they are sent; if so, a certain delay will be caused.
Returns
true: sent the message successfully; false: failed to send the message.
Attention
  1. Up to 30 messages can be sent per second to all users in the room (this is not supported for web and mini program currently).
  2. A packet can contain up to 1 KB of data; if the threshold is exceeded, the packet is very likely to be discarded by the intermediate router or server.
  3. A client can send up to 8 KB of data in total per second.
  4. reliable and ordered must be set to the same value (true or false) and cannot be set to different values currently.
  5. We strongly recommend you set different cmdID values for messages of different types. This can reduce message delay when orderly sending is required.
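A sender-side guard enforcing the documented limits (cmdID range, 1 KB per message, 30 messages and 8 KB per second) might look like the sketch below; the SDK does not expose such a helper, so all names are illustrative:

```java
// Illustrative sender-side guard (not SDK API) for the sendCustomCmdMsg
// limits listed above: cmdID in [1,10], <= 1 KB per message, and at most
// 30 messages / 8 KB of data per second.
class CmdMsgLimiter {
    private long windowStartMs = 0;
    private int msgsInWindow = 0;
    private int bytesInWindow = 0;

    // Returns true if the message may be sent at time nowMs (milliseconds).
    boolean allow(int cmdID, byte[] data, long nowMs) {
        if (cmdID < 1 || cmdID > 10 || data.length > 1024) return false;
        if (nowMs - windowStartMs >= 1000) {  // start a new one-second window
            windowStartMs = nowMs;
            msgsInWindow = 0;
            bytesInWindow = 0;
        }
        if (msgsInWindow >= 30 || bytesInWindow + data.length > 8 * 1024) return false;
        msgsInWindow++;
        bytesInWindow += data.length;
        return true;
    }
}
```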

◆ sendCustomVideoData() [1/2]

abstract void sendCustomVideoData ( int  streamType,
TRTCCloudDef.TRTCVideoFrame  frame 
)
abstract

Deliver captured video frames to SDK

You can use this API to deliver video frames you capture to the SDK, and the SDK will encode and transfer them through its own network module. There are two delivery schemes for Android:

  • Memory-based delivery scheme: its connection is easy but its performance is poor, so it is not suitable for scenarios with high resolution.
  • Video memory-based delivery scheme: its connection requires certain knowledge in OpenGL, but its performance is good. For resolution higher than 640x360, please use this scheme.

For more information, please see Custom Capturing and Rendering.

Parameters
streamType    Specify the video stream type (TRTCVideoStreamTypeBig: HD big image; TRTCVideoStreamTypeSub: substream image).
frame    Video data. If the memory-based delivery scheme is used, please set the data field; if the video memory-based delivery scheme is used, please set the texture (TRTCTexture) field. For more information, please see TRTCVideoFrame. We recommend you call the generateCustomPTS API to get the timestamp value of a video frame immediately after capturing it, so as to achieve the best audio/video sync effect. The video frame rate eventually encoded by the SDK is not determined by the frequency at which you call this API, but by the FPS you set in setVideoEncoderParam. Please try to keep the calling interval of this API even; otherwise, problems will be caused, such as an unstable output frame rate of the encoder or out-of-sync audio/video.

◆ sendCustomVideoData() [2/2]

abstract void sendCustomVideoData ( TRTCCloudDef.TRTCVideoFrame  frame)
abstract

Deliver captured video data to SDK

Deprecated:
This API is not recommended after v8.5. Please use sendCustomVideoData(streamType, TRTCVideoFrame) instead.

◆ sendSEIMsg()

abstract boolean sendSEIMsg ( byte[]  data,
int  repeatCount 
)
abstract

Use SEI channel to send custom message to all users in room

This API allows you to use TRTC's SEI channel to broadcast custom data to other users in the current room for signaling transfer. The header of a video frame has a data block called SEI. This API works by embedding the custom signaling data you want to send in the SEI block and sending it together with the video frame. Therefore, the SEI channel has better compatibility than sendCustomCmdMsg, as the signaling data can be transferred to the CSS CDN along with the video frame. However, because the data block of the video frame header cannot be too large, we recommend you limit the size of the signaling data to only a few bytes when using this API. The most common use is to embed a custom timestamp into video frames through this API so as to implement a perfect alignment between the message and video image (such as between the teaching material and video signal in the education scenario). Other users in the room can receive the message through the onRecvSEIMsg callback in TRTCCloudListener.

Parameters
data    Data to be sent, which can be up to 1 KB (1,000 bytes)
repeatCount    Data sending count
Returns
true: the message is allowed and will be sent with subsequent video frames; false: the message is not allowed to be sent
Attention
This API has the following restrictions:
  1. The data will not be instantly sent after this API is called; instead, it will be inserted into the next video frame after the API call.
  2. Up to 30 messages can be sent per second to all users in the room (this limit is shared with sendCustomCmdMsg).
  3. Each packet can be up to 1 KB (this limit is shared with sendCustomCmdMsg). If a large amount of data is sent, the video bitrate will increase, which may reduce the video quality or even cause lagging.
  4. Each client can send up to 8 KB of data in total per second (this limit is shared with sendCustomCmdMsg).
  5. If multiple times of sending is required (i.e., repeatCount > 1), the data will be inserted into subsequent repeatCount video frames in a row for sending, which will increase the video bitrate.
  6. If repeatCount is greater than 1, the data will be sent for multiple times, and the same message may be received multiple times in the onRecvSEIMsg callback; therefore, deduplication is required.
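The common timestamp use case above can be sketched as a small codec for the SEI payload; the helper itself is illustrative, not SDK API:

```java
import java.nio.ByteBuffer;

// Illustrative helper (not SDK API): pack a millisecond timestamp into an
// 8-byte payload for sendSEIMsg, and unpack it again in onRecvSEIMsg.
class SeiTimestamp {
    static byte[] encode(long timestampMs) {
        return ByteBuffer.allocate(Long.BYTES).putLong(timestampMs).array();
    }

    static long decode(byte[] data) {
        return ByteBuffer.wrap(data).getLong();
    }
}
```

An 8-byte payload stays well within the few-bytes guideline above.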

◆ setAllAudioEffectsVolume()

abstract void setAllAudioEffectsVolume ( int  volume)
abstract

Set the volume of all sound effects

Deprecated:
This API is not recommended after v7.3. Please use TXAudioEffectManager#setMusicPublishVolume and TXAudioEffectManager#setMusicPlayoutVolume instead.

◆ setAudioCaptureVolume()

abstract void setAudioCaptureVolume ( int  volume)
abstract

Set the capturing volume of local audio

Parameters
volume    Volume. 100 is the original volume. Value range: [0,150]. Default value: 100
Attention
If 100 is still not loud enough for you, you can set the volume to up to 150, but there may be side effects.

◆ setAudioEffectVolume()

abstract void setAudioEffectVolume ( int  effectId,
int  volume 
)
abstract

Set sound effect volume

Deprecated:
This API is not recommended after v7.3. Please use TXAudioEffectManager#setMusicPublishVolume and TXAudioEffectManager#setMusicPlayoutVolume instead.

◆ setAudioFrameListener()

abstract void setAudioFrameListener ( TRTCCloudListener.TRTCAudioFrameListener  listener)
abstract

Set custom audio data callback

After this callback is set, the SDK will internally call back the audio data (in PCM format).

Attention
Setting the callback to null indicates to stop the custom audio callback, while setting it to a non-null value indicates to start the custom audio callback.

◆ setAudioPlayoutVolume()

abstract void setAudioPlayoutVolume ( int  volume)
abstract

Set the playback volume of remote audio

This API controls the volume of the sound ultimately delivered by the SDK to the system for playback. It affects the volume of the recorded local audio file but not the volume of in-ear monitoring.

Parameters
volume    Volume. 100 is the original volume. Value range: [0,150]. Default value: 100
Attention
If 100 is still not loud enough for you, you can set the volume to up to 150, but there may be side effects.

◆ setAudioQuality()

abstract void setAudioQuality ( int  quality)
abstract

Set sound quality

Deprecated:
This API is not recommended after v8.0. Please use startLocalAudio:(quality) instead.

◆ setAudioRoute()

abstract void setAudioRoute ( int  route)
abstract

Set audio route

Setting "audio route" is to determine whether the sound is played back from the speaker or receiver of a mobile device; therefore, this API is only applicable to mobile devices such as phones. Generally, a phone has two speakers: one is the receiver at the top, and the other is the stereo speaker at the bottom. If audio route is set to the receiver, the volume is relatively low, and the sound can be heard clearly only when the phone is put near the ear. This mode has a high level of privacy and is suitable for answering calls. If audio route is set to the speaker, the volume is relatively high, so there is no need to put the phone near the ear. Therefore, this mode can implement the "hands-free" feature.

Parameters
route    Audio route, i.e., whether the audio is output by the speaker or receiver. Default value: TRTCAudioModeSpeakerphone

◆ setBeautyStyle()

abstract void setBeautyStyle ( int  beautyStyle,
int  beautyLevel,
int  whitenessLevel,
int  ruddinessLevel 
)
abstract

Set the strength of beauty, brightening, and rosy skin filters

Deprecated:
This API is not recommended after v6.9. Please use getBeautyManager instead.

◆ setBGMPlayoutVolume()

abstract void setBGMPlayoutVolume ( int  volume)
abstract

Set the local playback volume of background music

Deprecated:
This API is not recommended after v7.3. Please use TXAudioEffectManager#setMusicPlayoutVolume instead.

◆ setBGMPosition()

abstract int setBGMPosition ( int  pos)
abstract

Set background music playback progress

Deprecated:
This API is not recommended after v7.3. Please use TXAudioEffectManager#seekMusicToPosInMS instead.

◆ setBGMPublishVolume()

abstract void setBGMPublishVolume ( int  volume)
abstract

Set the remote playback volume of background music

Deprecated:
This API is not recommended after v7.3. Please use TXAudioEffectManager#setMusicPublishVolume instead.

◆ setBGMVolume()

abstract void setBGMVolume ( int  volume)
abstract

Set background music volume

Deprecated:
This API is not recommended after v7.3. Please use TXAudioEffectManager#setMusicVolume instead.

◆ setCapturedRawAudioFrameCallbackFormat()

abstract int setCapturedRawAudioFrameCallbackFormat ( TRTCCloudDef.TRTCAudioFrameCallbackFormat  format)
abstract

Set the callback format of original audio frames captured by local mic

This API is used to set the AudioFrame format called back by onCapturedRawAudioFrame:

  • sampleRate: sample rate. Valid values: 16000, 32000, 44100, 48000
  • channel: number of channels (if stereo is used, data is interwoven). Valid values: 1: mono channel; 2: dual channel
  • samplesPerCall: number of sample points, which defines the frame length of the callback data. The frame length must be an integer multiple of 10 ms.

To convert a frame length in milliseconds into a number of sample points, use: number of sample points = number of milliseconds * sample rate / 1000. For example, to call back 20 ms frames at a 48000 sample rate, the number of sample points should be 960 = 20 * 48000 / 1000. Note that the frame length of the final callback is measured in bytes: number of bytes = number of sample points * number of channels * 2 (bit width). For example, with a 48000 sample rate, dual channel, and a 20 ms frame length (960 sample points), the number of bytes is 3840 = 960 * 2 * 2.

Parameters
format    Audio data callback format
Returns
0: success; values smaller than 0: error
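The two conversion formulas above can also be written as runnable code; the class and method names are illustrative, not part of the SDK:

```java
// Illustrative helper (not SDK API) for the conversion formulas above.
class AudioFrameFormatMath {
    // number of sample points = number of milliseconds * sample rate / 1000
    static int samplesPerCall(int frameMs, int sampleRate) {
        return frameMs * sampleRate / 1000;
    }

    // number of bytes = number of sample points * number of channels * 2 (bit width)
    static int frameBytes(int samples, int channels) {
        return samples * channels * 2;
    }
}
```

For 20 ms frames at 48000 Hz, samplesPerCall(20, 48000) is 960, and frameBytes(960, 2) is 3840, matching the worked example above.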

◆ setChinLevel()

abstract void setChinLevel ( int  chinLevel)
abstract

Set the strength of chin lengthening/shortening filter

Deprecated:
This API is not recommended after v6.9. Please use getBeautyManager instead.

◆ setConsoleEnabled()

static void setConsoleEnabled ( boolean  enabled)
inlinestatic

Enable/Disable console log printing

Parameters
enabled    Specify whether to enable it, which is disabled by default

◆ setDebugViewMargin()

abstract void setDebugViewMargin ( String  userId,
TRTCViewMargin  margin 
)
abstract

◆ setDefaultStreamRecvMode()

abstract void setDefaultStreamRecvMode ( boolean  autoRecvAudio,
boolean  autoRecvVideo 
)
abstract

Set subscription mode (which must be set before room entry for it to take effect)

You can switch between the "automatic subscription" and "manual subscription" modes through this API:

  • Automatic subscription: this is the default mode, where the user will immediately receive the audio/video streams in the room after room entry, so that the audio will be automatically played back, and the video will be automatically decoded (you still need to bind the rendering control through the startRemoteView API).
  • Manual subscription: after room entry, the user needs to manually call the startRemoteView API to start subscribing to and decoding the video stream, and call the muteRemoteAudio(false) API to start playing back the audio stream.

In most scenarios, users will subscribe to the audio/video streams of all anchors in the room after room entry. Therefore, TRTC adopts the automatic subscription mode by default in order to achieve the best "instant streaming experience". If there are many audio/video streams being published at the same time in each room and each user only wants to subscribe to 1–2 of them, we recommend you use the "manual subscription" mode to reduce the traffic costs.

Parameters
autoRecvAudio    true: automatic subscription to audio; false: manual subscription to audio by calling muteRemoteAudio(false). Default value: true
autoRecvVideo    true: automatic subscription to video; false: manual subscription to video by calling startRemoteView. Default value: true
Attention
  1. The configuration takes effect only if this API is called before room entry (enterRoom).
  2. In the automatic subscription mode, if the user does not call startRemoteView to subscribe to the video stream after room entry, the SDK will automatically stop subscribing to the video stream in order to reduce the traffic consumption.

◆ setEyeScaleLevel()

abstract void setEyeScaleLevel ( int  eyeScaleLevel)
abstract

Set the strength of eye enlarging filter

Deprecated:
This API is not recommended after v6.9. Please use getBeautyManager instead.

◆ setFaceShortLevel()

abstract void setFaceShortLevel ( int  faceShortlevel)
abstract

Set the strength of face shortening filter

Deprecated:
This API is not recommended after v6.9. Please use getBeautyManager instead.

◆ setFaceSlimLevel()

abstract void setFaceSlimLevel ( int  faceScaleLevel)
abstract

Set the strength of face slimming filter

Deprecated:
This API is not recommended after v6.9. Please use getBeautyManager instead.

◆ setFaceVLevel()

abstract void setFaceVLevel ( int  faceVLevel)
abstract

Set the strength of chin slimming filter

Deprecated:
This API is not recommended after v6.9. Please use getBeautyManager instead.

◆ setFilter()

abstract void setFilter ( Bitmap  image)
abstract

Set color filter

Deprecated:
This API is not recommended after v7.2. Please use getBeautyManager instead.

◆ setFilterConcentration()

abstract void setFilterConcentration ( float  concentration)
abstract

Set the strength of color filter

Deprecated:
This API is not recommended after v7.2. Please use getBeautyManager instead.

◆ setFocusPosition()

abstract void setFocusPosition ( int  x,
int  y 
)
abstract

Set the focal position of camera

Deprecated:
This API is not recommended after v8.0. Please use the setCameraFocusPosition API in TXDeviceManager instead.

◆ setGreenScreenFile()

abstract boolean setGreenScreenFile ( String  file)
abstract

Set green screen video

Deprecated:
This API is not recommended after v7.2. Please use getBeautyManager instead.

◆ setGSensorMode()

abstract void setGSensorMode ( int  mode)
abstract

Set the adaptation mode of G-sensor

You can achieve a more user-friendly interactive experience through this API. When a phone or tablet is held upside down, the capturing direction of the camera does not change, so the video image viewed by other users in the room becomes upside-down. In this case, you can call this API to let the SDK automatically adjust the rotation direction of the local video image and the image output by the encoder according to the direction of the device's gyroscope, so that remote viewers see the image in the normal direction.

Parameters
mode    G-sensor mode. For more information, please see TRTCGSensorMode. Default value: TRTCGSensorMode_UIAutoLayout

◆ setListener()

abstract void setListener ( TRTCCloudListener  listener)
abstract

Set TRTC event callback

You can use TRTCCloudListener to get various event notifications from the SDK, such as error codes, warning codes, and audio/video status parameters.

◆ setListenerHandler()

abstract void setListenerHandler ( Handler  listenerHandler)
abstract

Set the thread that drives the TRTCCloudListener event callbacks

If you do not specify a listenerHandler, the SDK will use the main thread's Handler to drive TRTCCloudListener event callbacks by default. In other words, if you do not set the listenerHandler attribute, all callback functions in TRTCCloudListener will be driven on the main thread.

Parameters
listenerHandler    The Handler used to drive the event callbacks
Attention
If you specify a listenerHandler, please do not manipulate the UI in the TRTCCloudListener callback functions; otherwise, thread safety issues will occur.

◆ setLocalProcessedAudioFrameCallbackFormat()

abstract int setLocalProcessedAudioFrameCallbackFormat ( TRTCCloudDef.TRTCAudioFrameCallbackFormat  format)
abstract

Set the callback format of preprocessed local audio frames

This API is used to set the AudioFrame format called back by onLocalProcessedAudioFrame:

  • sampleRate: sample rate. Valid values: 16000, 32000, 44100, 48000
  • channel: number of channels (if stereo is used, data is interwoven). Valid values: 1: mono channel; 2: dual channel
  • samplesPerCall: number of sample points, which defines the frame length of the callback data. The frame length must be an integer multiple of 10 ms.

To convert a frame length in milliseconds into a number of sample points, use: number of sample points = number of milliseconds * sample rate / 1000. For example, to call back 20 ms frames at a 48000 sample rate, the number of sample points should be 960 = 20 * 48000 / 1000. Note that the frame length of the final callback is measured in bytes: number of bytes = number of sample points * number of channels * 2 (bit width). For example, with a 48000 sample rate, dual channel, and a 20 ms frame length (960 sample points), the number of bytes is 3840 = 960 * 2 * 2.

Parameters
format    Audio data callback format
Returns
0: success; values smaller than 0: error

◆ setLocalRenderParams()

abstract void setLocalRenderParams ( TRTCCloudDef.TRTCRenderParams  renderParams)
abstract

Set the rendering parameters of local video image

The parameters that can be set include video image rotation angle, fill mode, and mirror mode.

Parameters
renderParams    Video image rendering parameters. For more information, please see TRTCRenderParams.

◆ setLocalVideoProcessListener()

abstract int setLocalVideoProcessListener ( int  pixelFormat,
int  bufferType,
TRTCCloudListener.TRTCVideoFrameListener  listener 
)
abstract

Set video data callback for third-party beauty filters

After this callback is set, the SDK will call back the captured video frames through the listener you set and use them for further processing by a third-party beauty filter component. Then, the SDK will encode and send the processed video frames.

Parameters
pixelFormat    Specify the format of the pixels called back
bufferType    Specify the format of the data called back
listener    Custom preprocessing callback. For more information, please see TRTCCloudListener.TRTCVideoFrameListener
Returns
0: success; values smaller than 0: error

◆ setLocalVideoRenderListener()

abstract int setLocalVideoRenderListener ( int  pixelFormat,
int  bufferType,
TRTCCloudListener.TRTCVideoRenderListener  listener 
)
abstract

Set the callback of custom rendering for local video

After this callback is set, the SDK will skip its own rendering process and call back the captured data. Therefore, you need to complete image rendering on your own.

  • pixelFormat specifies the format of the data called back. Currently, Texture2D, I420, and RGBA formats are supported.
  • bufferType specifies the buffer type. BYTE_BUFFER is suitable for the JNI layer, while BYTE_ARRAY can be used in direct operations at the Java layer.

For more information, please see Custom Capturing and Rendering.

Parameters
pixelFormat    Specify the format of the video frame
bufferType    Specify the data structure of the video frame
listener    Callback of custom video rendering. The callback is returned once for each video frame
Returns
0: success; values smaller than 0: error

◆ setLocalViewFillMode()

abstract void setLocalViewFillMode ( int  mode)
abstract

Set the rendering mode of local image

Deprecated:
This API is not recommended after v8.0. Please use setLocalRenderParams instead.

◆ setLocalViewMirror()

abstract void setLocalViewMirror ( int  mirrorType)
abstract

Set the mirror mode of local camera's preview image

Deprecated:
This API is not recommended after v8.0. Please use setLocalRenderParams instead.

◆ setLocalViewRotation()

abstract void setLocalViewRotation ( int  rotation)
abstract

Set the clockwise rotation angle of local image

Deprecated:
This API is not recommended after v8.0. Please use setLocalRenderParams instead.

◆ setLogCompressEnabled()

static void setLogCompressEnabled ( boolean  enabled)
inlinestatic

Enable/Disable local log compression

If compression is enabled, the log size will be significantly reduced, but logs can be read only after being decompressed by the Python script provided by Tencent Cloud. If compression is disabled, logs will be stored in plaintext and can be read directly in Notepad, but will take up more storage capacity.

Parameters
enabled: Specify whether to enable it, which is enabled by default

◆ setLogDirPath()

static void setLogDirPath ( String  path)
inlinestatic

Set local log storage path

You can use this API to change the default storage path of the SDK's local logs, which is as follows:

  • Windows: C:/Users/[username]/AppData/Roaming/liteav/log, i.e., under %appdata%/liteav/log.
  • iOS or macOS: under sandbox Documents/log.
  • Android: under /app directory/files/log/liteav/.

Attention
Please be sure to call this API before all other APIs, and make sure that the directory you specify exists and your application has read/write permissions for it.
Parameters
path: Log storage path
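A minimal call-order sketch for the note above: configure the log directory before creating the TRTCCloud instance. The directory name trtc_log is an example, not an SDK requirement.

```java
// Pick a writable, app-private directory and make sure it exists.
File logDir = new File(context.getFilesDir(), "trtc_log");
logDir.mkdirs();
TRTCCloud.setLogDirPath(logDir.getAbsolutePath());

// Only after the log path is set, create the shared instance and
// call other TRTCCloud APIs.
TRTCCloud cloud = TRTCCloud.sharedInstance(context);
```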

◆ setLogLevel()

static void setLogLevel ( int  level)
inlinestatic

Set log output level

Parameters
level: For more information, please see TRTCLogLevel. Default value: TRTCLogLevelNone

◆ setLogListener()

static void setLogListener ( final TRTCCloudListener.TRTCLogListener  logListener)
inlinestatic

Set log callback

◆ setMicVolumeOnMixing()

abstract void setMicVolumeOnMixing ( int  volume)
abstract

Set mic volume

Deprecated:
This API is not recommended after v6.9. Please use setAudioCaptureVolume instead.

◆ setMixedPlayAudioFrameCallbackFormat()

abstract int setMixedPlayAudioFrameCallbackFormat ( TRTCCloudDef.TRTCAudioFrameCallbackFormat  format)
abstract

Set the callback format of audio frames to be played back by system

This API is used to set the AudioFrame format called back by onMixedPlayAudioFrame:

  • sampleRate: sample rate. Valid values: 16000, 32000, 44100, 48000
  • channel: number of channels (if stereo is used, data is interwoven). Valid values: 1: mono channel; 2: dual channel
  • samplesPerCall: number of sample points, which defines the frame length of the callback data. The frame length must be an integer multiple of 10 ms.

If you want to calculate the callback frame length in milliseconds, the formula for converting milliseconds into sample points is: number of sample points = number of milliseconds * sample rate / 1000. For example, to call back data with a 20 ms frame length at a 48000 Hz sample rate, set the number of sample points to 960 (20 * 48000 / 1000). Note that the length of the final callback data is in bytes; the formula for converting sample points into bytes is: number of bytes = number of sample points * number of channels * 2 (bit width). For example, with a 48000 Hz sample rate, dual channels, a 20 ms frame length, and 960 sample points, the number of bytes is 3840 (960 * 2 * 2).

Parameters
format: Audio data callback format
Returns
0: success; values smaller than 0: error
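The sample-point and byte-length formulas above can be sketched as plain arithmetic. The class name AudioFrameMath is illustrative and not part of the SDK.

```java
// Helper arithmetic for choosing samplesPerCall in the audio frame
// callback format. Mirrors the formulas in the documentation above.
public class AudioFrameMath {

    // number of sample points = number of milliseconds * sample rate / 1000
    static int samplesPerCall(int frameLengthMs, int sampleRate) {
        return frameLengthMs * sampleRate / 1000;
    }

    // number of bytes = number of sample points * number of channels * 2 (16-bit width)
    static int bytesPerCall(int samplePoints, int channels) {
        return samplePoints * channels * 2;
    }

    public static void main(String[] args) {
        int samples = samplesPerCall(20, 48000); // 20 ms at 48 kHz -> 960
        int bytes = bytesPerCall(samples, 2);    // stereo, 16-bit -> 3840
        System.out.println(samples + " sample points, " + bytes + " bytes");
    }
}
```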

◆ setMixExternalAudioVolume()

abstract void setMixExternalAudioVolume ( int  publishVolume,
int  playoutVolume 
)
abstract

Set the publish volume and playback volume of mixed custom audio track

Parameters
publishVolume: Set the publish volume, from 0 to 100. -1 means no change
playoutVolume: Set the playback volume, from 0 to 100. -1 means no change

◆ setMixTranscodingConfig()

abstract void setMixTranscodingConfig ( TRTCCloudDef.TRTCTranscodingConfig  config)
abstract

Set the layout and transcoding parameters of On-Cloud MixTranscoding

In a live room, there may be multiple anchors publishing their audio/video streams at the same time, but for audience on CSS CDN, they only need to watch one video stream in HTTP-FLV or HLS format. When you call this API, the SDK will send a command to the TRTC mixtranscoding server to combine multiple audio/video streams in the room into one stream. You can use the TRTCTranscodingConfig parameter to set the layout of each channel of image. You can also set the encoding parameters of the mixed audio/video streams. For more information, please see On-Cloud MixTranscoding.

    **Image 1** => decoding ====> \
                                   \
    **Image 2** => decoding => image mixing => encoding => **mixed image**
                                   /
    **Image 3** => decoding ====> /
    **Audio 1** => decoding ====> \
                                   \
    **Audio 2** => decoding => audio mixing => encoding => **mixed audio**
                                   /
    **Audio 3** => decoding ====> /
Parameters
config: If config is not empty, On-Cloud MixTranscoding will be started; otherwise, it will be stopped. For more information, please see TRTCTranscodingConfig.
Attention
Notes on On-Cloud MixTranscoding:
  • If the user calling this API does not set streamId in the config parameter, TRTC will mix the multiple channels of images in the room into the audio/video streams corresponding to the current user, i.e., A + B => A.
  • If the user calling this API sets streamId in the config parameter, TRTC will mix the multiple channels of images in the room into the specified streamId, i.e., A + B => streamId.
  • Please note that if you are still in the room but do not need mixtranscoding anymore, be sure to call this API again and leave config empty to cancel it; otherwise, additional fees may be incurred.
  • Please rest assured that TRTC will automatically cancel the mixtranscoding status upon room exit.
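A minimal sketch of starting On-Cloud MixTranscoding with two side-by-side anchors on a 720x640 canvas. All field values (canvas size, bitrate, user IDs, layout rectangles) are illustrative assumptions, not recommended settings.

```java
// Build a mixing config: a 720x640 canvas holding two 360x640 images.
TRTCCloudDef.TRTCTranscodingConfig config = new TRTCCloudDef.TRTCTranscodingConfig();
config.videoWidth = 720;
config.videoHeight = 640;
config.videoBitrate = 1500;  // Kbps
config.videoFramerate = 15;
config.mixUsers = new ArrayList<>();

TRTCCloudDef.TRTCMixUser anchorA = new TRTCCloudDef.TRTCMixUser();
anchorA.userId = "userA";               // hypothetical user ID
anchorA.x = 0; anchorA.y = 0;
anchorA.width = 360; anchorA.height = 640;
config.mixUsers.add(anchorA);

TRTCCloudDef.TRTCMixUser anchorB = new TRTCCloudDef.TRTCMixUser();
anchorB.userId = "userB";               // hypothetical user ID
anchorB.x = 360; anchorB.y = 0;
anchorB.width = 360; anchorB.height = 640;
config.mixUsers.add(anchorB);

cloud.setMixTranscodingConfig(config);

// When mixing is no longer needed (but you are still in the room),
// pass null to cancel it and avoid extra fees:
// cloud.setMixTranscodingConfig(null);
```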

◆ setMotionMute()

abstract void setMotionMute ( boolean  motionMute)
abstract

Mute animated sticker

Deprecated:
This API is not recommended after v6.9. Please use getBeautyManager instead.

◆ setNetEnv()

static native void setNetEnv ( int  env)
static

Set TRTC backend cluster (for use by Tencent Cloud R&D team only)

◆ setNetworkQosParam()

abstract void setNetworkQosParam ( TRTCCloudDef.TRTCNetworkQosParam  param)
abstract

Set network quality control parameters

This setting determines the quality control policy in a poor network environment, such as "image quality preferred" or "smoothness preferred".

Parameters
param: It is used to set relevant parameters for network quality control. For details, please refer to TRTCNetworkQosParam.

◆ setNoseSlimLevel()

abstract void setNoseSlimLevel ( int  noseSlimLevel)
abstract

Set the strength of nose slimming filter

Deprecated:
This API is not recommended after v6.9. Please use getBeautyManager instead.

◆ setPriorRemoteVideoStreamType()

abstract int setPriorRemoteVideoStreamType ( int  streamType)
abstract

Specify whether to view the big or small image

Deprecated:
This API is not recommended after v8.0. Please use startRemoteView:streamType:view: instead.

◆ setRemoteAudioVolume()

abstract void setRemoteAudioVolume ( String  userId,
int  volume 
)
abstract

Set the audio playback volume of remote user

You can mute the audio of a remote user through setRemoteAudioVolume(userId, 0).

Parameters
userId: ID of the specified remote user
volume: Volume. 100 is the original volume. Value range: [0,150]. Default value: 100
Attention
If 100 is still not loud enough for you, you can set the volume to up to 150, but there may be side effects.

◆ setRemoteRenderParams()

abstract void setRemoteRenderParams ( String  userId,
int  streamType,
TRTCCloudDef.TRTCRenderParams  renderParams 
)
abstract

Set the rendering mode of remote video image

The parameters that can be set include video image rotation angle, fill mode, and mirror mode.

Parameters
userId: ID of the specified remote user
streamType: It can be set to the primary stream image (TRTCVideoStreamTypeBig) or substream image (TRTCVideoStreamTypeSub).
renderParams: Video image rendering parameters. For more information, please see TRTCRenderParams.

◆ setRemoteSubStreamViewFillMode()

abstract void setRemoteSubStreamViewFillMode ( String  userId,
int  mode 
)
abstract

Set the fill mode of substream image

Deprecated:
This API is not recommended after v8.0. Please use setRemoteRenderParams:streamType:params: instead.

◆ setRemoteSubStreamViewRotation()

abstract void setRemoteSubStreamViewRotation ( final String  userId,
final int  rotation 
)
abstract

Set the clockwise rotation angle of substream image

Deprecated:
This API is not recommended after v8.0. Please use setRemoteRenderParams:streamType:params: instead.

◆ setRemoteVideoRenderListener()

abstract int setRemoteVideoRenderListener ( String  userId,
int  pixelFormat,
int  bufferType,
TRTCCloudListener.TRTCVideoRenderListener  listener 
)
abstract

Set the callback of custom rendering for remote video

After this callback is set, the SDK will skip its own rendering process and call back the captured data. Therefore, you need to complete image rendering on your own.

  • pixelFormat specifies the format of the data called back. Currently, Texture2D, I420, and RGBA formats are supported.
  • bufferType specifies the buffer type. BYTE_BUFFER is suitable for the JNI layer, while BYTE_ARRAY can be used in direct operations at the Java layer.

For more information, please see Custom Capturing and Rendering.

Attention
Before this API is called, startRemoteView needs to be called to get the video stream of the remote user (view can be set to null); otherwise, there will be no data called back.
Parameters
userId: ID of the specified remote user
pixelFormat: Specify the format of the pixel called back
bufferType: Specify the video data structure type
listener: Listener for custom rendering
Returns
0: success; values smaller than 0: error

◆ setRemoteVideoStreamType()

abstract int setRemoteVideoStreamType ( String  userId,
int  streamType 
)
abstract

Switch the big/small image of specified remote user

After an anchor in a room enables dual-channel encoding, the video image that other users in the room subscribe to through startRemoteView will be HD big image by default. You can use this API to select whether the image subscribed to is the big image or small image. The API can take effect before or after startRemoteView is called.

Attention
To implement this feature, the target user must have enabled the dual-channel encoding mode through enableEncSmallVideoStream; otherwise, this API will not work.
Parameters
userId: ID of the specified remote user
streamType: Video stream type, i.e., big image or small image. Default value: big image

◆ setRemoteViewFillMode()

abstract void setRemoteViewFillMode ( String  userId,
int  mode 
)
abstract

Set the rendering mode of remote image

Deprecated:
This API is not recommended after v8.0. Please use setRemoteRenderParams:streamType:params: instead.

◆ setRemoteViewRotation()

abstract void setRemoteViewRotation ( String  userId,
int  rotation 
)
abstract

Set the clockwise rotation angle of remote image

Deprecated:
This API is not recommended after v8.0. Please use setRemoteRenderParams:streamType:params: instead.

◆ setReverbType()

abstract void setReverbType ( int  reverbType)
abstract

Set reverb effect

Deprecated:
This API is not recommended after v7.3. Please use TXAudioEffectManager#setVoiceReverbType instead.

◆ setSubStreamEncoderParam()

abstract void setSubStreamEncoderParam ( TRTCCloudDef.TRTCVideoEncParam  param)
abstract

Set the video encoding parameters of screen sharing (i.e., substream) (for desktop systems only)

This API can set the image quality of screen sharing (i.e., the substream) viewed by remote users, which is also the image quality of screen sharing in on-cloud recording files. Please note the difference between this API and setVideoEncoderParam, which controls the encoding of the primary stream (generally the camera image).

Parameters
param: Substream encoding parameters. For more information, please see TRTCVideoEncParam.
Attention
Even if you use the primary stream to transfer screen sharing data (set type=TRTCVideoStreamTypeBig when calling startScreenCapture), you still need to call the setSubStreamEncoderParam API instead of the setVideoEncoderParam API to set the screen sharing encoding parameters.

◆ setSystemVolumeType()

abstract void setSystemVolumeType ( int  type)
abstract

Setting the system volume type (for mobile OS)

Deprecated:
This API is not recommended after v8.0. Please use the setSystemVolumeType API in TXDeviceManager instead.

◆ setVideoEncoderMirror()

abstract void setVideoEncoderMirror ( boolean  mirror)
abstract

Set the mirror mode of image output by encoder

This setting does not affect the mirror mode of the local video image, but affects the mirror mode of the image viewed by other users in the room (and on-cloud recording files).

Parameters
mirror: Whether to enable remote mirror mode. true: yes; false: no. Default value: false

◆ setVideoEncoderParam()

abstract void setVideoEncoderParam ( TRTCCloudDef.TRTCVideoEncParam  param)
abstract

Set the encoding parameters of video encoder

This setting can determine the quality of image viewed by remote users, which is also the image quality of on-cloud recording files.

Parameters
param: It is used to set relevant parameters for the video encoder. For more information, please see TRTCVideoEncParam.

◆ setVideoEncoderRotation()

abstract void setVideoEncoderRotation ( int  rotation)
abstract

Set the direction of image output by video encoder

This setting does not affect the preview direction of the local video image, but affects the direction of the image viewed by other users in the room (and on-cloud recording files).

When a phone or tablet is rotated upside down, as the capturing direction of the camera does not change, the video image viewed by other users in the room will become upside-down. In this case, you can call this API to rotate the image encoded by the SDK by 180 degrees, so that other users in the room can view the image in the normal direction.

If you want to achieve this user-friendly interactive experience, we recommend you directly call setGSensorMode to implement smarter direction adaptation, with no need to call this API manually.

Parameters
rotation: Currently, rotation angles of 0 and 180 degrees are supported. Default value: TRTCVideoRotation_0 (no rotation)

◆ setVideoMuteImage()

abstract void setVideoMuteImage ( Bitmap  image,
int  fps 
)
abstract

Set placeholder image during local video pause

When you call muteLocalVideo(true) to pause the local video image, you can set a placeholder image by calling this API. Then, other users in the room will see this image instead of a black screen.

Parameters
image: Placeholder image. A null value means that no more video stream data will be sent after muteLocalVideo. Default value: null
fps: Frame rate of the placeholder image. Minimum value: 5. Maximum value: 10. Default value: 5

◆ setVoiceChangerType()

abstract boolean setVoiceChangerType ( int  voiceChangerType)
abstract

Set voice changing type

Deprecated:
This API is not recommended after v7.3. Please use TXAudioEffectManager#setVoiceChangerType instead.

◆ setWatermark()

abstract void setWatermark ( Bitmap  image,
int  streamType,
float  x,
float  y,
float  width 
)
abstract

Add watermark

The watermark position is determined by the x, y, and width parameters, which together with an automatically calculated height describe a normalized rectangle:

  • x: X coordinate of watermark, which is a floating-point number between 0 and 1.
  • y: Y coordinate of watermark, which is a floating-point number between 0 and 1.
  • width: width of watermark, which is a floating-point number between 0 and 1.
  • height: it does not need to be set. The SDK will automatically calculate it according to the watermark image's aspect ratio.

Sample parameter: if the encoding resolution of the current video is 540x960 and the parameters are set to x = 0.1, y = 0.1, and width = 0.2, then the coordinates of the top-left point of the watermark will be (540 * 0.1, 960 * 0.1), i.e., (54, 96); the watermark width will be 540 * 0.2 = 108 px; and the watermark height will be calculated automatically by the SDK based on the watermark image's aspect ratio.

Parameters
image: Watermark image, which must be a PNG image with transparent background
streamType: Specify for which image to set the watermark. For more information, please see TRTCVideoStreamType.
x, y, width: Unified coordinates of the watermark relative to the encoding resolution. Value range: 0–1.
Attention
If you want to set watermarks for both the primary image (generally for the camera) and the substream image (generally for screen sharing), you need to call this API twice with streamType set to different values.
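The normalized-to-pixel conversion in the sample above can be sketched as follows. The class name WatermarkMath is illustrative, not part of the SDK.

```java
// Converts normalized watermark parameters (x, y, width in 0..1) into
// pixel coordinates at a given encoding resolution, reproducing the
// 540x960 sample from the documentation above.
public class WatermarkMath {

    // Returns {leftPx, topPx, widthPx}; the height is left to the SDK,
    // which derives it from the watermark image's aspect ratio.
    static int[] toPixels(float x, float y, float width,
                          int encWidth, int encHeight) {
        return new int[] {
            Math.round(encWidth * x),
            Math.round(encHeight * y),
            Math.round(encWidth * width)
        };
    }

    public static void main(String[] args) {
        int[] px = toPixels(0.1f, 0.1f, 0.2f, 540, 960);
        System.out.println("top-left (" + px[0] + ", " + px[1]
                + "), width " + px[2] + " px");
    }
}
```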

◆ setZoom()

abstract void setZoom ( int  distance)
abstract

Set camera zoom ratio (focal length)

Deprecated:
This API is not recommended after v8.0. Please use the setCameraZoomRatio API in TXDeviceManager instead.

◆ sharedInstance()

static TRTCCloud sharedInstance ( Context  context)
inlinestatic

Create TRTCCloud instance (singleton mode)

◆ showDebugView()

abstract void showDebugView ( int  showType)
abstract

Display dashboard

"Dashboard" is a semi-transparent floating layer for debugging information on top of the video rendering control. It is used to display audio/video information and event information to facilitate integration and debugging.

Parameters
showType: 0: does not display; 1: displays lite edition (only audio/video information); 2: displays full edition (audio/video information and event information).

◆ snapshotVideo()

abstract void snapshotVideo ( String  userId,
int  streamType,
TRTCCloudListener.TRTCSnapshotListener  listener 
)
abstract

Screencapture video

You can use this API to screencapture the local video image or the primary stream image and substream (screen sharing) image of a remote user.

Parameters
userId: User ID. A null value indicates to screencapture the local video.
streamType: Video stream type, which can be the primary stream image (TRTCVideoStreamTypeBig, generally for the camera) or substream image (TRTCVideoStreamTypeSub, generally for screen sharing)
listener: Callback of the screencapture result
Attention
On Windows, only video image from the TRTCSnapshotSourceTypeStream source can be screencaptured currently.

◆ startAudioRecording()

abstract int startAudioRecording ( TRTCCloudDef.TRTCAudioRecordingParams  param)
abstract

Start audio recording

After you call this API, the SDK will selectively record local and remote audio streams (such as local audio, remote audio, background music, and sound effects) into a local file. This API works when called either before or after room entry. If a recording task has not been stopped through stopAudioRecording before room exit, it will be automatically stopped after room exit.

Parameters
param: Recording parameter. For more information, please see TRTCAudioRecordingParams
Returns
0: success; -1: audio recording has been started; -2: failed to create file or directory; -3: the audio format of the specified file extension is not supported
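The return codes listed above can be mapped to readable messages for error handling. The helper class RecordingResult is illustrative, not part of the SDK.

```java
// Maps startAudioRecording return codes (documented above) to messages.
public class RecordingResult {

    static String describe(int code) {
        switch (code) {
            case 0:  return "success";
            case -1: return "audio recording has already been started";
            case -2: return "failed to create the file or directory";
            case -3: return "the audio format of the file extension is not supported";
            default: return "unknown error: " + code;
        }
    }

    public static void main(String[] args) {
        // e.g. int code = cloud.startAudioRecording(param);
        int code = -2;
        System.out.println("startAudioRecording: " + describe(code));
    }
}
```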

◆ startLocalAudio() [1/2]

abstract void startLocalAudio ( )
abstract

Enable local audio capturing and publishing

Deprecated:
This API is not recommended after v8.0. Please use startLocalAudio:(quality) instead.

◆ startLocalAudio() [2/2]

abstract void startLocalAudio ( int  quality)
abstract

Enable local audio capturing and publishing

The SDK does not enable the mic by default. When a user wants to publish the local audio, the user needs to call this API to enable mic capturing and encode and publish the audio to the current room. After local audio capturing and publishing is enabled, other users in the room will receive the onUserAudioAvailable(userId, true) notification.

Parameters
quality: Sound quality
  • TRTCAudioQualitySpeech - Smooth: sample rate: 16 kHz; mono channel; audio bitrate: 16 Kbps. This is suitable for audio call scenarios, such as online meeting and audio call.
  • TRTCAudioQualityDefault - Default: sample rate: 48 kHz; mono channel; audio bitrate: 50 Kbps. This is the default sound quality of the SDK and recommended if there are no special requirements.
  • TRTCAudioQualityMusic - HD: sample rate: 48 kHz; dual channel + full band; audio bitrate: 128 Kbps. This is suitable for scenarios where Hi-Fi music transfer is required, such as online karaoke and music live streaming.
Attention
This API will check the mic permission. If the current application does not have permission to use the mic, the SDK will automatically ask the user to grant the mic permission.
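A usage sketch under the assumption of a music live streaming scenario; the scene constant and call order are illustrative.

```java
// Enter the room first, then enable mic capturing with a quality
// profile that matches the scenario (HD quality for music here).
cloud.enterRoom(params, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
cloud.startLocalAudio(TRTCCloudDef.TRTC_AUDIO_QUALITY_MUSIC);

// Other users in the room now receive onUserAudioAvailable(userId, true).
```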

◆ startLocalPreview()

abstract void startLocalPreview ( boolean  frontCamera,
TXCloudVideoView  view 
)
abstract

Enable the preview image of local camera (mobile)

If this API is called before enterRoom, the SDK will only enable the camera and wait until enterRoom is called before starting push. If it is called after enterRoom, the SDK will enable the camera and automatically start pushing the video stream. When the first camera video frame starts to be rendered, you will receive the onCameraDidReady callback in TRTCCloudListener.

Parameters
frontCamera: true: front camera; false: rear camera
view: Control that carries the video image
Attention
If you want to preview the camera image and adjust the beauty filter parameters through BeautyManager before going live, you can:
  • Scheme 1. Call startLocalPreview before calling enterRoom
  • Scheme 2. Call startLocalPreview and muteLocalVideo(true) after calling enterRoom
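Scheme 1 above can be sketched as follows. The view ID and beauty level are illustrative assumptions.

```java
// Preview the camera and tune beauty filters before going live.
TXCloudVideoView localView = findViewById(R.id.local_view); // example view ID
cloud.startLocalPreview(true, localView);      // true: front camera
cloud.getBeautyManager().setBeautyLevel(5);    // adjust while previewing

// ... when the user taps "Go live", enter the room to start pushing:
cloud.enterRoom(params, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
```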

◆ startLocalRecording()

abstract void startLocalRecording ( TRTCCloudDef.TRTCLocalRecordingParams  params)
abstract

Start local media recording

This API records the audio/video content during live streaming into a local file.

Parameters
params: Recording parameter. For more information, please see TRTCLocalRecordingParams

◆ startPublishCDNStream()

abstract void startPublishCDNStream ( TRTCCloudDef.TRTCPublishCDNParam  param)
abstract

Start publishing audio/video streams to non-Tencent Cloud CDN

This API is similar to the startPublishing API. The difference is that startPublishing can only publish audio/video streams to Tencent Cloud CDN, while this API can relay streams to live streaming CDN services of other cloud providers.

Parameters
param: CDN relaying parameter. For more information, please see TRTCPublishCDNParam
Attention
  • Using the startPublishing API to publish audio/video streams to Tencent Cloud CSS CDN does not incur additional fees.
  • Using the startPublishCDNStream API to publish audio/video streams to non-Tencent Cloud CDN incurs additional relaying bandwidth fees.

◆ startPublishing()

abstract void startPublishing ( final String  streamId,
final int  streamType 
)
abstract

Start publishing audio/video streams to Tencent Cloud CSS CDN

This API sends a command to the TRTC server, requesting it to relay the current user's audio/video streams to CSS CDN. You can set the StreamId of the live stream through the streamId parameter, so as to specify the playback address of the user's audio/video streams on CSS CDN.

For example, if you specify the current user's live stream ID as user_stream_001 through this API, then the corresponding CDN playback address is "http://yourdomain/live/user_stream_001.flv", where yourdomain is your playback domain name with an ICP filing. You can configure your playback domain name in the CSS console; Tencent Cloud does not provide a default playback domain name.

You can also specify the streamId when setting the TRTCParams parameter of enterRoom, which is the recommended approach.

Parameters
streamId: Custom stream ID.
streamType: Only TRTCVideoStreamTypeBig and TRTCVideoStreamTypeSub are supported.
Attention
You need to enable the "Enable Relayed Push" option on the "Function Configuration" page in the TRTC console in advance.
  • If you select "Specified stream for relayed push", you can use this API to push the corresponding audio/video stream to Tencent Cloud CDN and specify the entered stream ID.
  • If you select "Global auto-relayed push", you can use this API to adjust the default stream ID.
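The playback address pattern described above can be sketched as a small helper. "yourdomain" is a placeholder for your own ICP-filed playback domain, and the class name PlaybackUrl is illustrative.

```java
// Builds the CSS CDN HTTP-FLV playback address for a given stream ID,
// following the "http://yourdomain/live/{streamId}.flv" pattern above.
public class PlaybackUrl {

    static String flvUrl(String domain, String streamId) {
        return "http://" + domain + "/live/" + streamId + ".flv";
    }

    public static void main(String[] args) {
        // e.g. after cloud.startPublishing("user_stream_001", ...):
        System.out.println(flvUrl("yourdomain", "user_stream_001"));
    }
}
```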

◆ startRemoteSubStreamView()

abstract void startRemoteSubStreamView ( String  userId,
TXCloudVideoView  view 
)
abstract

Start displaying the substream image of remote user

Deprecated:
This API is not recommended after v8.0. Please use startRemoteView:streamType:view instead.

◆ startRemoteView() [1/2]

abstract void startRemoteView ( String  userId,
int  streamType,
TXCloudVideoView  view 
)
abstract

Subscribe to remote user's video stream and bind video rendering control

Calling this API allows the SDK to pull the video stream of the specified userId and render it to the rendering control specified by the view parameter. You can set the display mode of the video image through setRemoteRenderParams.

  • If you already know the userId of a user who has a video stream in the room, you can directly call startRemoteView to subscribe to the user's video image.
  • If you don't know which users in the room are publishing video streams, you can wait for the notification from onUserVideoAvailable after enterRoom.

Calling this API only starts pulling the video stream, and the image needs to be loaded and buffered at this time. After the buffering is completed, you will receive a notification from onFirstVideoFrame.

Parameters
userId: ID of the specified remote user
streamType: Video stream type of the specified userId, such as the big image (TRTCVideoStreamTypeBig), small image (TRTCVideoStreamTypeSmall), or substream image (TRTCVideoStreamTypeSub)
view: Rendering control that carries the video image
Attention
The following requires your attention:
  1. The SDK supports watching the big image and substream image or small image and substream image of a userId at the same time, but does not support watching the big image and small image at the same time.
  2. Only when the specified userId enables dual-channel encoding through enableEncSmallVideoStream can the user's small image be viewed.
  3. If the small image of the specified userId does not exist, the SDK will switch to the big image of the user by default.
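A subscription sketch driven by the onUserVideoAvailable notification mentioned above. remoteViewFor() is a hypothetical helper that returns a TXCloudVideoView for the given user.

```java
// Subscribe to each remote user's big image as soon as the SDK reports
// that the user has started pushing video, and unsubscribe when it stops.
cloud.setListener(new TRTCCloudListener() {
    @Override
    public void onUserVideoAvailable(String userId, boolean available) {
        if (available) {
            cloud.startRemoteView(userId,
                    TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG,
                    remoteViewFor(userId)); // hypothetical view lookup
        } else {
            cloud.stopRemoteView(userId,
                    TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG);
        }
    }
});
```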

◆ startRemoteView() [2/2]

abstract void startRemoteView ( String  userId,
TXCloudVideoView  view 
)
abstract

Start displaying remote video image

Deprecated:
This API is not recommended after v8.0. Please use startRemoteView:streamType:view instead.

◆ startScreenCapture() [1/2]

abstract void startScreenCapture ( int  streamType,
TRTCCloudDef.TRTCVideoEncParam  encParams,
TRTCCloudDef.TRTCScreenShareParams  shareParams 
)
abstract

Start screen sharing

This API supports capturing the screen of the entire Android system, which can implement system-wide screen sharing similar to VooV Meeting.

Recommended video encoding parameters for screen sharing on Android (TRTCVideoEncParam):

  • Resolution (videoResolution): 1280x720
  • Frame rate (videoFps): 10 fps
  • Bitrate (videoBitrate): 1200 Kbps
  • Resolution adaption (enableAdjustRes): false
Parameters
streamType: Channel used for screen sharing, which can be the primary stream (TRTCVideoStreamTypeBig) or substream (TRTCVideoStreamTypeSub)
encParams: Encoding parameters. For more information, please see TRTCCloudDef#TRTCVideoEncParam. If encParams is set to null, the SDK will automatically use the previously set encoding parameters.
shareParams: For more information, please see TRTCCloudDef#TRTCScreenShareParams. You can use the floatingView parameter to pop up a floating window (you can also use Android's WindowManager parameter to configure automatic pop-up).

◆ startScreenCapture() [2/2]

abstract void startScreenCapture ( TRTCCloudDef.TRTCVideoEncParam  encParams,
TRTCCloudDef.TRTCScreenShareParams  shareParams 
)
abstract

Start screen sharing

Deprecated:
This API is not recommended after v8.6. Please use startScreenCapture(streamType, encParams, shareParams) instead.

◆ startSpeedTest()

abstract void startSpeedTest ( int  sdkAppId,
String  userId,
String  userSig 
)
abstract

Start network speed test (used before room entry)

As TRTC involves real-time audio/video transfer services very sensitive to the transfer latency, it has high requirements for network stability. For most users, if their network environments are below TRTC's minimum requirements, direct room entry will cause a very poor user experience. The recommended approach is to perform the network speed test before the user enters the room, so that a reminder can be displayed on the UI to prompt the user to switch to a better network (such as from Wi-Fi to 4G) first before room entry if the user's network is poor.

Attention
  1. The speed test will consume a certain amount of traffic and generate a small amount of extra traffic fees as a result.
  2. Please perform the speed test before room entry, because if performed after room entry, the test will affect the normal audio/video transfer, and its result will be inaccurate due to interference in the room.
Parameters
sdkAppId: Application ID. For more information, please see TRTCParams.
userId: User ID. For more information, please see TRTCParams.
userSig: User signature. For more information, please see TRTCParams.

◆ stopAllAudioEffects()

abstract void stopAllAudioEffects ( )
abstract

Stop all sound effects

Deprecated:
This API is not recommended after v7.3. Please use TXAudioEffectManager#stopPlayMusic instead.

◆ stopAllRemoteView()

abstract void stopAllRemoteView ( )
abstract

Stop subscribing to all remote users' video streams and release all rendering resources

Calling this API will cause the SDK to stop receiving all remote video streams and release all decoding and rendering resources.

Attention
If a substream image (screen sharing) is being displayed, it will also be stopped.

◆ stopAudioEffect()

abstract void stopAudioEffect ( int  effectId)
abstract

Stop sound effect

Deprecated:
This API is not recommended after v7.3. Please use TXAudioEffectManager#stopPlayMusic instead.

◆ stopAudioRecording()

abstract void stopAudioRecording ( )
abstract

Stop audio recording

If a recording task has not been stopped through this API before room exit, it will be automatically stopped after room exit.

◆ stopBGM()

abstract void stopBGM ( )
abstract

Stop background music

Deprecated:
This API is not recommended after v7.3. Please use getAudioEffectManager instead.

◆ stopLocalAudio()

abstract void stopLocalAudio ( )
abstract

Stop local audio capturing and publishing

After local audio capturing and publishing is stopped, other users in the room will receive the onUserAudioAvailable(userId, false) notification.

◆ stopLocalPreview()

abstract void stopLocalPreview ( )
abstract

Stop camera preview

◆ stopLocalRecording()

abstract void stopLocalRecording ( )
abstract

Stop local media recording

If a recording task has not been stopped through this API before room exit, it will be automatically stopped after room exit.

◆ stopPublishCDNStream()

abstract void stopPublishCDNStream ( )
abstract

Stop publishing audio/video streams to non-Tencent Cloud CDN

◆ stopPublishing()

abstract void stopPublishing ( )
abstract

Stop publishing audio/video streams to Tencent Cloud CSS CDN

◆ stopRemoteSubStreamView()

abstract void stopRemoteSubStreamView ( String  userId)
abstract

Stop displaying the substream image of remote user

Deprecated:
This API is not recommended after v8.0. Please use stopRemoteView(userId, streamType) instead.

◆ stopRemoteView() [1/2]

abstract void stopRemoteView ( String  userId)
abstract

Stop displaying the remote video image and stop pulling the remote user's video data stream

Deprecated:
This API is not recommended after v8.0. Please use stopRemoteView(userId, streamType) instead.

◆ stopRemoteView() [2/2]

abstract void stopRemoteView ( String  userId,
int  streamType 
)
abstract

Stop subscribing to remote user's video stream and release rendering control

Calling this API will cause the SDK to stop receiving the user's video stream and release the decoding and rendering resources for the stream.

Parameters
userId: ID of the specified remote user
streamType: Video stream type of the specified userId to stop watching, such as TRTCVideoStreamTypeBig or TRTCVideoStreamTypeSub
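
For illustration, a minimal sketch (the user ID is a placeholder, and the constant name follows the Android SDK's TRTCCloudDef naming, which is an assumption here):

```java
// Sketch: stop watching only the screen-sharing substream of one user,
// while continuing to receive their main camera stream.
trtcCloud.stopRemoteView("teacher_01", TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_SUB);
```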

◆ stopScreenCapture()

abstract void stopScreenCapture ( )
abstract

Stop screen sharing

◆ stopSpeedTest()

abstract void stopSpeedTest ( )
abstract

Stop network speed test

◆ switchCamera()

abstract void switchCamera ( )
abstract

Switch camera

Deprecated:
This API is not recommended after v8.0. Please use the switchCamera API in TXDeviceManager instead.

◆ switchRole()

abstract void switchRole ( int  role)
abstract

Switch role

This API is used to switch the user role between "anchor" and "audience". Because video live rooms and audio chat rooms need to support up to 100,000 concurrent online audience members, the rule "only anchors can publish their audio/video streams" has been set. Therefore, when users want to publish their streams (so that they can interact with anchors), they need to switch their role to "anchor" first. You can use the role field in TRTCParams to specify the user role in advance during room entry, or use the switchRole API to switch roles after room entry.

Parameters
role: Target role, which is "anchor" by default
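
As a hedged sketch of the round trip (the constant names assume the role constants defined in TRTCCloudDef):

```java
// Sketch: an audience member is promoted to anchor so they can publish,
// then demoted back to audience when the interaction ends.
trtcCloud.switchRole(TRTCCloudDef.TRTCRoleAnchor);   // may now publish audio/video
// ... publish streams, interact with other anchors ...
trtcCloud.switchRole(TRTCCloudDef.TRTCRoleAudience); // stops publishing, watch only
```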

◆ switchRoom()

abstract void switchRoom ( final TRTCCloudDef.TRTCSwitchRoomConfig  config)
abstract

Switch room

This API is used to quickly switch a user from one room to another.

  • If the user's role is "audience", calling this API is equivalent to exitRoom (current room) + enterRoom (new room).
  • If the user's role is "anchor", the API will retain the current audio/video publishing status while switching the room; therefore, camera preview and sound capturing will not be interrupted during the room switch.

This API is suitable for online education scenarios where a supervising teacher needs to switch quickly across multiple rooms. In such scenarios, switchRoom delivers better smoothness and requires less code than exitRoom + enterRoom. The API call result will be returned through the onSwitchRoom(errCode, errMsg) callback in TRTCCloudListener.

Parameters
config: Room parameters. For more information, please see TRTCSwitchRoomConfig.

Attention
For compatibility with legacy versions of the SDK, the config parameter contains both a roomId and a strRoomId field. Pay special attention to the following when specifying these two parameters:
  1. If you decide to use strRoomId, set roomId to 0. If both are specified, roomId will be used.
  2. All rooms must consistently use either strRoomId or roomId. They cannot be mixed; otherwise, many unexpected bugs may occur.
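
For illustration, a minimal sketch of a fast room switch (the field values are placeholders, and whether additional fields such as a fresh userSig are required depends on your configuration):

```java
// Sketch: a supervising teacher hops to another numeric room.
TRTCCloudDef.TRTCSwitchRoomConfig config = new TRTCCloudDef.TRTCSwitchRoomConfig();
config.roomId = 9002;   // numeric room ID; set to 0 if strRoomId is used instead
trtcCloud.switchRoom(config);
// The result is delivered asynchronously via onSwitchRoom(errCode, errMsg).
```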

◆ updateLocalView()

abstract void updateLocalView ( TXCloudVideoView  view)
abstract

Update the preview image of local camera

◆ updateRemoteView()

abstract void updateRemoteView ( String  userId,
int  streamType,
TXCloudVideoView  view 
)
abstract

Update remote user's video rendering control

This API can be used to update the rendering control of the remote video image. It is often used in interactive scenarios where the display area needs to be switched.

Parameters
userId: ID of the specified remote user
streamType: Type of the stream for which to set the preview window (only TRTCVideoStreamTypeBig and TRTCVideoStreamTypeSub are supported)
view: Control that carries the video image
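
As a usage sketch (the view and user names are placeholders), switching a remote image from a thumbnail to a full-screen control might look like:

```java
// Sketch: the user taps a thumbnail, so the big stream is re-rendered into
// a full-screen TXCloudVideoView without re-subscribing to the stream.
trtcCloud.updateRemoteView("student_07",
        TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG,
        fullScreenVideoView);
```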