In the first article we covered the WebRTC basics and the theory behind it, while in the second one we moved from theory to practice and learned how to implement a WebRTC signaling server with Microsoft SignalR and a WebRTC web app with the JavaScript API in Angular 11. In this article we are adding another piece to the puzzle: the WebRTC native API on Android.

Why go native?

Even though WebRTC was initially created as a web-only technology that doesn't require developing and maintaining native apps for multiple systems, over the years many advantages of native use cases have been recognized, and WebRTC native APIs have been created as a result. By choosing to build a native WebRTC app, it is possible to:

  • Fully access all the native OS/hardware resources
  • Remove the browser limitations and incompatibilities
  • Integrate the WebRTC implementation in any convenient app environment
  • Design a fully optimized WebRTC experience for specific device(s)
  • Build a generally faster and smoother WebRTC experience

While going native with WebRTC seems to have mostly positive aspects, it's important to note that building such a solution usually takes more knowledge, time, and resources, and is therefore not recommended for basic WebRTC needs like a simple video or audio chat. The cases where a native WebRTC implementation is recommended include:

  • feature-specific applications (e.g. adding multiple video streams from several video sources to one WebRTC video call, running custom client-side video encoding/decoding, etc.),
  • performance-sensitive applications (e.g. delay-sensitive, video-quality-sensitive, sound-quality-sensitive, etc.),
  • conference call applications (for multiple participants connected at once).

Creating the native Android app

WebRTC and Android

Before we start, it's important to note that the same WebRTC signaling server and server-side algorithm explained in the last article will be used as part of the final solution here, so make sure to read that article before going forward.

For the development of the Android app in this article we will use Android Studio and Java programming language. The development process will consist of five main steps:

1)      Android (Java) project

Create a new Android (Java) project in Android Studio and choose the desired minimum SDK API (e.g. API 19 – Android 4.4 KitKat). Once the new project is created, open the app's "build.gradle" file and add the following library dependencies:

implementation 'com.google.code.gson:gson:2.8.5'

implementation 'org.webrtc:google-webrtc:1.0.25821'

implementation 'com.microsoft.signalr:signalr:3.1.6'

2)      SignalR service

Similar to the way it has been done in the Angular code, the Android implementation also needs a SignalR service which starts and maintains the connection to the signaling server and is also in charge of sending and receiving SignalR requests. The important parts of Android's SignalR service worth explaining are the following:

  • Service's constructor – initiates the connection with the signaling server and applies the JWT security token if necessary
  • "connect" method – starts the connection to the signaling server
  • "define" method – defines handlers for incoming socket requests
  • "invoke" method – executes a new given socket request
  • "disconnect" method – stops the connection to the signaling server

The entire code for the Android SignalR service can be found here.
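To make the structure concrete, here is a minimal sketch of such a service built on the com.microsoft.signalr client. The class and method names mirror the list above, but the constructor parameters and the exact shape of the real service in the linked repository may differ:

```java
import com.microsoft.signalr.Action1;
import com.microsoft.signalr.HubConnection;
import com.microsoft.signalr.HubConnectionBuilder;
import io.reactivex.Single;

public class SignalRService {
    private final HubConnection hubConnection;

    // Constructor: creates the connection and applies the JWT token if one is given
    public SignalRService(String serverUrl, String jwtToken) {
        if (jwtToken != null) {
            hubConnection = HubConnectionBuilder.create(serverUrl)
                    .withAccessTokenProvider(Single.just(jwtToken))
                    .build();
        } else {
            hubConnection = HubConnectionBuilder.create(serverUrl).build();
        }
    }

    // "connect" – starts the connection (start() returns an RxJava Completable)
    public void connect() {
        hubConnection.start().blockingAwait();
    }

    // "define" – registers a handler for an incoming socket request
    public <T> void define(String target, Action1<T> handler, Class<T> payloadType) {
        hubConnection.on(target, handler, payloadType);
    }

    // "invoke" – sends a new socket request to the server
    public void invoke(String target, Object... args) {
        hubConnection.send(target, args);
    }

    // "disconnect" – stops the connection to the signaling server
    public void disconnect() {
        hubConnection.stop();
    }
}
```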

3)      Android camera capturer

An important part of the Android WebRTC app implementation is to create a camera capturer class which takes care of all camera-management-related functions, such as accessing the device's camera, starting the video recording, changing the resolution, changing the brightness/contrast, etc. The recommended way of developing this class is to extend the WebRTC library's Camera1Capturer base class (or Camera2Capturer, depending on the target SDK) and develop the needed custom features on top of that. A basic implementation of an Android camera capturer can be found here. For more advanced features we'd need to expand the capturer with Android camera2 API features, but that's a story for another time.
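As an illustration, a minimal capturer along these lines could extend Camera1Capturer as follows; the pickFrontCamera helper is an assumption for the example, not part of the library:

```java
import org.webrtc.Camera1Capturer;
import org.webrtc.Camera1Enumerator;
import org.webrtc.CameraEnumerator;

public class CustomCameraCapturer extends Camera1Capturer {

    public CustomCameraCapturer(String cameraName) {
        // captureToTexture = true keeps frames as textures and avoids extra copies;
        // null means no CameraEventsHandler is attached in this sketch
        super(cameraName, null, true);
    }

    // Hypothetical helper: returns the name of the first front-facing camera
    public static String pickFrontCamera() {
        CameraEnumerator enumerator = new Camera1Enumerator(false);
        for (String name : enumerator.getDeviceNames()) {
            if (enumerator.isFrontFacing(name)) {
                return name;
            }
        }
        return null;
    }
}
```

Custom features such as resolution changes would then be added as methods on this class, typically delegating to the base class's startCapture and changeCaptureFormat methods.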

4)      WebRTC algorithm implementation

The last and most important thing to create while developing a native WebRTC solution is of course the WebRTC algorithm itself, meaning the steps required for a WebRTC call to be established and audio/video data to be transferred. The first two steps (starting a connection to the signaling server and defining the signaling communication) are basically the same as in the JavaScript implementation, the only difference being the programming language, so make sure to read about them in that article. In contrast, the last step (getting the user media from the device) differs quite a lot; in the Android implementation it's required to:

  • Initialize the peer connection factory options
  • Create a peer connection factory instance
  • Create audio source and audio track instances
  • Create video source and video track instances
  • Initialize the Android camera capturer(s)
  • Start the camera capture with chosen resolution/frame rate

The code for all of those actions, as well as the general WebRTC algorithm implementation in Android, is available here.
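A rough sketch of those six media-related steps is shown below; the method name, the track IDs ("audio0"/"video0"), and the chosen resolution are assumptions for the example, not requirements of the library:

```java
import android.content.Context;
import org.webrtc.AudioSource;
import org.webrtc.AudioTrack;
import org.webrtc.Camera1Capturer;
import org.webrtc.EglBase;
import org.webrtc.MediaConstraints;
import org.webrtc.PeerConnectionFactory;
import org.webrtc.SurfaceTextureHelper;
import org.webrtc.VideoSource;
import org.webrtc.VideoTrack;

public class MediaSetup {

    public void setUpLocalMedia(Context appContext, Camera1Capturer capturer) {
        // 1) Initialize the peer connection factory options
        PeerConnectionFactory.initialize(
                PeerConnectionFactory.InitializationOptions.builder(appContext)
                        .createInitializationOptions());

        // 2) Create a peer connection factory instance
        PeerConnectionFactory factory = PeerConnectionFactory.builder()
                .setOptions(new PeerConnectionFactory.Options())
                .createPeerConnectionFactory();

        // 3) Create audio source and audio track instances
        AudioSource audioSource = factory.createAudioSource(new MediaConstraints());
        AudioTrack audioTrack = factory.createAudioTrack("audio0", audioSource);

        // 4) Create video source and video track instances
        VideoSource videoSource = factory.createVideoSource(false); // false = not a screencast
        VideoTrack videoTrack = factory.createVideoTrack("video0", videoSource);

        // 5) Initialize the Android camera capturer
        EglBase eglBase = EglBase.create();
        SurfaceTextureHelper helper =
                SurfaceTextureHelper.create("CaptureThread", eglBase.getEglBaseContext());
        capturer.initialize(helper, appContext, videoSource.getCapturerObserver());

        // 6) Start the camera capture with the chosen resolution/frame rate
        capturer.startCapture(1280, 720, 30);
    }
}
```

The resulting audio and video tracks are what later get attached to the peer connection so the remote side can receive them.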

5)      Security over JWT

Since we are using Microsoft SignalR as our signaling server solution, our native Android WebRTC app also inherits all the security features that SignalR brings to the table. In our case this includes authorization, which is done by fetching the JWT security token via a special AuthHub on the signaling server. Later, this token is applied when connecting to the signaling server through the access token provider inside the SignalR service. The fetching of the JWT token can be seen here, while the SignalR service can be seen here.
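Conceptually, wiring the fetched token into the connection could look like the sketch below; the hub URL and the fetchJwtToken helper standing in for the AuthHub call are assumptions for the example:

```java
import com.microsoft.signalr.HubConnection;
import com.microsoft.signalr.HubConnectionBuilder;
import io.reactivex.Single;

public class SecureConnectionFactory {

    // Hypothetical helper standing in for the AuthHub call that returns the JWT
    private String fetchJwtToken() {
        return "..."; // token obtained from the AuthHub on the signaling server
    }

    public HubConnection create(String hubUrl) {
        return HubConnectionBuilder.create(hubUrl)
                // The provider is invoked by the client when a token is needed,
                // so the freshly fetched JWT is attached to the connection
                .withAccessTokenProvider(Single.defer(() ->
                        Single.just(fetchJwtToken())))
                .build();
    }
}
```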

After each of the five steps has been implemented, a secure WebRTC connection can be established and real-time audio/video data transfer can begin! The complete Android project showcased in this article can be found here.


To develop a native WebRTC application, or maybe not?

In this article we've shown what's possible with the native Android WebRTC library and how to use it in a working solution with Microsoft SignalR. Whether to build a native WebRTC app or go for a standard web app depends entirely on project needs. However, avoiding the native approach because it's too demanding or costly shouldn't be a reason anymore: native libraries have evolved over the years and, as shown here, the implementation is just as simple and straightforward as the standard one.
