BioPass ID Face SDK Android
Latest Version
April 17, 2025 - [v5.0.2]
Quick Start Guide
First, you will need a license key to use the SDK. To get your license key, contact us through our website BioPass ID.
Check out our official documentation for more in-depth information on BioPass ID.
1. Prerequisites:
- Java 17 or higher
- Kotlin 1.9.0 or higher
- Gradle 8.6 or higher
- Android Gradle Plugin 8.4.0 or higher
- A device with a camera
- License key
- Internet connection is required to verify the license
Before proceeding, you should add the following dependencies in your app/build.gradle file:

dependencies {
    implementation 'com.biopassid:dlibwrapper:1.0.0'
    implementation 'com.google.mediapipe:tasks-vision:0.10.13'
    implementation 'com.google.mlkit:face-detection:16.1.6'
    implementation 'androidx.camera:camera-core:1.3.4'
    implementation 'androidx.camera:camera-camera2:1.3.4'
    implementation 'androidx.camera:camera-lifecycle:1.3.4'
    implementation 'androidx.camera:camera-view:1.3.4'
}
Then, in your settings.gradle file:

repositories {
    maven {
        url "https://packagecloud.io/biopassid/dlibwrapper/maven2"
    }
}
Change the minimum Android SDK version to 24 (or higher) in your app/build.gradle file:
minSdkVersion 24
2. Installation
With Gradle
The simplest way to install the plugin in your project is to add the following dependency to your app/build.gradle file:

dependencies {
    implementation 'com.biopassid:facesdk:5.0.2'
}
Then, in your settings.gradle file:

repositories {
    maven {
        url "https://packagecloud.io/biopassid/FaceSDKAndroid/maven2"
    }
}
With Local File
Another alternative is to download the AAR file and install the Face SDK locally. Here you can find the latest releases; after downloading, place the .aar file in any folder of your choice.
We will use Android Studio for the following steps:
- First, with your project open, go to File --> Project Structure --> Dependencies.
- Then in the Dependencies tab, select your app in the modules tab and click on the plus symbol to show the option to add a JAR/AAR dependency.
- In step 1, enter the AAR file path, and in step 2 select the implementation option.
- Rebuild your project and it should be ready to use.
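If you prefer to declare the dependency manually instead of using the Project Structure dialog, a minimal sketch would be the following; the libs folder and the file name facesdk-5.0.2.aar are assumptions, so adjust them to wherever you placed the downloaded AAR:

dependencies {
    // Assumed location and file name of the downloaded AAR
    implementation files('libs/facesdk-5.0.2.aar')
}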
3. How to use
By now you should have all the tools available to start using the plugin in your own project. Through FaceConfig, you can configure custom settings (such as colors and features). Below you will see some examples of how to use the SDK:
Basic example using face detection
For this example we will use basic facial detection.
activity_main
In the XML layout of your main activity:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/main"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<Button
android:id="@+id/btnCapture"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Capture Face"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
MainActivity
In your main activity:
package com.example.facedemo
import android.os.Bundle
import android.util.Log
import android.widget.Button
import androidx.appcompat.app.AppCompatActivity
import br.com.biopassid.facesdk.Face
import br.com.biopassid.facesdk.FaceCallback
import br.com.biopassid.facesdk.config.FaceConfig
import br.com.biopassid.facesdk.model.FaceAttributes
import br.com.biopassid.facesdk.model.FaceImage
class MainActivity : AppCompatActivity() {
    private lateinit var btnCapture: Button

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // Button in your xml layout responsible for calling the Face SDK
        btnCapture = findViewById(R.id.btnCapture)

        // Instantiate FaceConfig with your preferred settings
        val config = FaceConfig(licenseKey = "your-license-key")
        // If you want to use Liveness, uncomment this line
        //config.liveness.enabled = true

        // Define a FaceCallback to receive the image bitmap
        val callback = object : FaceCallback {
            override fun onFaceCapture(
                faceImage: FaceImage,
                faceAttributes: FaceAttributes? // Only available on Liveness
            ) {
                Log.d(TAG, "onFaceCapture: $faceImage")
                // Only available on Liveness
                Log.d(TAG, "onFaceCapture: $faceAttributes")
            }

            // Only available on Liveness
            override fun onFaceDetected(faceAttributes: FaceAttributes) {
                Log.d(TAG, "onFaceDetected: $faceAttributes")
            }
        }

        // Start Face capture
        btnCapture.setOnClickListener {
            Face.takeFace(this, config, callback)
        }
    }

    companion object {
        private const val TAG = "FaceDemo"
    }
}
Example using face detection and Retrofit to call the BioPass ID API
In this example, we use facial detection in liveness mode to perform automatic facial capture and send the captured image to the BioPass ID API. We use the Liveness operation from the Multibiometrics plan.
First, add Retrofit to the dependencies section of your app/build.gradle file:
dependencies {
    implementation 'com.squareup.retrofit2:retrofit:2.5.0'
    implementation 'com.squareup.retrofit2:converter-gson:2.5.0'
}
Additionally, in your AndroidManifest.xml file, add the Internet permission.
<!-- Required to fetch data from the internet. -->
<uses-permission android:name="android.permission.INTERNET" />
LivenessRequest
Create the LivenessRequest data class:
package com.example.facedemo
import com.google.gson.annotations.SerializedName
data class ImageData(
@SerializedName("Image") val image: String
)
data class LivenessRequest(
@SerializedName("Spoof") val spoof: ImageData
)
LivenessResponse
Create the LivenessResponse data class:
package com.example.facedemo
import com.google.gson.annotations.SerializedName
data class LivenessResponse(
@SerializedName("Success") val success: Boolean?,
@SerializedName("result") val result: String?,
@SerializedName("spoof") val spoof: Boolean?
)
BioPassIDApi
Here, you will need an API key to be able to make requests to the BioPass ID API. To get your API key contact us through our website BioPass ID.
Create the BioPassIDApi interface to make requests to the BioPass ID API:
package com.example.facedemo
import retrofit2.Call
import retrofit2.http.Body
import retrofit2.http.Headers
import retrofit2.http.POST
interface BioPassIDApi {
    @Headers("Content-Type: application/json", "Ocp-Apim-Subscription-Key: your-api-key")
    @POST("multibiometrics/v2/liveness")
    fun livenessDetection(@Body livenessRequest: LivenessRequest): Call<LivenessResponse>
}
Network
Create the Network class to make requests to the BioPass ID API:
package com.example.facedemo
import retrofit2.Retrofit
import retrofit2.converter.gson.GsonConverterFactory
class Network {
    companion object {
        /** Returns a Retrofit client instance for requests */
        fun getRetrofitInstance(): BioPassIDApi {
            return Retrofit.Builder()
                .baseUrl("https://api.biopassid.com/")
                .addConverterFactory(GsonConverterFactory.create())
                .build()
                .create(BioPassIDApi::class.java)
        }
    }
}
activity_main
In the XML layout of your main activity:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/main"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<Button
android:id="@+id/btnCapture"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Capture Face"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
MainActivity
In your main activity:
package com.example.facedemo
import android.os.Bundle
import android.util.Log
import android.widget.Button
import androidx.appcompat.app.AppCompatActivity
import br.com.biopassid.facesdk.Face
import br.com.biopassid.facesdk.FaceCallback
import br.com.biopassid.facesdk.config.FaceConfig
import br.com.biopassid.facesdk.model.FaceAttributes
import br.com.biopassid.facesdk.model.FaceImage
import retrofit2.Call
import retrofit2.Callback
import retrofit2.Response
class MainActivity : AppCompatActivity() {
    private lateinit var btnCapture: Button

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // Button in your xml layout responsible for calling the Face SDK
        btnCapture = findViewById(R.id.btnCapture)

        // Instantiate FaceConfig with your preferred settings
        val config = FaceConfig(licenseKey = "your-license-key")
        // Enable Liveness mode
        config.liveness.enabled = true

        // Define a FaceCallback to receive the image bitmap
        val callback = object : FaceCallback {
            override fun onFaceCapture(
                faceImage: FaceImage,
                faceAttributes: FaceAttributes? // Only available on Liveness
            ) {
                // Instantiate Liveness request
                val livenessRequest = LivenessRequest(ImageData(faceImage.imageBase64))

                // Get retrofit
                val retrofit = Network.getRetrofitInstance()

                // Execute request to the BioPass ID API
                val call = retrofit.livenessDetection(livenessRequest)

                // Handle API response
                call.enqueue(object : Callback<LivenessResponse> {
                    override fun onResponse(
                        call: Call<LivenessResponse>,
                        response: Response<LivenessResponse>
                    ) {
                        Log.d(TAG, "code: ${response.code()}")
                        if (response.isSuccessful) {
                            Log.d(TAG, "body: ${response.body()}")
                        } else {
                            Log.d(TAG, "message: ${response.message()}")
                        }
                    }

                    override fun onFailure(call: Call<LivenessResponse>, t: Throwable) {
                        Log.e(TAG, "Error trying to call liveness.", t)
                    }
                })
            }

            // Only available on Liveness
            override fun onFaceDetected(faceAttributes: FaceAttributes) {
                Log.d(TAG, "onFaceDetected: $faceAttributes")
            }
        }

        // Start Face capture
        btnCapture.setOnClickListener {
            Face.takeFace(this, config, callback)
        }
    }

    companion object {
        private const val TAG = "FaceDemo"
    }
}
Example using face recognition and local DB to save templates
Extract data class
Create the Extract data class:
package com.example.facedemo
data class Extract(var id: Int = -1, var image: ByteArray, var template: ByteArray)
DBHelper
Create the DBHelper class to access local DB and create the extract table:
package com.example.facedemo
import android.content.Context
import android.database.sqlite.SQLiteDatabase
import android.database.sqlite.SQLiteOpenHelper
class DBHelper(context: Context) : SQLiteOpenHelper(context, "extract.db", null, 1) {
    override fun onCreate(db: SQLiteDatabase?) {
        val sql = "create table if not exists extract (" +
                "id integer primary key autoincrement, " +
                "image blob, " +
                "template blob)"
        db?.execSQL(sql)
    }

    override fun onUpgrade(db: SQLiteDatabase?, oldVersion: Int, newVersion: Int) {
        db?.execSQL("drop table extract")
        onCreate(db)
    }
}
ExtractDAO
Create the ExtractDAO class to handle DB operations:
package com.example.facedemo
import android.annotation.SuppressLint
import android.content.ContentValues
import android.content.Context
class ExtractDAO(context: Context) {
    private val db: DBHelper

    init {
        db = DBHelper(context)
    }

    fun create(extract: Extract) {
        val contentValues = ContentValues()
        contentValues.put("image", extract.image)
        contentValues.put("template", extract.template)
        db.writableDatabase.insert("extract", null, contentValues)
    }

    @SuppressLint("Range")
    fun readAll(): ArrayList<Extract> {
        val extractList = ArrayList<Extract>()
        val columns = arrayOf("id", "image", "template")
        val cursor = db.readableDatabase.query("extract", columns, null, null, null, null, null)
        if (cursor.count > 0) {
            cursor.moveToFirst()
            do {
                val id = cursor.getInt(cursor.getColumnIndex("id"))
                val image = cursor.getBlob(cursor.getColumnIndex("image"))
                val template = cursor.getBlob(cursor.getColumnIndex("template"))
                extractList.add(Extract(id, image, template))
            } while (cursor.moveToNext())
        }
        cursor.close()
        return extractList
    }

    @SuppressLint("Range")
    fun read(id: Int): Extract? {
        val extractList = ArrayList<Extract>()
        val columns = arrayOf("id", "image", "template")
        val cursor = db.readableDatabase.query("extract", columns, null, null, null, null, null)
        if (cursor.count > 0) {
            cursor.moveToFirst()
            do {
                val storedId = cursor.getInt(cursor.getColumnIndex("id"))
                val image = cursor.getBlob(cursor.getColumnIndex("image"))
                val template = cursor.getBlob(cursor.getColumnIndex("template"))
                extractList.add(Extract(storedId, image, template))
            } while (cursor.moveToNext())
        }
        cursor.close()
        for (extract in extractList) {
            if (extract.id == id) return extract
        }
        return null
    }

    fun update(extract: Extract) {
        val contentValues = ContentValues()
        val where = "id = ?"
        val wherep = arrayOf(extract.id.toString())
        contentValues.put("image", extract.image)
        contentValues.put("template", extract.template)
        this.db.writableDatabase.update("extract", contentValues, where, wherep)
    }

    fun delete(id: Int) {
        val where = "id = ?"
        val wherep = arrayOf(id.toString())
        this.db.writableDatabase.delete("extract", where, wherep)
    }

    fun count(): Int {
        val columns = arrayOf("id")
        val cursor = this.db.readableDatabase.query("extract", columns, null, null, null, null, null)
        val count = cursor.count
        cursor.close()
        return count
    }
}
extract_layout
Create an xml layout to show each item in the extract list:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="100dp"
android:orientation="horizontal">
<ImageView
android:id="@+id/ivExtractImage"
android:layout_width="60dp"
android:layout_height="60dp"
android:layout_marginEnd="10dp"/>
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="match_parent"
android:orientation="vertical">
<TextView
android:id="@+id/tvExtractId"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="ID"
android:textSize="16sp" />
</LinearLayout>
</LinearLayout>
ExtractListViewAdapter
Create the ExtractListViewAdapter class to handle the extract ListView:
package com.example.facedemo
import android.content.Context
import android.graphics.BitmapFactory
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.BaseAdapter
import android.widget.ImageView
import android.widget.TextView
class ExtractListViewAdapter(var context: Context, var extractList: ArrayList<Extract>) : BaseAdapter() {
    override fun getView(position: Int, convertView: View?, parent: ViewGroup?): View {
        val view: View = if (convertView != null) convertView
        else {
            val inflate =
                context.getSystemService(Context.LAYOUT_INFLATER_SERVICE) as LayoutInflater
            inflate.inflate(R.layout.extract_layout, null)
        }

        val ivImage = view.findViewById<ImageView>(R.id.ivExtractImage)
        val tvId = view.findViewById<TextView>(R.id.tvExtractId)

        val extract = extractList[position]
        val bitmap = BitmapFactory.decodeByteArray(extract.image, 0, extract.image.size)

        ivImage.setImageBitmap(bitmap)
        tvId.text = extract.id.toString()

        return view
    }

    override fun getItem(position: Int): Any {
        return extractList[position]
    }

    override fun getItemId(position: Int): Long {
        return position.toLong()
    }

    override fun getCount(): Int {
        return extractList.size
    }
}
activity_main
In the XML layout of your main activity:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/main"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
tools:context=".MainActivity">
<FrameLayout
android:layout_width="match_parent"
android:layout_height="0dp"
android:layout_weight="1">
<ListView
android:id="@+id/lvExtracts"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</FrameLayout>
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_margin="20dp"
android:orientation="vertical">
<Button
android:id="@+id/btnExtract"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="Extract" />
<Button
android:id="@+id/btnVerify"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="Verify" />
<Button
android:id="@+id/btnGetAll"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="Get all" />
</LinearLayout>
</LinearLayout>
MainActivity
In your main activity:
package com.example.facedemo
import android.database.CursorWindow
import android.graphics.Bitmap
import android.os.Bundle
import android.util.Log
import android.widget.BaseAdapter
import android.widget.Button
import android.widget.ListView
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
import br.com.biopassid.facesdk.Face
import br.com.biopassid.facesdk.FaceCallback
import br.com.biopassid.facesdk.FaceRecognition
import br.com.biopassid.facesdk.config.FaceConfig
import br.com.biopassid.facesdk.model.FaceAttributes
import br.com.biopassid.facesdk.model.FaceExtract
import br.com.biopassid.facesdk.model.FaceImage
import br.com.biopassid.facesdk.model.FaceVerify
import java.io.ByteArrayOutputStream
class MainActivity : AppCompatActivity() {
    private lateinit var lvExtracts: ListView
    private lateinit var btnExtract: Button
    private lateinit var btnVerify: Button
    private lateinit var btnGetAll: Button
    private lateinit var extractDAO: ExtractDAO
    private lateinit var extracts: ArrayList<Extract>
    private lateinit var faceRecognition: FaceRecognition

    override fun onDestroy() {
        super.onDestroy()
        faceRecognition.close()
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        try {
            val field = CursorWindow::class.java.getDeclaredField("sCursorWindowSize")
            field.isAccessible = true
            field.set(null, 100 * 1024 * 1024)
        } catch (e: Exception) {
            e.printStackTrace()
        }

        lvExtracts = findViewById(R.id.lvExtracts)
        btnExtract = findViewById(R.id.btnExtract)
        btnVerify = findViewById(R.id.btnVerify)
        btnGetAll = findViewById(R.id.btnGetAll)

        // Gets an instance of FaceRecognition
        faceRecognition = FaceRecognition.getInstance(this, "your-license-key")

        // Instantiates an ExtractDAO to handle DB operations
        extractDAO = ExtractDAO(this)

        // Updates the extract list
        extracts = extractDAO.readAll()

        // Defines the list view adapter
        lvExtracts.adapter = ExtractListViewAdapter(this, extracts)

        // Instantiate FaceConfig with your preferred settings
        val config = FaceConfig(licenseKey = "your-license-key")

        btnExtract.setOnClickListener {
            val callback = object : FaceCallback {
                override fun onFaceCapture(faceImage: FaceImage, faceAttributes: FaceAttributes?) {
                    // Extracts the template from the image
                    faceRecognition.extract(faceImage.image, object : FaceRecognition.ExtractCallback {
                        override fun onExtract(faceExtract: FaceExtract?) {
                            if (faceExtract?.template != null) {
                                // Converts bitmap to byte array
                                val byteArray = getBitmapAsByteArray(faceImage.image)

                                // Creates a new extract
                                val extract =
                                    Extract(image = byteArray, template = faceExtract.template)

                                // Saves the new extract to the DB
                                extractDAO.create(extract)

                                // Updates the extract list
                                val extractList = extractDAO.readAll()
                                extracts.clear()
                                extracts.addAll(extractList)
                                runOnUiThread {
                                    (lvExtracts.adapter as BaseAdapter).notifyDataSetChanged()
                                }
                            }
                        }
                    })
                }

                override fun onFaceDetected(faceAttributes: FaceAttributes) {
                    Log.d(TAG, "onFaceDetected: $faceAttributes")
                }
            }
            Face.takeFace(this, config, callback)
        }

        btnVerify.setOnClickListener {
            val callback = object : FaceCallback {
                override fun onFaceCapture(faceImage: FaceImage, faceAttributes: FaceAttributes?) {
                    // Extracts the template from the image
                    faceRecognition.extract(faceImage.image, object : FaceRecognition.ExtractCallback {
                        override fun onExtract(faceExtract: FaceExtract?) {
                            if (faceExtract?.template != null) {
                                // Gets all extracts from DB
                                val extractList = extractDAO.readAll()

                                // Performs a 1:N search in the extract list
                                val filteredExtracts = ArrayList<Extract>()
                                for (extract in extractList) {
                                    faceRecognition.verify(faceExtract.template, extract.template,
                                        object : FaceRecognition.VerifyCallback {
                                            override fun onVerify(faceVerify: FaceVerify?) {
                                                if (faceVerify?.isGenuine == true) {
                                                    filteredExtracts.add(extract)
                                                }
                                                if (extract == extractList.last()) {
                                                    // Updates the extract list
                                                    if (filteredExtracts.isNotEmpty()) {
                                                        extracts.clear()
                                                        extracts.addAll(filteredExtracts)
                                                        runOnUiThread {
                                                            (lvExtracts.adapter as BaseAdapter).notifyDataSetChanged()
                                                        }
                                                    } else {
                                                        runOnUiThread {
                                                            Toast.makeText(
                                                                this@MainActivity,
                                                                "No matching extract found",
                                                                Toast.LENGTH_SHORT
                                                            ).show()
                                                        }
                                                    }
                                                }
                                            }
                                        })
                                }
                            }
                        }
                    })
                }

                override fun onFaceDetected(faceAttributes: FaceAttributes) {
                    Log.d(TAG, "onFaceDetected: $faceAttributes")
                }
            }
            Face.takeFace(this, config, callback)
        }

        btnGetAll.setOnClickListener { getAll() }
    }

    /** Helper method used to get all extracts from DB and populate the extract list */
    private fun getAll() {
        // Gets all extracts from DB
        val extractList = extractDAO.readAll()

        // Updates the extract list
        extracts.clear()
        extracts.addAll(extractList)
        (lvExtracts.adapter as BaseAdapter).notifyDataSetChanged()
    }

    /** Helper method used to convert bitmap to byte array */
    private fun getBitmapAsByteArray(bitmap: Bitmap): ByteArray {
        val stream = ByteArrayOutputStream()
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream)
        val byteArray = stream.toByteArray()
        stream.close()
        return byteArray
    }

    companion object {
        private const val TAG = "FaceDemo"
    }
}
4. LicenseKey
To use the Face SDK you need a license key. To get your license key, contact us through our website BioPass ID. Setting the license key is as simple as setting another attribute:
// Face Capture
val config = FaceConfig()
config.licenseKey = "your-license-key"
// Face Recognition
faceRecognition = FaceRecognition.getInstance(this, "your-license-key")
5. FaceCallback
You can set a custom callback to receive the captured image. Note: FaceAttributes will only be available if liveness mode is enabled. You can write your own callback following this example:
val callback = object : FaceCallback {
    override fun onFaceCapture(faceImage: FaceImage, faceAttributes: FaceAttributes?) {
        Log.d(TAG, "onFaceCapture: $faceImage")
        // Only available on Liveness
        Log.d(TAG, "onFaceCapture: $faceAttributes")
    }

    override fun onFaceDetected(faceAttributes: FaceAttributes) {
        // Only available on Liveness
        Log.d(TAG, "onFaceDetected: $faceAttributes")
    }
}
FaceImage
| Name | Type | Description |
| ----------- | ------ | -------------------------------------- |
| image | Bitmap | Image captured in Bitmap format |
| imageBase64 | String | Image captured in base64 string format |
FaceAttributes
| Name | Type | Description |
| ----------------------- | ----- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| faceProp | Float | Proportion of the area occupied by the face in the image, in percentage |
| faceWidth | Int | Face width, in pixels |
| faceHeight | Int | Face height, in pixels |
| ied | Int | Distance between left eye and right eye, in pixels |
| bbox | Rect | Face bounding box |
| rollAngle | Float | The Euler angle Z of the head. Indicates the rotation of the face about the axis pointing out of the image. Positive z euler angle is a counter-clockwise rotation within the image plane |
| pitchAngle | Float | The Euler angle X of the head. Indicates the rotation of the face about the horizontal axis of the image. Positive x euler angle is when the face is turned upward in the image that is being processed |
| yawAngle | Float | The Euler angle Y of the head. Indicates the rotation of the face about the vertical axis of the image. Positive y euler angle is when the face is turned towards the right side of the image that is being processed |
| leftEyeOpenProbability | Float | Probability that the face’s left eye is open, in percentage |
| rightEyeOpenProbability | Float | Probability that the face’s right eye is open, in percentage |
| smilingProbability | Float | Probability that the face is smiling, in percentage |
| averageLightIntensity | Float | The average intensity of the pixels in the image |
6. Face detection
The SDK supports two types of facial detection. Below you can see their descriptions and how to use them:
Basic face detection
A simpler facial detection, supporting face centering and proximity. This is the SDK's default detection. See FaceDetectionOptions for the settings available for this functionality. Note: for this functionality to work, liveness mode and continuous capture must be disabled. See below how to use it:
val config = FaceConfig()
config.licenseKey = "your-license-key"
config.liveness.enabled = false // liveness mode must be disabled. This is the default value
config.continuousCapture.enabled = false // continuous capture must be disabled. This is the default value
config.faceDetection.enabled = true // face detection must be enabled
config.faceDetection.autoCapture = true
config.faceDetection.multipleFacesEnabled = false
config.faceDetection.timeToCapture = 3000 // time in milliseconds
config.faceDetection.maxFaceDetectionTime = 60000 // time in milliseconds
config.faceDetection.scoreThreshold = 0.5f
Liveness face detection
More accurate facial detection, supporting more features beyond face centering and proximity. Ideal for those who want more control over facial detection. See FaceLivenessDetectionOptions for the settings available for this functionality. Note: this feature only works with the front camera. See below how to use it:
val config = FaceConfig()
config.licenseKey = "your-license-key"
config.liveness.enabled = true
config.liveness.debug = false
config.liveness.timeToCapture = 3000 // time in milliseconds
config.liveness.maxFaceDetectionTime = 60000 // time in milliseconds
config.liveness.minFaceProp = 0.1f
config.liveness.maxFaceProp = 0.4f
config.liveness.minFaceWidth = 150
config.liveness.minFaceHeight = 150
config.liveness.ied = 90
config.liveness.bboxPad = 20
config.liveness.faceDetectionThresh = 0.5f
config.liveness.rollThresh = 4.0f
config.liveness.pitchThresh = 4.0f
config.liveness.yawThresh = 4.0f
config.liveness.closedEyesThresh = 0.7f
config.liveness.smilingThresh = 0.7f
config.liveness.tooDarkThresh = 50
config.liveness.tooLightThresh = 170
config.liveness.faceCentralizationThresh = 0.05f
7. Continuous capture
You can use continuous capture to capture multiple frames in a single session. You can set the maximum number of frames to be captured with maxNumberFrames, and the interval between captures with timeToCapture. Note: facial detection does not work with continuous capture. Using continuous capture is as simple as setting another attribute:
val config = FaceConfig()
config.licenseKey = "your-license-key"
config.liveness.enabled = false // liveness mode must be disabled. This is the default value
config.continuousCapture.enabled = true
config.continuousCapture.timeToCapture = 1000 // capture one frame per second, time in milliseconds
config.continuousCapture.maxNumberFrames = 40
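For reference, here is a minimal sketch of collecting the captured frames. It assumes onFaceCapture is invoked once per captured frame while continuous capture is active; the frames list is purely illustrative:

// Assumption: onFaceCapture fires once for each frame captured in continuous capture mode
val frames = mutableListOf<FaceImage>()
val callback = object : FaceCallback {
    override fun onFaceCapture(faceImage: FaceImage, faceAttributes: FaceAttributes?) {
        frames.add(faceImage)
        Log.d(TAG, "Captured frame ${frames.size}")
    }

    override fun onFaceDetected(faceAttributes: FaceAttributes) {
        // Not used here: liveness is disabled in continuous capture mode
    }
}
Face.takeFace(this, config, callback)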
8. Face recognition
Extract template
Extract is a functionality that extracts a template from a given biometric image. The operation returns a FaceExtract object, which contains the resulting template as a byte array and a status.
val faceRecognition = FaceRecognition.getInstance(context, "your-license-key")

faceRecognition.extract(byteArray, object : FaceExtractCallback {
    override fun onExtract(faceExtract: FaceExtract?) {
        Log.d(TAG, "extract: $faceExtract")
    }
})

// Or:
faceRecognition.extract(bitmap, object : FaceExtractCallback {
    override fun onExtract(faceExtract: FaceExtract?) {
        Log.d(TAG, "extract: $faceExtract")
    }
})

// Or:
faceRecognition.extract(base64String, object : FaceExtractCallback {
    override fun onExtract(faceExtract: FaceExtract?) {
        Log.d(TAG, "extract: $faceExtract")
    }
})
FaceExtract
| Name | Type | Description |
| -------- | --------- | -------------------------------------- |
| status | Int | The resulting status |
| template | ByteArray | The resulting template as a byte array |
Verify templates
Verify is a functionality that compares two biometric templates. The operation returns a FaceVerify object, which contains the resulting score and whether the templates match.
val faceRecognition = FaceRecognition.getInstance(context, "your-license-key")

faceRecognition.verify(byteArray, byteArray, object : FaceVerifyCallback {
    override fun onVerify(faceVerify: FaceVerify?) {
        Log.d(TAG, "verify: $faceVerify")
    }
})
FaceVerify
| Name | Type | Description |
| --------- | ------- | ---------------------------------------------------------------------------------------------------- |
| isGenuine | Boolean | Indicates whether the informed biometric templates are correspondent or not |
| score | Float | Indicates the level of similarity between the two given biometrics. Its value may vary from 0 to 100 |
FaceConfig
You can also use pre-built configurations in your application, so you can automatically start using the features that best suit your application. You can instantiate each one and use its default properties, or, if you prefer, change every available config. Here are the types that are supported right now:
FaceConfig
| Name | Type | Default value |
| ------------------ | ---------------------------- | ----------------------------- |
| licenseKey | String | "" |
| resolutionPreset | FaceResolutionPreset | FaceResolutionPreset.VERYHIGH |
| lensDirection | FaceCameraLensDirection | FaceCameraLensDirection.FRONT |
| imageFormat | FaceImageFormat | FaceImageFormat.JPEG |
| flashEnabled | Boolean | false |
| fontFamily | Int | R.font.facesdk_opensans_bold |
| liveness | FaceLivenessDetectionOptions | |
| continuousCapture | FaceContinuousCaptureOptions | |
| faceDetection | FaceDetectionOptions | |
| mask | FaceMaskOptions | |
| titleText | FaceTextOptions | |
| loadingText | FaceTextOptions | |
| helpText | FaceTextOptions | |
| feedbackText | FaceFeedbackTextOptions | |
| backButton | FaceButtonOptions | |
| flashButton | FaceFlashButtonOptions | |
| switchCameraButton | FaceButtonOptions | |
| captureButton | FaceButtonOptions | |
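For instance, here is a minimal sketch of overriding a few of the options above. The values are illustrative; the property names come from the tables in this section, and the nested option types are described below:

val config = FaceConfig(licenseKey = "your-license-key")
config.lensDirection = FaceCameraLensDirection.BACK // default is FRONT
config.imageFormat = FaceImageFormat.PNG            // default is JPEG
config.mask.type = FaceMaskFormat.ELLIPSE           // default is FACE
config.titleText.content = "Face capture"           // FaceTextOptions.content
config.feedbackText.textSize = 16                   // default is 14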
FaceContinuousCaptureOptions
| Name | Type | Default value |
| --------------- | ------- | ---------------------------- |
| enabled | Boolean | false |
| timeToCapture | Long | 1000 // time in milliseconds |
| maxNumberFrames | Int | 20 |
FaceDetectionOptions
| Name | Type | Default value | Description |
| -------------------- | ------- | ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| enabled | Boolean | true | Activates facial detection |
| autoCapture | Boolean | true | Activates automatic capture |
| multipleFacesEnabled | Boolean | false | Allows the capture of photos with two or more faces |
| timeToCapture | Long | 3000 | Time it takes to perform an automatic capture, in miliseconds |
| maxFaceDetectionTime | Long | 40000 | Maximum facial detection attempt time, in miliseconds |
| scoreThreshold | Float | 0.7f | Minimum trust score for a detection to be considered valid. Must be a number between 0 and 1, where 0.1 is a lower face detection trust level and 0.9 is a higher trust level |
FaceLivenessDetectionOptions
| Name | Type | Default value | Description |
| ------------------------ | ------- | ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| enabled | Boolean | false | Activates liveness |
| debug | Boolean | false | If activated, a red rectangle will be drawn around the detected faces, in addition, it will be shown in the feedback message which attribute caused an invalid face |
| timeToCapture | Long | 3000 | Time it takes to perform an automatic capture, in miliseconds |
| maxFaceDetectionTime | Long | 60000 | Maximum facial detection attempt time, in miliseconds |
| minFaceProp | Float | 0.1f | Minimum limit of the proportion of the area occupied by the face in the image, in percentage |
| maxFaceProp | Float | 0.4f | Maximum limit on the proportion of the area occupied by the face in the image, in percentage |
| minFaceWidth | Int | 150 | Minimum face width, in pixels |
| minFaceHeight | Int | 150 | Minimum face height, in pixels |
| ied | Int | 90 | Minimum distance between left eye and right eye, in pixels |
| bboxPad | Int | 20 | Padding the face's bounding box to the edges of the image, in pixels |
| faceDetectionThresh | Float | 0.5f | Minimum trust score for a detection to be considered valid. Must be a number between 0 and 1, where 0.1 is a lower face detection trust level and 0.9 is a higher trust level |
| rollThresh | Float | 4.0f | The Euler angle Z of the head. Indicates the rotation of the face about the axis pointing out of the image. Positive z euler angle is a counter-clockwise rotation within the image plane |
| pitchThresh | Float | 4.0f | The Euler angle X of the head. Indicates the rotation of the face about the horizontal axis of the image. Positive x euler angle is when the face is turned upward in the image that is being processed |
| yawThresh | Float | 4.0f | The Euler angle Y of the head. Indicates the rotation of the face about the vertical axis of the image. Positive y euler angle is when the face is turned towards the right side of the image that is being processed |
| closedEyesThresh | Float | 0.7f | Minimum probability threshold that the left eye and right eye of the face are closed, in percentage. A value less than 0.7 indicates that the eyes are likely closed |
| smilingThresh | Float | 0.7f | Minimum threshold for the probability that the face is smiling, in percentage. A value of 0.7 or more indicates that a person is likely to be smiling |
| tooDarkThresh | Int | 50 | Minimum threshold for the average intensity of the pixels in the image |
| tooLightThresh | Int | 170 | Maximum threshold for the average intensity of the pixels in the image |
| faceCentralizationThresh | Float | 0.05f | Threshold to consider the face centered, in percentage |
FaceMaskOptions
| Name | Type | Default value |
| ----------------- | -------------- | ----------------------------- |
| enabled | Boolean | true |
| type | FaceMaskFormat | FaceMaskFormat.FACE |
| backgroundColor | Int | Color.parseColor("#CC000000") |
| frameColor | Int | Color.WHITE |
| frameEnabledColor | Int | Color.parseColor("#16AC81") |
| frameErrorColor | Int | Color.parseColor("#E25353") |
FaceFeedbackTextOptions
| Name | Type | Default value |
| --------- | ------------------------ | -------------------------- |
| enabled | Boolean | true |
| messages | FaceFeedbackTextMessages | FaceFeedbackTextMessages() |
| textColor | Int | Color.WHITE |
| textSize | Int | 14 |
FaceFeedbackTextMessages
| Name | Type | Default value |
| ------------------- | ------ | ----------------------------- |
| noDetection | String | "No faces detected" |
| multipleFaces | String | "Multiple faces detected" |
| faceCentered | String | "Face centered. Do not move" |
| tooClose | String | "Turn your face away" |
| tooFar | String | "Bring your face closer" |
| tooLeft | String | "Move your face to the right" |
| tooRight | String | "Move your face to the left" |
| tooUp | String | "Move your face down" |
| tooDown | String | "Move your face up" |
| invalidIED | String | "Invalid inter-eye distance" |
| faceAngleMisaligned | String | "Misaligned face angle" |
| closedEyes | String | "Open your eyes" |
| smiling | String | "Do not smile" |
| tooDark | String | "Too dark" |
| tooLight | String | "Too light" |
FaceFlashButtonOptions
| Name | Type | Default value |
| -------------------- | --------------- | ------------- |
| enabled | Boolean | true |
| backgroundColor | Int | Color.WHITE |
| buttonPadding | Int | 0 |
| buttonSize | Size | Size(56, 56) |
| flashOnIconOptions | FaceIconOptions | |
| flashOnLabelOptions | FaceTextOptions | |
| flashOffIconOptions | FaceIconOptions | |
| flashOffLabelOptions | FaceTextOptions | |
FaceButtonOptions
| Name | Type | Default value |
| --------------- | --------------- | ------------- |
| enabled | Boolean | true |
| backgroundColor | Int | Color.WHITE |
| buttonPadding | Int | 0 |
| buttonSize | Size | Size(56, 56) |
| iconOptions | FaceIconOptions | |
| labelOptions | FaceTextOptions | |
FaceIconOptions
| Name | Type | Default value |
| --------- | ------- | --------------------------- |
| enabled | Boolean | true |
| iconFile | Int | R.drawable.facesdk_ic_close |
| iconColor | Int | Color.parseColor("#323232") |
| iconSize | Size | Size(32, 32) |
FaceTextOptions
| Name | Type | Default value |
| --------- | ------- | --------------------------- |
| enabled | Boolean | true |
| content | String | "" |
| textColor | Int | Color.parseColor("#323232") |
| textSize | Int | 14 |
FaceCameraLensDirection (enum)
| Name |
| ----------------------------- |
| FaceCameraLensDirection.FRONT |
| FaceCameraLensDirection.BACK |
FaceImageFormat (enum)
| Name |
| -------------------- |
| FaceImageFormat.JPEG |
| FaceImageFormat.PNG |
FaceMaskFormat (enum)
| Name |
| ---------------------- |
| FaceMaskFormat.FACE |
| FaceMaskFormat.SQUARE |
| FaceMaskFormat.ELLIPSE |
FaceResolutionPreset (enum)
| Name | Resolution |
| ------------------------------ | -------------------------------- |
| FaceResolutionPreset.LOW | 240p (320x240) |
| FaceResolutionPreset.MEDIUM | 480p (720x480) |
| FaceResolutionPreset.HIGH | 720p (1280x720) |
| FaceResolutionPreset.VERYHIGH | 1080p (1920x1080) |
| FaceResolutionPreset.ULTRAHIGH | 2160p (3840x2160) |
| FaceResolutionPreset.MAX | The highest resolution available |
How to change font family
You can use the default font family or set one of your own. To set a font family, create a font folder under the res directory, download the font you want, and paste it inside the font folder. Font file names may only contain lowercase a-z, 0-9, or underscores. The structure should look something like the sketch below.
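Here, the file name roboto_mono_bold_italic.ttf is an assumption that matches the font reference used in the following snippet:

res/
└── font/
    └── roboto_mono_bold_italic.ttf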
Then, just set the font family passing the reference of the font family file.
val config = FaceConfig()
config.licenseKey = "your-license-key"
config.fontFamily = R.font.roboto_mono_bold_italic
How to change icon
You can use the default icons or define your own. To set an icon, download the icon you want and paste it inside the drawable folder. The structure should look something like the sketch below.
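Here, the file name ic_baseline_photo_camera.xml is an assumption that matches the icon reference used in the following snippet:

res/
└── drawable/
    └── ic_baseline_photo_camera.xml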
Then, just set the icon passing the reference of the icon file.
val config = FaceConfig()
config.licenseKey = "your-license-key"
// changing back button icon
config.backButton.iconOptions.iconFile = R.drawable.ic_baseline_photo_camera
// changing switch camera button icon
config.switchCameraButton.iconOptions.iconFile = R.drawable.ic_baseline_photo_camera
// changing capture button icon
config.captureButton.iconOptions.iconFile = R.drawable.ic_baseline_photo_camera
// changing flash button icon
config.flashButton.flashOnIconOptions.iconFile = R.drawable.ic_baseline_photo_camera
config.flashButton.flashOffIconOptions.iconFile = R.drawable.ic_baseline_photo_camera
Changelog
v5.0.2
- Documentation update;
- Fixed SSLHandshakeException on license activation on Android 7.0 (API 24).
v5.0.1
- Documentation update;
- Adjust the position and size of FaceMaskFormat.FACE.
v5.0.0
- Documentation update;
- Added new FaceImage which provides the captured image in Bitmap and base64 string formats;
- Changes to FaceCallback:
- onFaceCapture now returns a FaceImage instead of a Bitmap.
Breaking Changes
FaceCallback
// Before
val callback = object : FaceCallback {
    override fun onFaceCapture(image: Bitmap, faceAttributes: FaceAttributes?) {}
    override fun onFaceDetected(faceAttributes: FaceAttributes) {}
}

// Now
val callback = object : FaceCallback {
    override fun onFaceCapture(faceImage: FaceImage, faceAttributes: FaceAttributes?) {}
    override fun onFaceDetected(faceAttributes: FaceAttributes) {}
}
v4.1.3
- Documentation update.
v4.1.2
- Documentation update.
v4.1.1
- Documentation update;
- Dlib has been removed and wrapped in an external lib to avoid conflicts with other BioPass ID SDKs. The dlib wrapper is now a dependency; see the prerequisites section.
v4.1.0
- Documentation update.
v4.0.1
- Documentation update;
- Fixed a bug where the maximum detection time was reset even after the SDK was closed.
v4.0.0
- Documentation update;
- All texts in English by default;
- Upgrade from Camera2 to CameraX, see prerequisites section;
- Bug fixes for face centering;
- Renamed the FaceFeedbackTextMessages properties, see the FaceConfig section;
- Added new liveness mode for face detection;
- Changes to FaceCallback:
- The onFaceDetected callback has been added, which provides real-time information about the detected face;
- The onFaceCapture callback now, in addition to the captured image, also returns information about the detected face.
- Added new face recognition:
- Extract: Operation to extract a template from a given biometric image;
- Verify: Operation that compares two biometric templates.