How to Use React Native Vision Camera: A Comprehensive Introduction
In the rapidly evolving landscape of mobile app development, accessing and manipulating the device camera is a fundamental requirement for a vast array of applications, from social media and e-commerce to augmented reality and utility tools. For React Native developers, the choice of a camera library is crucial, impacting performance, features, and the overall developer experience. For years, react-native-camera was the de facto standard, but it came with its share of challenges, including performance bottlenecks and maintenance issues.
Enter React Native Vision Camera, a powerful, modern, and actively maintained library designed to provide a significantly better camera experience for React Native applications. Built from the ground up with performance and flexibility in mind, it leverages cutting-edge native APIs and modern React Native architecture (including JSI and TurboModules, though full TurboModule support is ongoing) to deliver unparalleled speed and capabilities.
This article serves as a comprehensive introduction to React Native Vision Camera. We will delve deep into its setup, core functionalities, configuration options, and its most compelling feature: Frame Processors. Whether you’re building your first camera-enabled app or migrating from an older library, this guide aims to equip you with the knowledge needed to effectively integrate and utilize Vision Camera.
Table of Contents:
- Why Choose React Native Vision Camera?
  - Performance Advantages
  - Modern API and Features
  - Active Maintenance and Community
  - Frame Processors: The Game Changer
- Prerequisites
  - React Native Environment Setup
  - Basic React Native Knowledge
- Installation and Setup
  - Installing the Library
  - iOS Configuration (Permissions, Swift Header)
  - Android Configuration (Permissions, Gradle Settings)
  - Requesting Camera and Microphone Permissions
- Basic Camera View Implementation
  - Importing Necessary Components and Hooks
  - Selecting a Camera Device (useCameraDevice)
  - Rendering the <Camera> Component
  - Handling Camera Activation (isActive Prop)
  - Managing Permissions with useCameraPermission
- Capturing Photos
  - Using the useCameraRef Hook
  - The takePhoto Method
  - Photo Configuration Options (Flash, Quality, Codec, etc.)
  - Handling the Photo Output (Path, Dimensions, Metadata)
  - Example: Simple Photo Capture Button
- Recording Videos
  - The startRecording and stopRecording Methods
  - Video Configuration Options (Codec, File Type, Bitrate)
  - Handling Recording Callbacks (onRecordingFinished, onRecordingError)
  - Example: Simple Video Recording UI
- Deep Dive into Camera Configuration
  - Selecting Specific Devices (Front, Back, External, Ultra-Wide)
  - Understanding Camera Formats (useCameraFormat)
  - Filtering Formats (Resolution, Aspect Ratio, Pixel Format)
  - Setting Frame Rate (FPS)
  - Enabling High Dynamic Range (HDR)
  - Low Light Boost
  - Controlling Orientation
  - Audio Input Selection
- Frame Processors: Real-time Frame Analysis
  - What are Frame Processors?
  - The useFrameProcessor Hook
  - Understanding the Frame Object (Data, Dimensions, Orientation)
  - The Role of JSI (JavaScript Interface)
  - Introducing Worklets and react-native-reanimated
  - Why Worklets are Crucial for Performance
  - Example: Simple Frame Logging
  - Example: Basic QR/Barcode Scanning (using vision-camera-code-scanner)
  - Example: Basic Face Detection (using vision-camera-face-detector)
  - Performance Considerations for Frame Processors
  - Limitations and Best Practices
- Advanced Camera Features
  - Zooming (Optical vs. Digital, zoom prop)
  - Focusing (Tap-to-Focus, focus method)
  - Controlling the Torch (torch prop)
  - Taking Snapshots (takeSnapshot method)
- UI Overlays and Customization
  - Positioning Buttons and UI Elements
  - Creating Custom Camera Interfaces
  - Displaying Frame Processor Results (e.g., Bounding Boxes)
- Error Handling and Lifecycle Management
  - The onError Prop
  - Handling Errors from takePhoto, startRecording, etc.
  - Managing Camera State with isActive and Navigation
- Performance Tips and Best Practices
  - Use isActive diligently
  - Choose appropriate Camera Formats
  - Optimize Frame Processors (Worklets are key)
  - Be mindful of data transfer between Native and JS
  - Profile your application
- Migrating from react-native-camera (Brief Overview)
  - Key API Differences
  - Conceptual Shifts (e.g., Frame Processors vs. Text/Barcode Recognition Props)
- Troubleshooting Common Issues
  - Black Screen
  - Permission Errors
  - Build Failures (iOS/Android)
  - Frame Processor Lag
  - Checking Logs and GitHub Issues
- Conclusion and Future Directions
1. Why Choose React Native Vision Camera?
Before diving into the implementation details, let’s understand why React Native Vision Camera (often abbreviated as RNVisionCamera) has gained significant traction and is often recommended over older alternatives.
Performance Advantages
This is arguably the most significant benefit. Vision Camera is designed with performance as a primary goal.
* Native Performance: It utilizes modern, efficient native camera APIs on both iOS (AVFoundation) and Android (Camera2/CameraX).
* JSI Integration: By leveraging React Native’s JavaScript Interface (JSI), Vision Camera allows for more direct and synchronous communication between JavaScript and the native camera modules, reducing the overhead associated with the traditional asynchronous bridge. This is particularly impactful for features like Frame Processors.
* Optimized Frame Handling: The way frames are processed and made available to JavaScript is highly optimized, minimizing copies and delays.
Modern API and Features
Vision Camera offers a clean, Promise-based, and hook-centric API that feels idiomatic in modern React Native development.
* Hooks: It provides convenient hooks like useCameraDevice, useCameraFormat, useCameraPermission, and useFrameProcessor, simplifying state management and component logic.
* Comprehensive Configuration: It offers fine-grained control over camera settings, including device selection (ultra-wide, telephoto), specific formats (resolution, FPS, HDR), video stabilization, and more.
* Extensibility: The Frame Processor architecture allows developers to easily plug in real-time analysis features like barcode scanning, face detection, object recognition, and even custom machine learning models.
Active Maintenance and Community
The library is actively developed and maintained by Marc Rousavy (@mrousavy) and a growing community.
* Regular Updates: Bugs are fixed promptly, and new features aligning with native capabilities are added regularly.
* Responsive Support: The maintainer and community are active on GitHub issues and discussions, providing help and addressing concerns.
* Future-Proofing: The library aims to stay current with the latest React Native advancements (like the New Architecture) and native platform features.
Frame Processors: The Game Changer
This feature deserves special mention. Frame Processors allow you to run JavaScript functions (ideally, high-performance “worklets”) synchronously for every frame captured by the camera. This opens up possibilities for real-time applications directly within your React Native codebase:
* Live Filters & Effects
* Real-time QR/Barcode Scanning
* Live Face/Object Detection & Tracking
* Running ML Models on the Camera Feed
* And much more…
This capability, executed efficiently using JSI and worklets, fundamentally changes what’s easily achievable with a camera in React Native.
2. Prerequisites
Before you start integrating Vision Camera, ensure you have the following:
React Native Environment Setup
- A working React Native development environment. Follow the official React Native documentation for setting up the “React Native CLI Quickstart” (not Expo Go, as Vision Camera requires native modules).
- Node.js (LTS version recommended)
- Watchman (recommended for macOS/Linux)
- For iOS development: Xcode and CocoaPods
- For Android development: Java Development Kit (JDK), Android Studio, and the Android SDK
Basic React Native Knowledge
- Familiarity with React concepts (Components, Props, State, Hooks).
- Understanding of basic React Native development (JSX, Styling, Core Components).
- Knowledge of asynchronous JavaScript (Promises, async/await).
3. Installation and Setup
Let’s get Vision Camera installed and configured in your React Native project.
Installing the Library
Navigate to your project’s root directory in your terminal and run:
```bash
npm install react-native-vision-camera
# or
yarn add react-native-vision-camera
```
Vision Camera uses JSI, which requires native C++ code. React Native’s auto-linking mechanism usually handles the basic linking process. However, additional platform-specific setup is required.
Important: Vision Camera relies heavily on react-native-reanimated
(version 2 or 3) for its high-performance Frame Processors (Worklets). Install it if you haven’t already:
```bash
npm install react-native-reanimated
# or
yarn add react-native-reanimated
```
Follow the react-native-reanimated installation guide carefully, especially the steps involving the Babel plugin (plugins: ['react-native-reanimated/plugin'] in babel.config.js – it must be the last plugin) and any native configuration (e.g., MainApplication.java changes for older Reanimated 2 setups on Android).
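For reference, a typical babel.config.js with the Reanimated plugin listed last might look like the sketch below (the preset name depends on your project template, so adjust as needed):

```javascript
// babel.config.js
module.exports = {
  presets: ['module:metro-react-native-babel-preset'],
  plugins: [
    // ... other plugins go before it
    'react-native-reanimated/plugin', // MUST be the last plugin in this list
  ],
};
```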
iOS Configuration
- Permissions: You need to declare why your app needs camera and microphone access. Open your ios/<YourAppName>/Info.plist file and add the following keys with appropriate descriptions:

```xml
<key>NSCameraUsageDescription</key>
<string>$(PRODUCT_NAME) needs access to your camera to take photos and videos.</string>
<key>NSMicrophoneUsageDescription</key>
<string>$(PRODUCT_NAME) needs access to your microphone to record audio with videos.</string>
```

(Replace the string values with descriptions suitable for your app's functionality.)

- Minimum iOS Version: Vision Camera requires iOS 11 or higher. Ensure your ios/Podfile sets the platform version accordingly:

```ruby
platform :ios, '11.0' # or higher
```

- Install Pods: Navigate to the ios directory and install the pods:

```bash
cd ios
pod install
cd ..
```

- (Optional but Recommended) Swift: If your project doesn't already use Swift, Vision Camera might require it. Xcode usually prompts you to create a Bridging Header when you add a Swift file. You can simply create an empty Swift file (e.g., File.swift) in Xcode (File > New > File > Swift File) and let Xcode create the bridging header when prompted. This ensures the necessary Swift runtime support is included.
Android Configuration
- Permissions: Add the required permissions to your android/app/src/main/AndroidManifest.xml file, typically inside the <manifest> tag but outside the <application> tag:

```xml
<uses-permission android:name="android.permission.CAMERA" />
<!-- Optional: Only if you need to record audio -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<!-- Optional: Only if you need to save photos/videos to storage -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
```

(Note: On Android 10+ scoped storage might affect how WRITE_EXTERNAL_STORAGE works. Saving to app-specific directories is generally preferred.)

- Minimum SDK Version: Vision Camera requires a minSdkVersion of at least 21. Check your android/build.gradle file:

```gradle
buildscript {
  ext {
    // ... other versions
    minSdkVersion = 21 // Or higher
    // ...
  }
  // ...
}
```

- Enable Kotlin: Vision Camera's Android side uses Kotlin. Ensure Kotlin is enabled in your project. Add the Kotlin Gradle plugin classpath to your root android/build.gradle:

```gradle
buildscript {
  ext {
    kotlinVersion = '1.6.21' // Use a compatible Kotlin version
    // ...
  }
  dependencies {
    classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlinVersion"
    // ... other classpaths
  }
  // ...
}
```

And apply the Kotlin plugin in your android/app/build.gradle:

```gradle
apply plugin: 'kotlin-android' // Add this line near the top
```

- Gradle Properties (Optional but Recommended): Ensure Hermes and the architectures needed for JSI are enabled in android/gradle.properties:

```properties
# Hermes engine
hermesEnabled=true
# JSI (needed for Vision Camera Frame Processors & Reanimated 2/3)
reactNativeArchitectures=armeabi-v7a,arm64-v8a,x86,x86_64
```

Note: Enabling Hermes is generally recommended for performance.

- Clean and Rebuild: After making native changes, it's often necessary to clean and rebuild the Android project:

```bash
cd android
./gradlew clean
cd ..
npx react-native run-android
```
Requesting Camera and Microphone Permissions
While you’ve declared the need for permissions in the native configuration files, you still need to request them from the user at runtime. Vision Camera provides hooks and static methods for this.
```javascript
import React, { useEffect, useState } from 'react';
import { Camera, useCameraPermission } from 'react-native-vision-camera';
import { View, Text, Button, Linking, Alert } from 'react-native';

function PermissionScreen() {
  const { hasPermission: hasCameraPermission, requestPermission: requestCameraPermission } = useCameraPermission();
  // Microphone permission has no dedicated hook here, so track it via the Camera static methods.
  const [hasMicPermission, setHasMicPermission] = useState(false);

  useEffect(() => {
    // Check the initial mic permission status (optional, depends on exact needs)
    Camera.getMicrophonePermissionStatus().then(
      (status) => setHasMicPermission(status === 'granted')
    );
  }, []);

  const requestPermissions = async () => {
    console.log('Requesting camera permission...');
    const cameraStatus = await Camera.requestCameraPermission();
    console.log('Camera permission status:', cameraStatus);

    console.log('Requesting microphone permission...');
    const microphoneStatus = await Camera.requestMicrophonePermission();
    console.log('Microphone permission status:', microphoneStatus);

    if (microphoneStatus === 'granted') {
      setHasMicPermission(true);
    }

    if (cameraStatus === 'denied' || microphoneStatus === 'denied') {
      // Handle the denied state - show an explanation and a button to open settings.
      // On iOS, 'denied' means the user explicitly denied it.
      // On Android, it might mean they denied it, possibly permanently ('never_ask_again').
      Alert.alert(
        'Permissions Required',
        'Camera and Microphone access are needed. Please grant permissions in App Settings.',
        [
          { text: 'Cancel', style: 'cancel' },
          { text: 'Open Settings', onPress: () => Linking.openSettings() },
        ]
      );
    }
  };

  if (hasCameraPermission && hasMicPermission) {
    // Navigate to the main camera screen or render it directly
    return <Text>Permissions granted! Render your camera screen here.</Text>;
  }

  return (
    <View>
      {!hasCameraPermission && <Text>Camera permission is required.</Text>}
      {!hasMicPermission && <Text>Microphone permission is required.</Text>}
      <Button title="Grant Permissions" onPress={requestPermissions} />
      {/* Add a message about opening Settings if permission was denied */}
    </View>
  );
}

export default PermissionScreen;
```
Important Permission States:
- 'granted': User has granted permission.
- 'not-determined' (iOS only): User hasn't been asked yet.
- 'denied': User has explicitly denied permission. On Android, this might also mean "never ask again".
- 'restricted' (iOS only): Permission is restricted by parental controls or configuration profiles.

Always handle the denied and restricted states gracefully, typically by explaining why the permission is needed and offering a button to open the app settings (Linking.openSettings()).
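If you prefer to branch on the raw status values yourself, here is a minimal sketch. It assumes the static Camera.getCameraPermissionStatus() / Camera.requestCameraPermission() methods (the status call is synchronous or Promise-based depending on your Vision Camera version, so the await is defensive):

```javascript
import { Camera } from 'react-native-vision-camera';
import { Linking } from 'react-native';

// Illustrative helper: returns true once camera permission is granted.
async function ensureCameraPermission() {
  const status = await Camera.getCameraPermissionStatus();
  switch (status) {
    case 'granted':
      return true;
    case 'not-determined':
      // First ask: show the system prompt.
      return (await Camera.requestCameraPermission()) === 'granted';
    case 'denied':
    case 'restricted':
    default:
      // Explain why the permission is needed, then send the user to Settings.
      Linking.openSettings();
      return false;
  }
}
```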
4. Basic Camera View Implementation
Once permissions are handled, displaying the camera preview is straightforward.
Importing Necessary Components and Hooks
```javascript
import React, { useState, useEffect, useRef } from 'react';
import { View, StyleSheet, Text, ActivityIndicator } from 'react-native';
import { Camera, useCameraDevice, useCameraPermission } from 'react-native-vision-camera';
import { useIsFocused } from '@react-navigation/native'; // If using react-navigation
```
Selecting a Camera Device (useCameraDevice)
Vision Camera provides the useCameraDevice hook to easily select the desired camera (e.g., front or back).

```javascript
const device = useCameraDevice('back'); // Or 'front', 'external'
```

This hook returns a CameraDevice object, or undefined if no matching device is found or permissions aren't granted yet.
Rendering the <Camera> Component
The core component is <Camera>. It requires at least the device and isActive props.
```javascript
function AppCameraScreen() {
  const { hasPermission, requestPermission } = useCameraPermission();
  const device = useCameraDevice('back');
  const isFocused = useIsFocused(); // From react-navigation
  // Or manage isActive state manually if not using react-navigation focus
  // const [isActive, setIsActive] = useState(true);

  useEffect(() => {
    if (!hasPermission) {
      requestPermission();
    }
  }, [hasPermission, requestPermission]);

  if (!hasPermission) {
    // Render a permission request screen or loading state
    return (
      <View style={styles.container}>
        <Text style={styles.errorText}>Camera permission is required.</Text>
        {/* Or show the PermissionScreen component */}
      </View>
    );
  }

  if (device == null) {
    // Render a loading state or fallback UI
    return (
      <View style={styles.container}>
        <ActivityIndicator size="large" color="white" />
        <Text style={styles.errorText}>No camera device found.</Text>
        {/* You might want to check permissions again here */}
      </View>
    );
  }

  // Determine if the camera should be active:
  // only when the screen is focused and the app is in the foreground.
  const isCameraActive = isFocused; // Add an app state check if needed (AppState from 'react-native')

  console.log(`Rendering Camera: device=${device.id}, isActive=${isCameraActive}, hasPermission=${hasPermission}`);

  return (
    <View style={styles.container}>
      <Camera
        style={StyleSheet.absoluteFill}
        device={device}
        isActive={isCameraActive}
      />
      {/* Add UI overlays here (buttons, etc.) */}
      <Text style={styles.statusText}>Camera Active: {String(isCameraActive)}</Text>
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: 'black', // Background while camera loads/is inactive
  },
  errorText: {
    color: 'white',
    textAlign: 'center',
    marginTop: 50,
  },
  statusText: {
    position: 'absolute',
    bottom: 10,
    left: 10,
    color: 'white',
    backgroundColor: 'rgba(0,0,0,0.5)',
    padding: 5,
    borderRadius: 3,
  },
});

export default AppCameraScreen;
```
Handling Camera Activation (isActive Prop)
The isActive prop is crucial for performance and battery life.
* true: The camera session is active, configured, and streaming frames (preview is visible). This consumes significant power.
* false: The camera session is paused or torn down. The preview stops, and the camera hardware is inactive.
You should typically set isActive based on:
1. Screen Focus: Use useIsFocused from @react-navigation/native if you use that library. Only activate the camera when the screen is focused.
2. App State: Use the AppState module from react-native to deactivate the camera when the app goes into the background.
Combining these ensures the camera is only running when truly needed.
```javascript
import { AppState } from 'react-native';
import { useIsFocused } from '@react-navigation/native';

// Inside your component:
const isFocused = useIsFocused();
const [appState, setAppState] = useState(AppState.currentState);

useEffect(() => {
  const subscription = AppState.addEventListener('change', nextAppState => {
    setAppState(nextAppState);
  });
  return () => subscription.remove();
}, []);

const isCameraActive = isFocused && appState === 'active';
// ... then pass isCameraActive to the <Camera> component's isActive prop
```
Managing Permissions with useCameraPermission
The useCameraPermission hook simplifies checking and requesting camera permissions, as shown in the basic example above. Remember to handle the case where permission is denied.
5. Capturing Photos
With the camera view rendering, let’s add photo capture functionality.
Using the useCameraRef Hook
To call methods like takePhoto on the camera component instance, you need a ref.
```javascript
import React, { useRef } from 'react';
import { Camera } from 'react-native-vision-camera';

// Inside your component:
const camera = useRef(null); // useRef<Camera>(null) in TypeScript

// Pass the ref to the Camera component:
// <Camera ref={camera} ... />
```
The takePhoto Method
The camera.current.takePhoto(options) method captures a photo. It returns a Promise that resolves with a PhotoFile object.
```javascript
const takePicture = async () => {
  if (camera.current == null) {
    console.error("Camera ref is null!");
    return;
  }
  console.log("Taking photo...");
  try {
    const photo = await camera.current.takePhoto({
      qualityPrioritization: 'quality', // Or 'speed', 'balanced'
      flash: 'off', // Or 'on', 'auto'
      enableShutterSound: true, // Play shutter sound (iOS only)
      // enableAutoStabilization: true, // Experimental
      // skipMetadata: false // Include metadata like EXIF?
    });

    console.log("Photo captured:", photo);
    // The photo object contains: path, width, height, isRawPhoto, metadata, etc.

    // Example: Display the photo path
    // Alert.alert("Photo Saved", `Path: ${photo.path}`);

    // You can now use photo.path (e.g., display it in an <Image> component).
    // Note: The path is temporary. Move or copy it if you need persistent storage.
    // Example: setImageSource({ uri: 'file://' + photo.path })
  } catch (error) {
    console.error("Failed to take photo:", error);
    // Handle errors, e.g., camera busy, permissions issue, storage issue
  }
};
```
Photo Configuration Options
The takePhoto method accepts an options object:
- qualityPrioritization: 'quality', 'speed', or 'balanced'. Influences capture time vs. image quality.
- flash: 'on', 'off', or 'auto'. Controls the flash during capture.
- enableShutterSound: boolean. Plays the system shutter sound (iOS only).
- enableAutoStabilization: boolean. Tries to enable image stabilization (can vary by device/OS).
- skipMetadata: boolean. If true, doesn't include EXIF/metadata, which is potentially faster.
- photoCodec (Android): 'jpeg' or 'heic'. Selects the image format. HEIC offers better compression but less compatibility.
Handling the Photo Output
The resolved PhotoFile object contains:
- path: A temporary file path to the captured image (e.g., file:///.../photo.jpg). You must move or copy this file if you need to keep it permanently. Use libraries like react-native-fs or react-native-camera-roll for this.
- width: Width of the photo in pixels.
- height: Height of the photo in pixels.
- isRawPhoto: boolean. Indicates if it's a RAW format photo (if requested and supported).
- metadata: An object containing metadata (EXIF, TIFF, GPS if available and enabled).
- orientation: The orientation the photo should be viewed in.
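Because the path is temporary, a common pattern is to copy the file into the device's media library (or the app's document directory) right after capture. A minimal sketch, assuming @react-native-camera-roll/camera-roll is installed; the exact package and API in your project may differ:

```javascript
import { CameraRoll } from '@react-native-camera-roll/camera-roll';

// Hypothetical helper: persist a freshly captured PhotoFile to the device gallery.
async function savePhotoToGallery(photo) {
  // CameraRoll expects a file:// URI on both platforms.
  const uri = 'file://' + photo.path;
  const savedUri = await CameraRoll.save(uri, { type: 'photo' });
  console.log('Photo saved to gallery at:', savedUri);
  return savedUri;
}
```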
Example: Simple Photo Capture Button
```javascript
import React, { useRef, useState, useEffect } from 'react';
import { View, StyleSheet, Button, Text, Image, Alert, AppState } from 'react-native';
import { Camera, useCameraDevice, useCameraPermission } from 'react-native-vision-camera';
import { useIsFocused } from '@react-navigation/native';

function CameraWithCapture() {
  const { hasPermission, requestPermission } = useCameraPermission();
  const device = useCameraDevice('back');
  const camera = useRef(null);
  const isFocused = useIsFocused();
  const [appState, setAppState] = useState(AppState.currentState);
  const [photoPath, setPhotoPath] = useState(null);

  useEffect(() => {
    if (!hasPermission) {
      requestPermission();
    }
  }, [hasPermission, requestPermission]);

  useEffect(() => {
    const subscription = AppState.addEventListener('change', nextAppState => {
      console.log('App State changed to:', nextAppState);
      setAppState(nextAppState);
      // If the app goes to the background, clear the preview (the camera is deactivated anyway)
      if (nextAppState !== 'active') {
        setPhotoPath(null);
      }
    });
    return () => subscription.remove();
  }, []);

  const isCameraActive = isFocused && appState === 'active';

  const takePicture = async () => {
    if (camera.current == null) return;
    setPhotoPath(null); // Clear previous photo
    console.log("Taking photo...");
    try {
      const photo = await camera.current.takePhoto({
        flash: 'off',
        enableShutterSound: true,
      });
      console.log("Photo captured:", photo.path);
      // IMPORTANT: The path is temporary. For display, prefix with 'file://'
      setPhotoPath('file://' + photo.path);
      // To save permanently, use react-native-fs or react-native-cameraroll
      // Example: await CameraRoll.save(`file://${photo.path}`, { type: 'photo' });
    } catch (error) {
      console.error("Failed to take photo:", error);
      Alert.alert("Capture Error", "Could not take photo.");
    }
  };

  if (!hasPermission) return <Text style={styles.statusText}>Camera permission required.</Text>;
  if (!device) return <Text style={styles.statusText}>No camera device found.</Text>;

  // Display the photo preview if available, otherwise show the camera
  if (photoPath) {
    return (
      <View style={styles.container}>
        <Image source={{ uri: photoPath }} style={StyleSheet.absoluteFill} resizeMode="contain" />
        <View style={styles.buttonContainer}>
          <Button title="Take Another" onPress={() => setPhotoPath(null)} />
        </View>
      </View>
    );
  }

  return (
    <View style={styles.container}>
      <Camera
        ref={camera}
        style={StyleSheet.absoluteFill}
        device={device}
        isActive={isCameraActive}
        photo={true} // Enable photo capture
      />
      <View style={styles.buttonContainer}>
        <Button title="Take Photo" onPress={takePicture} />
      </View>
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1, backgroundColor: 'black' },
  buttonContainer: {
    position: 'absolute',
    bottom: 50,
    alignSelf: 'center',
  },
  statusText: { /* ... same as before ... */ },
});

export default CameraWithCapture;
```
6. Recording Videos
Recording video follows a similar pattern using startRecording and stopRecording.
The startRecording and stopRecording Methods
- camera.current.startRecording(options): Begins video recording. Requires the video={true} prop on <Camera>. You must also typically enable audio (audio={true}) and have microphone permissions.
- camera.current.stopRecording(): Stops the ongoing recording. This returns a Promise that resolves when the video file is processed and saved.
Video Configuration Options (startRecording options)
- flash: 'on', 'off', or 'auto'. Controls the torch during recording.
- fileType: 'mov' (default) or 'mp4'.
- onRecordingFinished: (video: VideoFile) => void. Callback when recording stops successfully. The VideoFile object contains path and duration.
- onRecordingError: (error: CameraCaptureError) => void. Callback if an error occurs during recording.
- videoCodec (Android/iOS): 'h264' (default), 'h265' (HEVC – better compression, potentially less compatible), or ProRes variants (iOS only).
- videoBitRate (Android): 'high', 'low', or a custom number (e.g., 10000000 for 10 Mbps). Controls video quality and file size.
Handling Recording Callbacks
Instead of returning a Promise directly, startRecording uses callbacks defined in its options object.
```javascript
const [isRecording, setIsRecording] = useState(false);
const [videoPath, setVideoPath] = useState(null);

const startVideoRecording = async () => {
  if (camera.current == null || isRecording) return;
  console.log('Starting recording...');
  setIsRecording(true);
  setVideoPath(null); // Clear previous video path

  camera.current.startRecording({
    flash: 'off', // Or 'on'
    fileType: 'mp4',
    onRecordingFinished: (video) => {
      console.log('Recording finished:', video);
      setIsRecording(false);
      // The video object contains path and duration.
      // IMPORTANT: The path is temporary. Save/move it if needed.
      setVideoPath('file://' + video.path);
      // Example: await CameraRoll.save(`file://${video.path}`, { type: 'video' });
    },
    onRecordingError: (error) => {
      console.error('Recording error:', error);
      setIsRecording(false);
      Alert.alert("Recording Error", "Could not record video.");
    },
    // Optional: Specify codec or bitrate
    // videoCodec: 'h265',
    // videoBitRate: 'low' // for Android
  });

  console.log('Recording started command issued.');
  // Note: Actual recording might start asynchronously shortly after this call.
};

const stopVideoRecording = async () => {
  if (camera.current == null || !isRecording) return;
  console.log('Stopping recording...');
  try {
    await camera.current.stopRecording();
    // The onRecordingFinished callback handles the result
    console.log('Stop recording command issued.');
  } catch (error) {
    // This catch block might not be necessary if using the callbacks primarily
    console.error("Failed to issue stop recording command:", error);
    setIsRecording(false); // Ensure state is reset on error
  }
};
```
Example: Simple Video Recording UI
```javascript
// ... (Imports similar to the photo example: React, useRef, useState, useEffect, View, Button, Text, Alert,
//      Camera, useCameraDevice, useCameraPermission, useIsFocused, AppState)
// ... (Permission handling, device selection, and isActive logic same as the photo example)

function CameraWithVideo() {
  // ... (hooks: hasPermission, requestPermission, device, camera, isFocused, appState, isCameraActive)
  const [isRecording, setIsRecording] = useState(false);
  const [videoPath, setVideoPath] = useState(null);
  // ... (useEffect for permissions and AppState same as before)

  const handleRecordButtonPress = () => {
    if (isRecording) {
      stopVideoRecording();
    } else {
      startVideoRecording();
    }
  };

  const startVideoRecording = () => { /* ... as defined above ... */ };
  const stopVideoRecording = async () => { /* ... as defined above ... */ };

  if (!hasPermission) return <Text>Camera and microphone permissions required.</Text>;
  if (!device) return <Text>No camera device found.</Text>;

  // Display a video player/preview if available (requires a video player library like react-native-video)
  if (videoPath) {
    return (
      <View style={styles.container}>
        {/* Replace with an actual Video Player component */}
        <Text style={styles.statusText}>Video Recorded! Path: {videoPath}</Text>
        {/* <Video source={{ uri: videoPath }} style={StyleSheet.absoluteFill} /> */}
        <View style={styles.buttonContainer}>
          <Button title="Record Another" onPress={() => setVideoPath(null)} />
        </View>
      </View>
    );
  }

  return (
    <View style={styles.container}>
      <Camera
        ref={camera}
        style={StyleSheet.absoluteFill}
        device={device}
        isActive={isCameraActive}
        video={true} // Enable video capture
        audio={true} // Enable audio capture (requires microphone permission)
      />
      {isRecording && <Text style={styles.recordingIndicator}>REC</Text>}
      <View style={styles.buttonContainer}>
        <Button
          title={isRecording ? 'Stop Recording' : 'Start Recording'}
          onPress={handleRecordButtonPress}
        />
      </View>
    </View>
  );
}

const styles = StyleSheet.create({
  // ... (container, buttonContainer, statusText same as before)
  recordingIndicator: {
    position: 'absolute',
    top: 20,
    alignSelf: 'center',
    color: 'red',
    fontSize: 18,
    backgroundColor: 'rgba(0,0,0,0.5)',
    padding: 8,
    borderRadius: 5,
  },
});

export default CameraWithVideo;
```

*(Note: Displaying the recorded video requires an additional library like react-native-video.)*
7. Deep Dive into Camera Configuration
Vision Camera offers extensive control over the camera’s hardware features.
Selecting Specific Devices (useCameraDevice)
The useCameraDevice hook takes the desired position ('back', 'front', or 'external') as the first argument. It also accepts an optional second argument, a filter object, to select devices based on criteria:
```javascript
import { useCameraDevice } from 'react-native-vision-camera';

// Get the default back camera
const backDevice = useCameraDevice('back');

// Get the default front camera
const frontDevice = useCameraDevice('front');

// Prefer an ultra-wide-angle back camera if available
const ultraWideDevice = useCameraDevice('back', {
  physicalDevices: [
    'ultra-wide-angle-camera', // Primary preference
    'wide-angle-camera'        // Fallback
  ]
});

// Get all available devices (less common for direct use in components)
// import { Camera } from 'react-native-vision-camera';
// const devices = Camera.getAvailableCameraDevices();
// console.log(devices); // Array of CameraDevice objects
```

The physicalDevices array allows you to specify preferences for cameras like wide-angle-camera, ultra-wide-angle-camera, or telephoto-camera. Check the CameraDevice type definition for all possible values.
Understanding Camera Formats (useCameraFormat)
Each CameraDevice supports multiple formats. A format defines a specific configuration for:
* Resolution (photo and video)
* Frame Rate (FPS) ranges
* Video stabilization modes
* HDR support (photo and video)
* Pixel formats (e.g., NV21, YUV_420_888, PRIVATE)
By default, Vision Camera picks a sensible default format. However, for specific needs (e.g., highest resolution, specific FPS for slow-motion, compatibility with Frame Processors), you need to select a format explicitly.
The useCameraFormat hook helps you find the best format matching your criteria for a given device.
```javascript
import { useCameraDevice, useCameraFormat } from 'react-native-vision-camera';

const device = useCameraDevice('back');

// Example: Find a format that supports 60 FPS
const desiredFps = 60;
const format60fps = useCameraFormat(device, [ // Filter criteria array
  { videoFps: desiredFps },                        // Target 60 FPS for video
  { photoAspectRatio: 16 / 9 },                    // Optional: Target a specific aspect ratio
  { videoStabilizationMode: 'cinematic-extended' } // Optional: Prefer specific stabilization
]);

// Example: Find the format with the highest photo resolution
const maxResFormat = useCameraFormat(device, [
  { photoResolution: 'max' } // Sort by photo resolution descending
]);

// Example: Find a format supporting 10-bit HDR video
const hdrFormat = useCameraFormat(device, [
  { isHighestPhotoQuality: undefined },               // Don't prioritize photo quality here
  { supportsVideoHDR: true },
  { videoResolution: { width: 1920, height: 1080 } }  // Optional: specific resolution
]);

console.log("Selected 60 FPS format:", format60fps);
console.log("Selected Max Res format:", maxResFormat);
console.log("Selected HDR format:", hdrFormat);

// Then pass the selected format to the <Camera> component:
// <Camera device={device} format={format60fps} isActive={isCameraActive} ... />
```
Filtering Formats
The useCameraFormat hook takes the CameraDevice and an array of filter objects. It iterates through the device's available formats and finds the best match based on these filters, applied in order. Common filter properties include:
- photoResolution: { width: number, height: number } or 'max'
- videoResolution: { width: number, height: number } or 'max'
- photoAspectRatio: number (e.g., 16/9, 4/3)
- videoAspectRatio: number
- videoFps: number
- videoStabilizationMode: 'off', 'standard', 'cinematic', 'cinematic-extended'
- supportsPhotoHDR: boolean
- supportsVideoHDR: boolean
- pixelFormat: 'yuv', 'rgb', 'native' (for Frame Processors)
- isHighestPhotoQuality: boolean (sorts by internal quality ranking)

Consult the Vision Camera documentation for the full list of FormatFilter properties. Choosing the right format is crucial for balancing features, performance, and compatibility (especially with Frame Processors).
Setting Frame Rate (FPS)
While a format defines supported FPS ranges, you can specify the desired FPS using the fps prop on the <Camera> component. This value must fall within the range supported by the selected format.

```javascript
const device = useCameraDevice('back');
// Find a format that CAN support 60 FPS
const format = useCameraFormat(device, [{ videoFps: 60 }]);

// If a suitable format is found:
// <Camera device={device} format={format} fps={60} isActive={isCameraActive} />
```

If you don't specify fps, it usually defaults to 30, or the format's maximum if that is lower.
Enabling High Dynamic Range (HDR)
To enable HDR for photo or video capture, you first need to select a format that supports it (supportsPhotoHDR: true or supportsVideoHDR: true). Then, enable it using props on the <Camera> component:
```javascript
const device = useCameraDevice('back');
// Find a format supporting HDR video
const hdrFormat = useCameraFormat(device, [{ supportsVideoHDR: true }]);
```
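The prop that actually switches HDR on has changed names across Vision Camera releases (hdr in older versions, photoHdr/videoHdr in newer ones), so check the docs for the version you have installed. A minimal sketch, assuming a version that exposes videoHdr:

```javascript
// Illustrative only – verify the exact HDR prop name for your Vision Camera version.
<Camera
  device={device}
  format={hdrFormat}
  videoHdr={hdrFormat != null} // Only enable when an HDR-capable format was found
  isActive={isCameraActive}
  style={StyleSheet.absoluteFill}
/>
```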
Note that HDR support and capabilities vary significantly between devices and platforms.
Low Light Boost
Some devices offer a low-light enhancement mode. You can try enabling it with the lowLightBoost prop:

```javascript
<Camera
  device={device}
  format={format}
  lowLightBoost={device.supportsLowLightBoost} // Enable if the device supports it
  isActive={isCameraActive}
  // ...
/>
```

Check device.supportsLowLightBoost before setting the prop.
Controlling Orientation
The orientation prop locks the camera's output orientation, regardless of the device's physical orientation. This affects the preview, photos, and videos.

```javascript
<Camera
  device={device}
  format={format}
  orientation="portrait" // Or 'landscape-left', 'landscape-right', 'portrait-upside-down'
  isActive={isCameraActive}
  // ...
/>
```
This is useful if your app UI is locked to a specific orientation. If not set, the output orientation usually follows the device’s orientation.
Audio Input Selection
If multiple microphones are available (e.g., built-in, headphones), you might be able to select one, although this is less commonly exposed directly via Vision Camera props compared to device selection. Generally, ensure the audio={true} prop is set when needed for video recording. Specific microphone selection might require native module customization or might be handled automatically by the OS.
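As a reference point, wiring up audio usually means requesting microphone permission and setting the audio prop. A minimal sketch, assuming your Vision Camera version exposes a useMicrophonePermission hook (newer releases do; otherwise use the Camera static methods shown earlier):

```javascript
import React from 'react';
import { Camera, useCameraDevice, useMicrophonePermission } from 'react-native-vision-camera';

function CameraWithAudio({ isCameraActive }) {
  const device = useCameraDevice('back');
  const { hasPermission: hasMicPermission } = useMicrophonePermission();

  if (!device) return null;

  return (
    <Camera
      device={device}
      isActive={isCameraActive}
      video={true}
      audio={hasMicPermission} // Only enable audio once the microphone permission is granted
      style={{ flex: 1 }}
    />
  );
}
```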
8. Frame Processors: Real-time Frame Analysis
Frame Processors are Vision Camera’s most powerful and distinguishing feature, enabling real-time video frame analysis directly in JavaScript.
What are Frame Processors?
A Frame Processor is a JavaScript function that you provide to the <Camera> component. Vision Camera executes this function for every single frame captured by the camera, passing a Frame object as an argument. This allows you to inspect or analyze the image data in real time.
The useFrameProcessor Hook
You define your frame processing logic within a function and pass it to the useFrameProcessor hook. This hook takes care of registering the function with the camera view.
```javascript
import { useFrameProcessor, Frame } from 'react-native-vision-camera';
import { runOnJS } from 'react-native-reanimated'; // To call back to React state

// Inside your component:
const frameProcessor = useFrameProcessor((frame: Frame) => {
  'worklet'; // MUST include this directive for performance!

  // Access frame data (read-only recommended in a worklet)
  const width = frame.width;
  const height = frame.height;
  const timestamp = frame.timestamp;     // Presentation timestamp in nanoseconds
  const orientation = frame.orientation; // 'portrait', 'landscape-left', etc.
  const pixelFormat = frame.pixelFormat; // e.g., 'native'

  // IMPORTANT: Heavy computation here will block the camera pipeline!
  // Keep worklet logic extremely fast or delegate to native modules/plugins.
  console.log(`Frame: ${width}x${height} @ ${timestamp}, Orientation: ${orientation}, Format: ${pixelFormat}`);

  // Example: Calling back to React state (use runOnJS)
  // Only do this sparingly, as it crosses threads.
  // if (someCondition) {
  //   runOnJS(setDetectedObject)({ /* object details */ });
  // }
}, [/* dependencies array for the worklet */]);

// Then pass the processor to the Camera component:
// <Camera device={device} isActive={isCameraActive} frameProcessor={frameProcessor} ... />
```
Understanding the Frame Object
The Frame object passed to your processor function contains metadata about the current frame:
- width, height: Dimensions in pixels.
- bytesPerRow: The number of bytes in a single row of the image buffer.
- planes: Information about color planes (for YUV formats).
- timestamp: Presentation timestamp (nanoseconds).
- orientation: Physical orientation of the frame data relative to the sensor.
- isMirrored: Whether the frame is mirrored (common for front cameras).
- pixelFormat: The format of the pixel data ('native', 'yuv', 'rgb').
- frameId: A unique identifier for the frame.
- toArrayBuffer(): A method to get the frame's pixel data as an ArrayBuffer. Use with extreme caution! Accessing the buffer directly in JS is often slow and can block the UI thread if not handled carefully within a worklet or offloaded. Most common use cases (QR scanning, face detection) rely on specialized plugins that process the frame natively.
The Role of JSI (JavaScript Interface)
Frame Processors are efficient because of JSI. Unlike the old asynchronous bridge, JSI allows synchronous, direct calls between C++ (where the camera frames live) and JavaScript. Vision Camera uses JSI to invoke your JavaScript frame processor function directly from the native camera thread with minimal overhead.
Introducing Worklets and react-native-reanimated
While JSI makes the call fast, executing complex JavaScript code for every frame (potentially 30-60 times per second) on the main JS thread would still freeze your UI.
This is where Worklets from react-native-reanimated
(v2/v3) come in. A worklet is a special type of JavaScript function marked with the 'worklet';
directive. Reanimated compiles these worklets so they can run on a separate, dedicated JavaScript thread (the “UI thread” in Reanimated’s context, but separate from the main React Native JS thread).
Crucially, Vision Camera executes frame processors declared as worklets synchronously on the same thread that delivers the camera frames (a high-priority background thread), bypassing the main React Native JS thread entirely. This prevents UI freezes and allows for high-performance real-time processing.
Therefore, almost all non-trivial Frame Processors MUST be declared as worklets using the 'worklet'; directive.
Why Worklets are Crucial for Performance
- Avoids Blocking Main JS Thread: Prevents UI freezes and keeps your app responsive.
- Synchronous Execution: Runs directly when the frame is available, minimizing latency.
- Shared Memory (Conceptually): Worklets can efficiently access native data (like the Frame object) passed via JSI without expensive serialization across bridges.
Example: Simple Frame Logging (Worklet)
```javascript
// Inside your component:
const frameProcessor = useFrameProcessor((frame: Frame) => {
  'worklet'; // <<< IMPORTANT!
  // Logging directly inside a worklet is possible but can impact performance.
  // Use sparingly for debugging.
  console.log(`Worklet – Frame: ${frame.width}x${frame.height} @ ${frame.timestamp}`);
}, []);
```
Example: Basic QR/Barcode Scanning
Frame Processors are ideal for integrating native ML/vision libraries. Several plugins exist for common tasks.
1. Install the Plugin:
```bash
npm install vision-camera-code-scanner
# or
yarn add vision-camera-code-scanner

cd ios && pod install && cd .. # Install native dependencies
```
2. Use the Scanner in a Frame Processor:
```javascript
import React, { useState } from 'react';
import { View, Text, StyleSheet, Alert } from 'react-native';
import { Camera, useCameraDevice, useFrameProcessor, Frame } from 'react-native-vision-camera';
// Import scanner hook and types
import { useScanBarcodes, BarcodeFormat, Barcode } from 'vision-camera-code-scanner';
import { runOnJS } from 'react-native-reanimated';

function QRCodeScannerScreen() {
  const device = useCameraDevice('back');
  const [barcodeData, setBarcodeData] = useState<Barcode[]>([]);

  // Hook from the scanner plugin.
  // It returns the frame processor function set up for scanning.
  const [scanBarcodes, barcodes] = useScanBarcodes(
    [BarcodeFormat.QR_CODE, BarcodeFormat.CODE_128], // Specify formats to scan for
    { checkInverted: true } // Options
  );

  // Function to be called safely from the worklet using runOnJS
  const handleBarcodeDetected = (detectedBarcodes: Barcode[]) => {
    if (detectedBarcodes.length > 0) {
      console.log('Detected Barcodes (JS Thread):', detectedBarcodes);
      // Update React state – only update if the data changes to avoid excessive re-renders
      const currentData = JSON.stringify(barcodeData);
      const newData = JSON.stringify(detectedBarcodes);
      if (currentData !== newData) {
        setBarcodeData(detectedBarcodes);
        // Optionally, show an Alert or navigate
        // Alert.alert('Barcode Scanned', detectedBarcodes[0].displayValue || 'No display value');
      }
    } else {
      // If no barcodes are detected in this frame, clear the state
      if (barcodeData.length > 0) {
        setBarcodeData([]);
      }
    }
  };

  // Frame processor that runs the barcode scanning
  const frameProcessor = useFrameProcessor((frame: Frame) => {
    'worklet';
    // Call the native scanning function provided by the hook
    const detectedBarcodes = scanBarcodes(frame);
    // Use runOnJS to update React state on the main JS thread
    runOnJS(handleBarcodeDetected)(detectedBarcodes);
  }, [scanBarcodes, barcodeData]); // Add barcodeData to dependencies if logic inside depends on it

  if (!device) return <Text>No camera device found.</Text>;

  return (
    <View style={styles.container}>
      <Camera
        style={StyleSheet.absoluteFill}
        device={device}
        isActive={true}
        frameProcessor={frameProcessor}
      />
      <View style={styles.barcodeResultContainer}>
        {barcodeData.length > 0 ? (
          barcodeData.map((barcode, idx) => (
            <Text key={idx} style={styles.barcodeText}>
              Format: {barcode.format}, Value: {barcode.displayValue}
            </Text>
          ))
        ) : (
          <Text style={styles.barcodeText}>Point the camera at a QR code or barcode.</Text>
        )}
      </View>
      {/* You could add bounding boxes here based on barcode.cornerPoints */}
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1, backgroundColor: 'black' },
  barcodeResultContainer: {
    position: 'absolute',
    bottom: 20,
    left: 20,
    right: 20,
    backgroundColor: 'rgba(255, 255, 255, 0.8)',
    padding: 10,
    borderRadius: 5,
  },
  barcodeText: {
    fontSize: 14,
    color: 'black',
  },
});

export default QRCodeScannerScreen;
```
Key Points for Plugin Integration:
- Install Plugin: Add the specific Frame Processor plugin (vision-camera-code-scanner, vision-camera-face-detector, etc.).
- Import Hooks/Functions: Import the necessary functions or hooks provided by the plugin.
- Call Plugin Logic in Worklet: Inside your useFrameProcessor worklet, call the function provided by the plugin (e.g., scanBarcodes(frame)). This function typically performs the heavy lifting natively.
- Handle Results: Use runOnJS to safely pass results from the worklet back to your React component's state or functions on the main JS thread.
- Check pixelFormat: Frame Processor plugins might require a specific pixelFormat prop on the <Camera> component (e.g., 'yuv', 'rgb', or 'native'). Check the plugin's documentation.
- Control frameProcessorFps: If real-time processing isn't needed at the full camera frame rate, set frameProcessorFps (e.g., 5) on the <Camera> to reduce CPU load. The processor will only be called at approximately this rate (see the sketch after this list).
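To make the last two points concrete, here is a minimal sketch of how those props fit together on the <Camera> element. Prop availability (pixelFormat, frameProcessorFps) depends on your Vision Camera version, so treat the names as illustrative and check the plugin's docs:

```javascript
// Illustrative only – confirm prop names against the Vision Camera version you use.
<Camera
  style={StyleSheet.absoluteFill}
  device={device}
  isActive={isCameraActive}
  frameProcessor={frameProcessor}
  frameProcessorFps={5}   // Run the processor ~5 times per second instead of on every frame
  pixelFormat="yuv"       // Match whatever pixel format your plugin expects
/>
```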
Example: Basic Face Detection
Similar to barcode scanning, you can use plugins for face detection.
1. Install Plugin:
```bash
npm install vision-camera-face-detector
# or
yarn add vision-camera-face-detector

cd ios && pod install && cd ..
```
(Note: Face detection plugins might require additional setup like downloading ML models, e.g., via ML Kit on Android. Check plugin docs.)
2. Use Face Detector in Frame Processor:
```javascript
import React, { useState } from 'react';
import { View, Text, StyleSheet } from 'react-native';
import { Camera, useCameraDevice, useFrameProcessor, Frame } from 'react-native-vision-camera';
// Import face detector hook and types
import { useFaceDetector, Face } from 'vision-camera-face-detector';
import { runOnJS } from 'react-native-reanimated';

function FaceDetectorScreen() {
  const device = useCameraDevice('front'); // Use the front camera for faces
  const [faces, setFaces] = useState<Face[]>([]);

  // Hook from the face detector plugin.
  // Configure detector options (performance mode, classifications, landmarks).
  const faceDetector = useFaceDetector({
    performanceMode: 'fast',   // Or 'accurate'
    classificationMode: 'all', // Detect smiling, eyes open
    // landmarkMode: 'all',    // Detect facial landmarks (eyes, mouth, etc.) – can be heavy
  });

  const handleFacesDetected = (detectedFaces: Face[]) => {
    // console.log('Detected Faces (JS Thread):', detectedFaces);
    // Avoid stringify in production for performance if possible
    const currentData = JSON.stringify(faces.map(f => f.bounds)); // Compare bounds for simplicity
    const newData = JSON.stringify(detectedFaces.map(f => f.bounds));
    if (currentData !== newData) {
      setFaces(detectedFaces);
    } else if (detectedFaces.length === 0 && faces.length > 0) {
      // Clear faces if none are detected and we previously had some
      setFaces([]);
    }
  };

  // Frame processor that runs face detection
  const frameProcessor = useFrameProcessor((frame: Frame) => {
    'worklet';
    const detectedFaces = faceDetector.scanFaces(frame);
    runOnJS(handleFacesDetected)(detectedFaces);
  }, [faceDetector, faces]); // Dependency array

  if (!device) return <Text>No front camera device found.</Text>;

  return (
    <View style={styles.container}>
      <Camera
        style={StyleSheet.absoluteFill}
        device={device}
        isActive={true}
        frameProcessor={frameProcessor}
      />
      {/* Draw bounding boxes or display face info */}
      {faces.map((face, index) => (
        <View
          key={index}
          // The raw bounds are in frame coordinates; transform them before using them for layout.
          style={styles.faceBounds}
        >
          {/* Display info like smile probability */}
          {face.smilingProbability != null && face.smilingProbability > 0.7 && (
            <Text style={styles.faceText}>Smiling</Text>
          )}
          {face.leftEyeOpenProbability != null && face.leftEyeOpenProbability < 0.3 &&
            face.rightEyeOpenProbability != null && face.rightEyeOpenProbability < 0.3 && (
            <Text style={styles.faceText}>Eyes Closed</Text>
          )}
        </View>
      ))}
      {faces.length > 0 && <Text style={styles.faceCount}>Faces detected: {faces.length}</Text>}
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1, backgroundColor: 'black' },
  faceBounds: {
    position: 'absolute',
    borderWidth: 2,
    borderColor: 'lime',
    // Coordinate transformations needed here! The styling is illustrative only.
  },
  faceText: {
    color: 'lime',
    fontSize: 10,
    position: 'absolute',
    bottom: -15, // Position relative to the box
  },
  faceCount: {
    position: 'absolute',
    top: 20,
    alignSelf: 'center',
    color: 'white',
    fontSize: 16,
    backgroundColor: 'rgba(0,0,0,0.6)',
    padding: 8,
    borderRadius: 5,
  },
});

export default FaceDetectorScreen;
```
(Coordinate transformation for bounding boxes is non-trivial and depends on frame orientation vs. view orientation. Libraries or helper functions might be needed for accurate drawing.)
Performance Considerations for Frame Processors
- Use Worklets: Always use 'worklet';.
- Keep Worklets Fast: Avoid complex calculations, heavy loops, or large data manipulation directly inside the worklet JS code.
- Delegate to Native: For intensive tasks (ML inference, complex image processing), use dedicated plugins that perform the work natively. The worklet should only call the native function and handle the result.
- Use runOnJS Sparingly: Calling back to the main JS thread via runOnJS has overhead. Only do it when necessary (e.g., updating UI state) and batch updates if possible. Avoid calling it on every single frame if the data doesn't change frequently.
- Throttle with frameProcessorFps: If you don't need analysis on every single frame, reduce the processing frequency using the frameProcessorFps prop on <Camera>.
- Choose an Efficient pixelFormat: Select the pixelFormat required by your plugin. 'native' or 'yuv' are often efficient choices; 'rgb' might involve color space conversions, adding overhead.
- Beware toArrayBuffer(): Directly accessing the pixel buffer via frame.toArrayBuffer() and processing it in JS (even in a worklet) can be slow due to data copying or marshalling. Native plugins are usually much faster for buffer access.
Limitations and Best Practices
- Frame Processors run on a high-priority thread; blocking them can stall the camera.
- Directly manipulating the Frame buffer from JS is discouraged for performance reasons.
- Debugging worklets can be challenging. Use console.log sparingly or rely on runOnJS to pass debug info back to the main thread.
- Ensure react-native-reanimated is correctly installed and configured.
9. Advanced Camera Features
Beyond basic capture and frame processing, Vision Camera offers control over other hardware features.
Zooming
Vision Camera supports both optical (using telephoto lenses if available) and digital zoom.
- zoom Prop: You can set the zoom level directly as a prop on the <Camera> component. The value is a factor (e.g., 1 = no zoom, 2 = 2x zoom). See the sketch after this list.

```javascript
const [zoomLevel, setZoomLevel] = useState(1);
const device = useCameraDevice('back');

// Check the max zoom level supported by the device/format
const maxZoom = device?.maxZoom ?? 1;
// minZoom is usually device?.minZoom ?? 1

// Drive zoomLevel from a UI element (e.g., a Slider)
```

- enableZoomGesture Prop: Setting this to true enables a built-in pinch-to-zoom gesture handler that automatically adjusts the zoom level.
- Device Properties: The CameraDevice object provides information about zoom capabilities:
  - minZoom: Minimum zoom factor (usually 1).
  - maxZoom: Maximum zoom factor (can be high, e.g., 10, 50, or more, often including digital zoom).
  - neutralZoom: The zoom factor where the camera uses its default lens without digital upscaling (often 1, but it can differ on multi-lens devices).
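Putting these pieces together, here is a small sketch of driving the zoom prop from state while clamping to the device's supported range (the clampZoom helper is illustrative, not part of the library):

```javascript
// Clamp the requested zoom factor to what the device actually supports.
const clampZoom = (value, device) => {
  const min = device?.minZoom ?? 1;
  const max = device?.maxZoom ?? 1;
  return Math.min(Math.max(value, min), max);
};

// In the render:
// <Camera
//   device={device}
//   isActive={isCameraActive}
//   zoom={clampZoom(zoomLevel, device)}
//   enableZoomGesture={true} // Optional built-in pinch-to-zoom
// />
```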
Focusing
- Tap-to-Focus: You can implement tap-to-focus by capturing touch events on the camera view and calling the focus method on the camera ref.

```javascript
import { Pressable, View, StyleSheet } from 'react-native'; // Or TouchableOpacity

const camera = useRef(null);

const onFocusTap = async (event) => { // event: GestureResponderEvent
  if (camera.current && device?.supportsFocus) {
    try {
      await camera.current.focus({
        x: event.nativeEvent.locationX,
        y: event.nativeEvent.locationY,
      });
      console.log('Focus successful');
    } catch (error) {
      console.error('Focus failed:', error);
    }
  }
};

return (
  <View style={StyleSheet.absoluteFill}>
    <Camera ref={camera} style={StyleSheet.absoluteFill} device={device} isActive={isCameraActive} />
    {/* Transparent layer over the preview to capture taps; Pressable gives better feedback and event handling */}
    <Pressable style={StyleSheet.absoluteFill} onPress={onFocusTap} />
    {/* Other UI elements */}
  </View>
);
```

Make sure the device.supportsFocus property is true before attempting to call focus.

- focus Method: camera.current.focus({ x: number, y: number }) triggers autofocus at the specified point in the camera view's coordinate system.
Controlling the Torch
The torch (the camera flash used as a continuous light) can be controlled via the torch prop.

```javascript
const [torchOn, setTorchOn] = useState(false);

// Check if the device has a torch
const hasTorch = device?.hasTorch ?? false;

// Button to toggle the torch state, then:
// <Camera device={device} torch={torchOn ? 'on' : 'off'} isActive={isCameraActive} ... />
```

Ensure the device actually has a torch (device.hasTorch) before trying to enable it.
Taking Snapshots (takeSnapshot method)
Distinct from takePhoto, takeSnapshot captures the current content of the camera's preview view, including any ongoing effects or drawings from Frame Processors if they modify the preview buffer directly (less common). It's generally faster than takePhoto but might have lower quality or different processing applied.
```javascript
const takePreviewSnapshot = async () => {
  if (camera.current == null) return;
  console.log("Taking snapshot...");
  try {
    const snapshot = await camera.current.takeSnapshot({
      quality: 85,        // 0-100 quality percentage
      skipMetadata: true, // Snapshots usually don't have full metadata
    });
    console.log("Snapshot captured:", snapshot);
    // The snapshot object contains: path, width, height
    // Use 'file://' + snapshot.path for display
  } catch (error) {
    console.error("Failed to take snapshot:", error);
  }
};
```
10. UI Overlays and Customization
Creating a custom camera UI involves overlaying standard React Native components (Buttons, Views, Icons) on top of the <Camera> view.
Positioning Buttons and UI Elements
Use absolute positioning to place controls over the camera preview.
```javascript
import { View, TouchableOpacity, StyleSheet } from 'react-native';
import Icon from 'react-native-vector-icons/MaterialCommunityIcons'; // Example icon library

function CustomCameraUI({ onCapturePress, onSwitchCameraPress }) {
  return (
    <View style={styles.overlayContainer} pointerEvents="box-none">
      {/* Top Controls */}
      <View style={styles.topControls}>
        {/* Add buttons for flash, settings, etc. */}
      </View>

      {/* Bottom Controls */}
      <View style={styles.bottomControls}>
        <TouchableOpacity style={styles.controlButton} onPress={onSwitchCameraPress}>
          <Icon name="camera-switch-outline" size={30} color="white" />
        </TouchableOpacity>

        <TouchableOpacity style={styles.captureButton} onPress={onCapturePress}>
          {/* Outer circle */}
          <View style={styles.captureButtonInner} /> {/* Inner circle */}
        </TouchableOpacity>

        {/* Placeholder for gallery preview or other controls */}
        <View style={styles.controlButton} />
      </View>
    </View>
  );
}

const styles = StyleSheet.create({
  overlayContainer: {
    position: 'absolute',
    top: 0,
    left: 0,
    right: 0,
    bottom: 0,
    justifyContent: 'space-between', // Pushes controls top/bottom
  },
  topControls: {
    paddingTop: 40, // Adjust for status bar/notches
    paddingHorizontal: 20,
    flexDirection: 'row',
    justifyContent: 'space-between',
  },
  bottomControls: {
    paddingBottom: 50, // Safe area padding
    paddingHorizontal: 30,
    flexDirection: 'row',
    justifyContent: 'space-between',
    alignItems: 'center',
  },
  controlButton: {
    width: 50,  // Example size
    height: 50, // Example size
    justifyContent: 'center',
    alignItems: 'center',
    // backgroundColor: 'rgba(0,0,0,0.3)', // Optional background
    // borderRadius: 25,
  },
  captureButton: {
    width: 70,
    height: 70,
    borderRadius: 35,
    backgroundColor: 'rgba(255, 255, 255, 0.4)',
    justifyContent: 'center',
    alignItems: 'center',
    borderWidth: 3,
    borderColor: 'white',
  },
  captureButtonInner: {
    width: 55,
    height: 55,
    borderRadius: 27.5,
    backgroundColor: 'white',
  },
});

// In your main camera screen component:
// <View style={{ flex: 1 }}>
//   <Camera style={StyleSheet.absoluteFill} device={device} isActive={isCameraActive} ... />
//   <CustomCameraUI onCapturePress={takePicture} onSwitchCameraPress={switchCamera} />
// </View>
```
Creating Custom Camera Interfaces
Combine various controls (buttons for capture, video toggle, flash mode, camera switch, gallery access), indicators (recording status, zoom level), and potentially gesture responders (PanResponder, or GestureDetector from react-native-gesture-handler) to build a fully custom camera experience tailored to your app's needs.
Displaying Frame Processor Results
If your Frame Processor detects objects, faces, or barcodes, you can draw overlays (like bounding boxes) on top of the camera view.
- Get Data: Use runOnJS to pass detection results (coordinates, dimensions, classification) back to your React component's state.
- Coordinate Transformation: This is the trickiest part. The coordinates returned by the Frame Processor (e.g., face.bounds) are relative to the Frame's coordinate system. You need to translate them into the coordinate system of your overlay View, which sits on top of the <Camera> component. This involves considering:
  - The Frame dimensions (frame.width, frame.height).
  - The Frame orientation (frame.orientation).
  - The <Camera> component's layout dimensions on screen.
  - The screen orientation.
  - Whether the frame is mirrored (frame.isMirrored).
  - The resizeMode of the camera preview (if applicable, though usually 'cover').
  Vision Camera provides some utilities or examples for this, but it often requires careful calculation. Look for helper functions in the documentation or community examples; a simplified sketch follows this list.
- Render Overlays: Use absolutely positioned <View> elements with borders or other styling to draw the bounding boxes based on the transformed coordinates.
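As a starting point, here is a simplified, illustrative helper that scales a frame-space rectangle into view coordinates for a 'cover'-style preview. It ignores rotation and assumes the frame and the view share the same orientation, so treat it as a sketch rather than a complete solution:

```javascript
// Illustrative only: maps a rect from frame coordinates to view coordinates,
// assuming a 'cover' preview, matching orientations, and no rotation.
function frameRectToViewRect(rect, frame, viewWidth, viewHeight, isMirrored) {
  // 'cover' scales uniformly until both dimensions are filled, then crops the overflow.
  const scale = Math.max(viewWidth / frame.width, viewHeight / frame.height);
  const offsetX = (frame.width * scale - viewWidth) / 2;
  const offsetY = (frame.height * scale - viewHeight) / 2;

  let x = rect.x * scale - offsetX;
  const y = rect.y * scale - offsetY;
  const width = rect.width * scale;
  const height = rect.height * scale;

  // Front-camera frames are often mirrored; flip horizontally if so.
  if (isMirrored) {
    x = viewWidth - x - width;
  }
  return { x, y, width, height };
}

// Usage (illustrative):
// style={{ position: 'absolute', left: box.x, top: box.y, width: box.width, height: box.height }}
```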
11. Error Handling and Lifecycle Management
Robust camera integration requires proper error handling and managing the camera’s lifecycle.
The onError Prop
The <Camera> component has an onError prop, a callback function triggered for unexpected, asynchronous errors within the camera session (e.g., hardware issues, configuration conflicts, runtime errors).
```javascript
const handleCameraError = (error: CameraRuntimeError) => {
  console.error("Camera Runtime Error:", error.message, error.code, error.cause);
  // Example: Show an alert or fallback UI
  Alert.alert("Camera Error", `${error.message} (Code: ${error.code})`);
  // Potentially try to re-initialize the camera or navigate away
};

// <Camera device={device} isActive={isCameraActive} onError={handleCameraError} ... />
```

Check the CameraRuntimeError type for specific error codes and potential causes.
Handling Errors from Methods
Methods like takePhoto, startRecording, stopRecording, and focus are asynchronous and return Promises. Always wrap calls to these methods in try...catch blocks or use .catch() on the Promise chain to handle potential rejections.
```javascript
// Assumes `camera` is a ref to the <Camera> component and that
// CameraCaptureError is imported from 'react-native-vision-camera'.
try {
  const photo = await camera.current.takePhoto();
  // ... process photo
} catch (error) {
  // Check if error is a CameraCaptureError or other type
  if (error instanceof CameraCaptureError) {
    console.error(`Capture Error (${error.code}):`, error.message, error.cause);
    // Handle specific codes: 'capture/device-busy', 'capture/file-io-error', etc.
  } else {
    console.error("Generic capture error:", error);
  }
  Alert.alert("Capture Failed", "Could not take photo. Please try again.");
}
```
Managing Camera State with `isActive` and Navigation
As emphasized before, correctly managing the `isActive` prop is vital.
* Navigation: Use focus listeners (like `useIsFocused`) to set `isActive={true}` only when the camera screen is visible. Set it to `false` when navigating away.
* App State: Use the `AppState` listener to set `isActive={false}` when the app goes to the background (`'background'` or `'inactive'`) and back to `true` (if the screen is also focused) when it becomes `'active'`.
Failure to manage `isActive` properly leads to unnecessary battery drain and potential resource conflicts.
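A minimal sketch that combines both signals into a single hook (assumes `@react-navigation/native` for `useIsFocused`; the hook name itself is illustrative):

```typescript
// Sketch: derive `isActive` from navigation focus + AppState.
import { useEffect, useState } from 'react';
import { AppState } from 'react-native';
import { useIsFocused } from '@react-navigation/native';

export function useIsCameraActive(): boolean {
  const isFocused = useIsFocused();
  const [appState, setAppState] = useState(AppState.currentState);

  useEffect(() => {
    const subscription = AppState.addEventListener('change', setAppState);
    return () => subscription.remove();
  }, []);

  // The camera should only run when this screen is focused AND the app is foregrounded.
  return isFocused && appState === 'active';
}

// Usage: <Camera isActive={useIsCameraActive()} ... />
```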
12. Performance Tips and Best Practices
- `isActive` is Key: Only keep the camera active (`isActive={true}`) when absolutely necessary (screen focused, app in foreground).
- Choose Formats Wisely:
  - Don't request the absolute maximum resolution/FPS format unless required. Lower resolutions/FPS consume less power and bandwidth.
  - Select formats compatible with your Frame Processors (check plugin docs for required `pixelFormat`).
- Optimize Frame Processors (see the sketch after this list):
  - Use Worklets (`'worklet';`).
  - Keep worklet logic minimal; delegate heavy tasks to native plugins.
  - Throttle frequency with `frameProcessorFps` if full frame rate processing isn't needed.
  - Minimize data transfer back to the main JS thread (`runOnJS`).
- Efficient Data Handling:
  - Move temporary photo/video files (`photo.path`, `video.path`) to permanent storage quickly if needed, rather than holding onto temporary paths.
  - Avoid processing large raw buffers (`toArrayBuffer()`) in JS if possible.
- Native Modules for Heavy Lifting: For custom, complex image analysis beyond existing plugins, consider writing your own native module (or TurboModule) that can be called from a Frame Processor worklet.
- Profile: Use React Native's performance monitoring tools (Flipper, built-in profiler) and native platform tools (Xcode Instruments, Android Studio Profiler) to identify bottlenecks in your camera implementation, especially around Frame Processors.
- Test on Devices: Performance varies greatly between devices. Test on a range of target Android and iOS devices.
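As a rough illustration of the Frame Processor guidelines above, the sketch below keeps the worklet lean, leaves the heavy work to the `vision-camera-code-scanner` plugin, and crosses back to the JS thread only when there is something to report. Treat it as a guide rather than a drop-in implementation (plugin option and field names can vary between versions):

```typescript
// Sketch: a lean, throttle-friendly frame processor.
import { useState } from 'react';
import { useFrameProcessor } from 'react-native-vision-camera';
import { runOnJS } from 'react-native-reanimated';
import { scanBarcodes, BarcodeFormat } from 'vision-camera-code-scanner';

export function useQrScanner() {
  const [codes, setCodes] = useState<string[]>([]);

  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'; // must be the first statement inside the worklet
    // Heavy lifting happens in the native plugin, not in JS.
    const barcodes = scanBarcodes(frame, [BarcodeFormat.QR_CODE]);
    // Minimize runOnJS traffic: only call back when there is a result.
    if (barcodes.length > 0) {
      runOnJS(setCodes)(barcodes.map((b) => b.displayValue ?? ''));
    }
  }, []);

  return { codes, frameProcessor };
}

// Usage, throttled to 5 analyses per second:
// <Camera frameProcessor={frameProcessor} frameProcessorFps={5} ... />
```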
13. Migrating from `react-native-camera` (Brief Overview)
If you're migrating from the older `react-native-camera` library, here are some key differences:
- API: Vision Camera uses a hook-based API (`useCameraDevice`, `useCameraFormat`, etc.) and a single `<Camera>` component, whereas `react-native-camera` used props directly on its `<RNCamera>` component for many configurations, with methods accessed via a ref.
- Performance: Vision Camera is generally significantly faster due to JSI and other optimizations.
- Frame Processors: This is the major conceptual shift. Real-time analysis in Vision Camera is done via Frame Processors and worklets, offering more power and flexibility than `react-native-camera`'s built-in (and often slower) text/barcode/face recognition props (`onTextRecognized`, `onBarCodeRead`, `onFacesDetected`). You'll need to replace these with Frame Processor plugins.
- Configuration: Format selection (`useCameraFormat`) provides more granular control than `react-native-camera`'s resolution/preset props.
- Maintenance: Vision Camera is actively maintained, while `react-native-camera`'s maintenance status has been less consistent.
Migration involves replacing the `<RNCamera>` component, adapting the prop usage (using hooks and specific props like `device`, `format`, `fps`), rewriting capture logic to use the new async methods (`takePhoto`, `startRecording`/`stopRecording`), and implementing real-time analysis features using Frame Processors and appropriate plugins. A small before/after capture example is sketched below.
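For instance, the basic photo-capture call changes roughly as follows (a simplified sketch; both snippets omit permission handling, lifecycle management, and error handling):

```tsx
// Before (react-native-camera): method on the <RNCamera> ref.
// const data = await rncameraRef.current.takePictureAsync({ quality: 0.8 });
// console.log(data.uri);

// After (react-native-vision-camera): method on the <Camera> ref.
import type { RefObject } from 'react';
import type { Camera } from 'react-native-vision-camera';

async function capture(camera: RefObject<Camera>) {
  const photo = await camera.current?.takePhoto({
    flash: 'off', // photo options replace many of the old RNCamera props
  });
  console.log(photo?.path); // note: `path`, not `uri`
}
```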
14. Troubleshooting Common Issues
- Black Screen:
  - Permissions: Ensure camera permissions are granted. Check `Info.plist`/`AndroidManifest.xml` and the runtime request logic.
  - `isActive` Prop: Verify `isActive` is `true` when the screen is focused and the app is active. Log its value.
  - `device` Prop: Ensure `useCameraDevice` is returning a valid device object and it's passed to `<Camera>`. Log the device ID.
  - Native Errors: Check native logs (Xcode console / Android Logcat) for underlying errors (e.g., configuration conflicts, hardware access issues). Use the `onError` prop.
  - Build Issues: Ensure the library is correctly linked and native dependencies (pods/gradle) are installed. Try cleaning builds (`pod install`, `./gradlew clean`).
- Permission Errors:
  - Double-check `Info.plist` (`NSCameraUsageDescription`, `NSMicrophoneUsageDescription`) and `AndroidManifest.xml` (`<uses-permission>`).
  - Ensure you are correctly calling `Camera.requestCameraPermission()`/`Camera.requestMicrophonePermission()` or using `useCameraPermission`.
  - Handle the `'denied'` state gracefully (prompt the user to go to Settings via `Linking.openSettings()`); see the sketch after this list.
- Build Failures (iOS/Android):
  - CocoaPods: Ensure `pod install` in the `ios` directory ran successfully after adding the library. Check for compatibility issues with other pods or Xcode versions. Ensure the minimum iOS target is correct.
  - Gradle: Check `minSdkVersion`, ensure the Kotlin plugin is applied, and check for conflicting dependencies or Gradle version issues. Sync the project with Gradle files in Android Studio. Make sure the `react-native-reanimated` setup is complete.
  - Clean Builds: Always try cleaning the native build folders (`rm -rf ios/build`, `rm -rf ios/Pods`, `cd android && ./gradlew clean`) and reinstalling dependencies (`npm install`, `pod install`).
- Frame Processor Lag / App Freeze:
  - Missing `'worklet';`: Ensure the directive is the first line inside your `useFrameProcessor` function.
  - Heavy Worklet Code: Move complex logic out of the JS worklet into native plugins or simplify it.
  - Excessive `runOnJS` Calls: Throttle or batch calls back to the main JS thread.
  - Incorrect `pixelFormat`: Using a format that requires slow conversion might impact performance.
  - High `frameProcessorFps`: Reduce the FPS if analysis isn't needed on every frame.
  - `react-native-reanimated` Setup: Double-check the Reanimated installation (Babel plugin, native setup).
- Checking Logs and GitHub Issues:
  - Monitor native logs (Logcat/Xcode Console) and JS logs (Metro bundler console / Flipper).
  - Search the React Native Vision Camera GitHub repository issues for similar problems. Often, solutions or workarounds are discussed there.
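As a companion to the permission checklist above, here is a minimal permission-gate sketch using `useCameraPermission` and `Linking.openSettings()` (the component name and UI copy are placeholders):

```tsx
// Sketch: request camera permission and route denied users to Settings.
import React, { useCallback } from 'react';
import { Button, Linking, Text, View } from 'react-native';
import { useCameraPermission } from 'react-native-vision-camera';

export function PermissionGate({ children }: { children: React.ReactNode }) {
  const { hasPermission, requestPermission } = useCameraPermission();

  const onRequest = useCallback(async () => {
    const granted = await requestPermission();
    if (!granted) {
      // Permission was denied (possibly permanently); send the user to Settings.
      await Linking.openSettings();
    }
  }, [requestPermission]);

  if (hasPermission) return <>{children}</>;

  return (
    <View>
      <Text>Camera access is required to continue.</Text>
      <Button title="Grant camera permission" onPress={onRequest} />
    </View>
  );
}
```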
15. Conclusion and Future Directions
React Native Vision Camera represents a significant leap forward for camera integration in React Native. Its focus on performance, modern API design using hooks, extensive configuration options, and the revolutionary Frame Processor architecture make it an incredibly powerful and flexible tool.
By leveraging JSI and worklets, it unlocks the potential for smooth, real-time video analysis directly within your React Native application, enabling features previously difficult or impossible to achieve without deep native development. While the setup requires careful attention to native configuration and permissions, and concepts like Frame Processors and worklets demand understanding, the payoff in terms of capability and performance is substantial.
As React Native continues to evolve with its New Architecture (TurboModules and Fabric), Vision Camera is poised to further benefit from these advancements, promising even tighter integration and performance gains. Whether you’re building simple photo capture features or complex computer vision applications, React Native Vision Camera provides a robust and future-forward foundation.
This guide has covered the essentials from installation to advanced features like Frame Processors. The next steps involve exploring specific Frame Processor plugins relevant to your needs, mastering coordinate transformations for UI overlays, optimizing performance for your target devices, and delving into the rich configuration options offered by camera formats. Embrace the power of Vision Camera and unlock the full potential of the device camera in your React Native projects.