Capture photos and record video and audio; configure built-in cameras and microphones or external capture devices.
Framework
- AVFoundation
Overview
The AVFoundation Capture subsystem provides a common high-level architecture for video, photo, and audio capture services in iOS and macOS. Use this system if you want to:
- Build a custom camera UI to integrate shooting photos or videos into your app's user experience
- Give users more direct control over photo and video capture, such as focus, exposure, and stabilization options
- Produce different results than the system camera UI, such as RAW format photos, depth maps, or videos with custom timed metadata
- Get live access to pixel or audio data streaming directly from a capture device
Note
To instead let the user capture media with the system camera UI within your app, see UIImagePickerController.
The main parts of the capture architecture are sessions, inputs, and outputs: Capture sessions connect one or more inputs to one or more outputs. Inputs are sources of media, including capture devices like the cameras and microphones built into an iOS device or Mac. Outputs acquire media from inputs to produce useful data, such as movie files written to disk or raw pixel buffers available for live processing.
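Connecting an input to an output through a session can be sketched as below. This is a minimal, illustrative example, not the only valid configuration: it assumes an iOS device with a built-in back camera, and that the app has camera permission (an `NSCameraUsageDescription` entry in its Info.plist). The function name `makeCaptureSession` is hypothetical.

```swift
import AVFoundation

// A minimal sketch: wire a camera input to a photo output.
// Assumes camera permission has been granted and the device
// has a built-in wide-angle back camera.
func makeCaptureSession() -> AVCaptureSession? {
    let session = AVCaptureSession()
    session.beginConfiguration()

    // Input: the built-in wide-angle back camera as a media source.
    guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: .back),
          let input = try? AVCaptureDeviceInput(device: camera),
          session.canAddInput(input) else {
        return nil
    }
    session.addInput(input)

    // Output: a photo output that produces still images from the input.
    let photoOutput = AVCapturePhotoOutput()
    guard session.canAddOutput(photoOutput) else {
        return nil
    }
    session.addOutput(photoOutput)

    session.commitConfiguration()
    return session
}
```

After building the session, call `startRunning()` (off the main thread) to begin the flow of data from the input to the output; the same session can later have a movie file output or a video data output added in place of, or alongside, the photo output.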
