API Changelog
This page documents notable changes to the AIFrame API. We recommend checking this page periodically for updates.
v2.2 (Current - January 2025)
- Added `eye_contact_correction` parameter: New boolean parameter for the `/api/process-video` endpoint to enable or disable automatic eye contact correction (see the request sketch after this list).
- Improved Lip-Sync Models: Deployed updated lip-sync models (e.g., `lipsync2`, selectable via `lip_sync_correction_model`) offering higher accuracy and more natural results.
- Enhanced Error Reporting: Error responses now include a more structured `error` field and a `request_id` for easier troubleshooting. See Error Handling for details.
- Optimized Processing Speed: Average video processing time reduced by approximately 40% for most common video formats and lengths due to backend optimizations.
- API Documentation Overhaul: Launched new developer documentation portal with Swagger/OpenAPI reference and detailed guides.
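To illustrate the v2.2 additions, here is a minimal request sketch in Python. The base URL, the `video_url` input field, and the shape of the success response are assumptions for illustration only; the `eye_contact_correction` and `lip_sync_correction_model` parameters, the `X-API-Key` header, and the `error`/`request_id` error fields are the pieces documented in this changelog.

```python
import requests

API_BASE = "https://api.aiframe.example"  # placeholder base URL (assumption)
API_KEY = "your-api-key"

# Submit a processing job. "video_url" is a hypothetical input field;
# the other parameters are the ones named in the v2.2 changelog entries.
payload = {
    "video_url": "https://example.com/input.mp4",  # hypothetical field
    "eye_contact_correction": True,                # new in v2.2
    "lip_sync_correction_model": "lipsync2",       # updated model, v2.2
}

resp = requests.post(
    f"{API_BASE}/api/process-video",
    json=payload,
    headers={"X-API-Key": API_KEY},
)

if resp.ok:
    print("submitted:", resp.json())
else:
    # v2.2 error responses carry a structured "error" field and a "request_id".
    err = resp.json()
    print("error:", err.get("error"), "request_id:", err.get("request_id"))
```

Logging the `request_id` from failed responses makes it easier to reference a specific failure when troubleshooting.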
v1.5 (January 2025)
- Added Voice Selection Options: Introduced `voice_generator` and `voice_id` parameters to the `/api/process-video` endpoint, allowing selection of different speech synthesis engines and specific voices.
- Progress Reporting in Status Endpoint: The `/status/{video_id}` endpoint now includes a `progress` field (0.0 to 100.0) indicating the percentage of processing completion (see the polling sketch after this list).
- Improved Background Blur Quality: Enhanced the algorithm for the `blur_background` feature, resulting in more natural and aesthetically pleasing blurs.
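The v1.5 `progress` field lends itself to a simple polling loop. In the sketch below, the base URL, the `video_url` field, the `video_id` key in the submission response, and the `status` values are assumptions; the `voice_generator`/`voice_id` parameters and the `progress` field are the documented v1.5 additions.

```python
import time
import requests

API_BASE = "https://api.aiframe.example"  # placeholder base URL (assumption)
HEADERS = {"X-API-Key": "your-api-key"}

# Voice selection (v1.5): choose a speech synthesis engine and a specific voice.
# The engine and voice values below are illustrative, not documented values.
payload = {
    "video_url": "https://example.com/input.mp4",  # hypothetical input field
    "voice_generator": "example-engine",           # illustrative value
    "voice_id": "example-voice-001",               # illustrative value
}
job = requests.post(f"{API_BASE}/api/process-video",
                    json=payload, headers=HEADERS).json()
video_id = job["video_id"]  # assumed response field, matching /status/{video_id}

# Poll the status endpoint; the "progress" field (0.0 to 100.0) was added in v1.5.
while True:
    status = requests.get(f"{API_BASE}/status/{video_id}", headers=HEADERS).json()
    print(f"progress: {status.get('progress', 0.0):.1f}%")
    if status.get("status") in ("completed", "failed"):  # assumed status values
        break
    time.sleep(5)
```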
v1.0 (September 2024)
- Initial API Release: First public version of the AIFrame API.
- Core Endpoints:
  - `POST /api/process-video`: Submit videos for processing.
  - `GET /status/{video_id}`: Check job status.
- Supported Features:
  - Basic lip-sync correction.
  - Background blur (`blur_background` parameter).
  - Asynchronous processing model with polling.
  - API Key authentication (via `X-API-Key` header or `api_key` query parameter); see the example after this list.
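As a minimal illustration of the two v1.0 authentication options, the sketch below checks a job's status using either the `X-API-Key` header or the `api_key` query parameter. The base URL and the video ID are placeholders.

```python
import requests

API_BASE = "https://api.aiframe.example"  # placeholder base URL (assumption)
VIDEO_ID = "abc123"                       # hypothetical job id

# Option 1: authenticate with the X-API-Key header (documented in v1.0).
r1 = requests.get(f"{API_BASE}/status/{VIDEO_ID}",
                  headers={"X-API-Key": "your-api-key"})

# Option 2: authenticate with the api_key query parameter (documented in v1.0).
r2 = requests.get(f"{API_BASE}/status/{VIDEO_ID}",
                  params={"api_key": "your-api-key"})

print(r1.status_code, r2.status_code)
```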