API Changelog
This page documents notable changes to the AIFrame API. We recommend checking this page periodically for updates.
v2.2 (Current - January 2025)
- Added `eye_contact_correction` parameter: New boolean parameter for the `/api/process-video` endpoint to enable or disable automatic eye contact correction (see the example request after this list).
- Improved Lip-Sync Models: Deployed updated lip-sync models (e.g., `lipsync2`, selectable via `lip_sync_correction_model`) offering higher accuracy and more natural results.
- Enhanced Error Reporting: Error responses now include a more structured `error` field and a `request_id` for easier troubleshooting. See Error Handling for details.
- Optimized Processing Speed: Average video processing time reduced by approximately 40% for most common video formats and lengths due to backend optimizations.
- API Documentation Overhaul: Launched new developer documentation portal with Swagger/OpenAPI reference and detailed guides.
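Below is a minimal sketch of a v2.2 request using a Python `requests` client. The base URL and the multipart `video` upload field are assumptions not specified in this changelog; the parameter names, the `X-API-Key` header, and the `error`/`request_id` fields come from the entries above.

```python
import requests

API_BASE = "https://api.aiframe.example"  # placeholder; substitute your actual API base URL
API_KEY = "your-api-key"

# Submit a video with the v2.2 options enabled. The multipart "video" field is an
# assumption; the parameter names are taken from this changelog.
with open("input.mp4", "rb") as f:
    resp = requests.post(
        f"{API_BASE}/api/process-video",
        headers={"X-API-Key": API_KEY},
        files={"video": f},
        data={
            "eye_contact_correction": "true",          # new boolean parameter in v2.2
            "lip_sync_correction_model": "lipsync2",   # select the updated lip-sync model
        },
    )

if resp.ok:
    print("Submitted:", resp.json())
else:
    # v2.2 error responses include a structured "error" field and a "request_id".
    body = resp.json()
    print("Error:", body.get("error"))
    print("Request ID for troubleshooting:", body.get("request_id"))
```

Quoting the `request_id` when reporting a failed request is what the structured error response is intended to make easier.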
v1.5 (January 2025)
- Added Voice Selection Options: Introduced `voice_generator` and `voice_id` parameters to the `/api/process-video` endpoint, allowing selection of different speech synthesis engines and specific voices (illustrated in the sketch after this list).
- Progress Reporting in Status Endpoint: The `/status/{video_id}` endpoint now includes a `progress` field (0.0 to 100.0) indicating the percentage of processing completion.
- Improved Background Blur Quality: Enhanced the algorithm for the `blur_background` feature, resulting in more natural and aesthetically pleasing blurs.
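The following sketch, under the same assumptions about the base URL and upload format, shows the v1.5 voice parameters on submission and the `progress` field when polling. The `voice_generator`/`voice_id` values and the `video_id`/`status` response fields are illustrative, not documented here.

```python
import time
import requests

API_BASE = "https://api.aiframe.example"  # placeholder base URL
API_KEY = "your-api-key"

# Submit a job using the v1.5 voice selection parameters (values are hypothetical).
with open("input.mp4", "rb") as f:
    submit = requests.post(
        f"{API_BASE}/api/process-video",
        headers={"X-API-Key": API_KEY},
        files={"video": f},
        data={"voice_generator": "default", "voice_id": "en-US-1"},
    )
submit.raise_for_status()
video_id = submit.json()["video_id"]  # assumed field name for the returned job id

# Poll the status endpoint and read the progress field (0.0 to 100.0).
while True:
    status = requests.get(
        f"{API_BASE}/status/{video_id}",
        headers={"X-API-Key": API_KEY},
    ).json()
    print(f"Progress: {status.get('progress', 0.0):.1f}%")
    if status.get("status") in ("completed", "failed"):  # assumed terminal states
        break
    time.sleep(5)
```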
v1.0 (September 2024)
- Initial API Release: First public version of the AIFrame API.
- Core Endpoints:
  - `POST /api/process-video`: Submit videos for processing.
  - `GET /status/{video_id}`: Check job status.
- Supported Features:
  - Basic lip-sync correction.
  - Background blur (`blur_background` parameter).
  - Asynchronous processing model with polling (see the sketch after this list).
  - API Key authentication (via `X-API-Key` header or `api_key` query parameter).
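A baseline v1.0 flow is sketched below: authenticate with the `X-API-Key` header (the `api_key` query parameter is the documented alternative), submit with `blur_background`, then poll asynchronously. The base URL, upload field, and response field names are assumptions.

```python
import time
import requests

API_BASE = "https://api.aiframe.example"  # placeholder base URL
API_KEY = "your-api-key"

# Authenticate with the X-API-Key header; passing api_key as a query parameter
# is the documented alternative.
headers = {"X-API-Key": API_KEY}

# Submit a video with background blur enabled; the multipart field name is an assumption.
with open("input.mp4", "rb") as f:
    resp = requests.post(
        f"{API_BASE}/api/process-video",
        headers=headers,
        files={"video": f},
        data={"blur_background": "true"},
    )
resp.raise_for_status()
video_id = resp.json()["video_id"]  # assumed field name

# Asynchronous model: poll GET /status/{video_id} until the job finishes.
done = False
while not done:
    status = requests.get(f"{API_BASE}/status/{video_id}", headers=headers).json()
    done = status.get("status") in ("completed", "failed")  # assumed terminal states
    if not done:
        time.sleep(10)
print("Final status:", status)
```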