Cloud Upload API Documentation
Overview
The Lilt Cloud Upload API provides secure file upload capabilities using S3-compatible presigned URLs. This system supports both single file uploads and multipart uploads for large files, enabling efficient and secure file transfers directly to cloud storage.
Key Features
- Direct S3 Uploads: Files are uploaded directly to S3-compatible storage, reducing server load
- Presigned URL Security: Time-limited, secure URLs with no exposed credentials
- Multipart Upload Support: Efficient handling of large files through chunked uploads
- Flexible Client Support: Use any HTTP client library in your preferred programming language
- Managed Infrastructure: LILT handles all bucket configuration, CORS setup, and storage management
How It Works
The upload process uses AWS S3-compatible presigned URLs for secure file transfers:
1. Request Upload Parameters: Call the Lilt API to get a presigned URL and upload parameters
2. Upload Directly to Storage: Use the presigned URL to upload the file directly to S3-compatible storage
3. Complete the Upload:
   - For multipart uploads, notify the API when all parts are uploaded
   - Poll for antivirus scan completion to get the File ID, which you can then use with other endpoints (for example, to add the file to jobs or projects)
API Endpoints
1. Initiate Single File Upload
Endpoint: POST /v2/upload/s3/params
or GET /v2/upload/s3/params
Initiates a single file upload and returns a presigned URL for uploading the file directly to storage.
Request Body
{
"filename": "document.pdf",
"type": "application/pdf",
"metadata": {
"size": 1048576,
"category": "SOURCE",
"uuid": "123e4567-e89b-12d3-a456-426614174000"
}
}
Parameters
Field | Type | Required | Description |
---|---|---|---|
filename | string | Yes | File name including extension |
type | string | Yes | MIME type of the file |
metadata.size | integer | No | File size in bytes |
metadata.category | string | No | File category (SOURCE or REFERENCE) |
metadata.uuid | string | No | Unique identifier for the file |
Response
{
"url": "https://storage.example.com/bucket/path/file.pdf?AWSAccessKeyId=...",
"method": "PUT",
"filename": "text2.txt",
"contentType": "text/plain",
"metadata": {
"size": 1048576,
"category": "SOURCE",
"uuid": "123e4567-e89b-12d3-a456-426614174000"
},
"upload": {
"createdAt": "2025-07-08T11:11:27.674Z",
"updatedAt": "2025-07-08T11:11:27.674Z",
"isDeleted": false,
"deletedAt": null,
"id": 362,
"UserId": 10727,
"OrganizationId": 682,
"fileLocation": "gs://.../text2.txt",
"status": "UPLOADING",
"totalBytes": 333,
"uploadedBytes": 0,
"category": "SOURCE"
}
}
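For reference, the request body described in the parameters table can be assembled with a small helper. This is an illustrative sketch: the field names come from the table above, but the validation rules are assumptions, not documented API constraints.

```python
def build_upload_request(filename, mime_type, size=None, category=None, uuid=None):
    """Build the JSON body for POST /v2/upload/s3/params.

    Field names follow the parameters table; the checks below are
    illustrative, not documented server-side validation.
    """
    if not filename or not mime_type:
        raise ValueError("filename and type are required")
    if category is not None and category not in ("SOURCE", "REFERENCE"):
        raise ValueError("category must be SOURCE or REFERENCE")
    body = {"filename": filename, "type": mime_type}
    metadata = {}
    if size is not None:
        metadata["size"] = size
    if category is not None:
        metadata["category"] = category
    if uuid is not None:
        metadata["uuid"] = uuid
    if metadata:  # metadata is optional; omit it entirely when empty
        body["metadata"] = metadata
    return body
```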
2. Initiate Multipart Upload
Endpoint: POST /v2/upload/s3/multipart
Initiates a multipart upload for large files (recommended for files > 100MB).
Make sure your part size is set to 8MB (8388608 bytes).
Request Body
{
"filename": "large-file.zip",
"type": "application/zip",
"metadata": {
"size": 104857600,
"category": "SOURCE"
}
}
Response
{
"uploadId": "abc123def456",
"key": "uploads/user123/large-file.zip",
"upload": {
"createdAt": "2025-07-08T11:11:27.674Z",
"updatedAt": "2025-07-08T11:11:27.674Z",
"isDeleted": false,
"deletedAt": null,
"id": 362,
"UserId": 10727,
"OrganizationId": 682,
"fileLocation": "gs://.../text2.txt",
"status": "UPLOADING",
"totalBytes": 333,
"uploadedBytes": 0,
"category": "SOURCE"
}
}
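With the recommended 8MB part size, the number of parts (and thus how many part URLs you will need from endpoint 3 below) can be computed up front. A minimal sketch:

```python
import math

PART_SIZE = 8 * 1024 * 1024  # 8,388,608 bytes, the recommended part size

def part_count(total_bytes: int) -> int:
    """How many parts an upload of total_bytes needs at 8MB per part."""
    return max(1, math.ceil(total_bytes / PART_SIZE))
```

For the 104,857,600-byte example above this gives 13 parts: twelve full 8MB parts plus a final 4MB part.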
3. Get Upload Part URL
Endpoint: GET /v2/upload/s3/multipart/{uploadId}/{partNumber}
Retrieves a presigned URL for uploading a specific part of a multipart upload.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
uploadId | string | Yes | Multipart upload ID from initiate response |
partNumber | integer | Yes | Part number (1-based, 1-10000) |
s3Key | string | Yes | Upload key from initiate response |
size | integer | Yes | Size in bytes of the part being uploaded |
Example Request
GET /v2/upload/s3/multipart/abc123def456/1?s3Key=uploads/user123/large-file.zip&size=8388608
Response
{
"url": "https://storage.example.com/bucket/path/file.zip?partNumber=1&uploadId=...",
"method": "PUT"
}
4. Complete Multipart Upload
Endpoint: POST /v2/upload/s3/multipart/{uploadId}/complete
Completes a multipart upload by providing information about all uploaded parts.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
uploadId | string | Yes | Multipart upload ID |
s3Key | string | Yes | Upload key from initiate response |
Request Body
{
"parts": [
{
"ETag": "\"abc123def456\"",
"PartNumber": 1
},
{
"ETag": "\"def789ghi012\"",
"PartNumber": 2
}
]
}
Response
{
"success": true,
"location": "https://storage.example.com/bucket/uploads/user123/large-file.zip"
}
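Before calling the complete endpoint, it is worth ordering the collected parts and checking for gaps, since S3-style completion expects every part's ETag. A sketch (the gap check is an illustrative safeguard, not a documented requirement):

```python
def build_complete_body(parts):
    """Sort collected parts by PartNumber and verify none are missing
    before POST /v2/upload/s3/multipart/{uploadId}/complete."""
    ordered = sorted(parts, key=lambda p: p["PartNumber"])
    expected = list(range(1, len(ordered) + 1))
    if [p["PartNumber"] for p in ordered] != expected:
        raise ValueError("missing or duplicate part numbers")
    return {"parts": ordered}
```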
5. Cancel Multipart Upload
Endpoint: DELETE /v2/upload/s3/multipart/{uploadId}
Cancels a multipart upload and cleans up any uploaded parts.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
uploadId | string | Yes | Multipart upload ID to cancel |
s3Key | string | Yes | Upload key from initiate response |
Response
{
"success": true,
"message": "Multipart upload cancelled successfully"
}
Upload Workflows
Simple Upload Flow
1. POST /v2/upload/s3/params → Get presigned URL and upload object
2. PUT to presigned URL → Upload file directly to S3
3. Poll GET /v2/upload/{uploadId} for antivirus scan completion
4. File is ready for use when upload.status is SUCCESS and upload.FileId is a number
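The readiness condition in steps 3–4 can be expressed as a small predicate over the polled upload object (field names as used above):

```python
def is_upload_ready(upload: dict) -> bool:
    """True once the antivirus scan has finished: status is SUCCESS
    and FileId has been assigned a numeric value."""
    return upload.get("status") == "SUCCESS" and isinstance(upload.get("FileId"), int)
```

Keep polling (with a delay between requests) while this returns False.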
Multipart Upload Flow
1. POST /v2/upload/s3/multipart → Get upload ID and key
2. For each part:
- GET /v2/upload/s3/multipart/{uploadId}/{partNumber} → Get part URL
- PUT to part URL → Upload part to S3
- Save ETag from response
3. POST /v2/upload/s3/multipart/{uploadId}/complete → Complete upload
4. Poll GET /v2/upload/{uploadId} for antivirus scan completion
5. File is ready for use when upload.status is SUCCESS and upload.FileId is a number
Implementation Examples
Node.js Example
const axios = require("axios");
const fs = require("fs");
// Initialize axios with base URL
const api = axios.create({
baseURL: "https://lilt.com/v2",
headers: {
Authorization: "Bearer YOUR_API_KEY"
}
});
// Single file upload
async function uploadFile(filePath, filename, contentType) {
const fileBuffer = fs.readFileSync(filePath);
// 1. Initiate upload
const { data: uploadParams } = await api.post("/upload/s3/params", {
filename,
type: contentType,
metadata: {
size: fileBuffer.length
}
});
// 2. Upload to S3 using presigned URL
await axios.put(uploadParams.url, fileBuffer, {
headers: {
"Content-Type": contentType,
...uploadParams.headers
}
});
console.log("Upload successful");
}
// Multipart upload for large files
async function uploadLargeFile(filePath, filename, contentType) {
const fileBuffer = fs.readFileSync(filePath);
const chunkSize = 8 * 1024 * 1024; // 8MB parts, per the recommended part size
// 1. Initiate multipart upload
const { data: initResponse } = await api.post("/upload/s3/multipart", {
filename,
type: contentType,
metadata: {
size: fileBuffer.length
}
});
const { uploadId, key } = initResponse;
const parts = [];
// 2. Upload each part
for (let i = 0; i < fileBuffer.length; i += chunkSize) {
const partNumber = Math.floor(i / chunkSize) + 1;
const chunk = fileBuffer.slice(i, i + chunkSize);
// Get presigned URL for this part
const { data: partParams } = await api.get(
`/upload/s3/multipart/${uploadId}/${partNumber}`,
{
params: {
s3Key: key,
size: chunk.length
}
}
);
// Upload part
const response = await axios.put(partParams.url, chunk, {
headers: {
"Content-Type": contentType
}
});
parts.push({
ETag: response.headers.etag,
PartNumber: partNumber
});
}
// 3. Complete multipart upload
await api.post(
`/upload/s3/multipart/${uploadId}/complete`,
{
parts
},
{
params: {
s3Key: key
}
}
);
console.log("Multipart upload successful");
}
Python Example
import requests
import os
from typing import List, Dict
class LiltUploadClient:
def __init__(self, api_key: str, base_url: str = "https://lilt.com/v2"):
self.api_key = api_key
self.base_url = base_url
self.headers = {
'Authorization': f'Bearer {api_key}',
'Content-Type': 'application/json'
}
def upload_file(self, file_path: str, filename: str, content_type: str):
"""Upload a single file"""
with open(file_path, 'rb') as f:
file_data = f.read()
# 1. Initiate upload
response = requests.post(
f"{self.base_url}/upload/s3/params",
json={
"filename": filename,
"type": content_type,
"metadata": {
"size": len(file_data)
}
},
headers=self.headers
)
response.raise_for_status()
upload_params = response.json()
# 2. Upload to S3
upload_headers = {
'Content-Type': content_type,
**upload_params.get('headers', {})
}
upload_response = requests.put(
upload_params['url'],
data=file_data,
headers=upload_headers
)
upload_response.raise_for_status()
print("Upload successful")
def upload_large_file(self, file_path: str, filename: str, content_type: str, chunk_size: int = 8 * 1024 * 1024):
"""Upload a large file using multipart upload"""
file_size = os.path.getsize(file_path)
# 1. Initiate multipart upload
response = requests.post(
f"{self.base_url}/upload/s3/multipart",
json={
"filename": filename,
"type": content_type,
"metadata": {
"size": file_size
}
},
headers=self.headers
)
response.raise_for_status()
init_response = response.json()
upload_id = init_response['uploadId']
key = init_response['key']
parts = []
# 2. Upload each part
with open(file_path, 'rb') as f:
part_number = 1
while True:
chunk = f.read(chunk_size)
if not chunk:
break
# Get presigned URL for this part
part_response = requests.get(
f"{self.base_url}/upload/s3/multipart/{upload_id}/{part_number}",
params={
's3Key': key,
'size': len(chunk)
},
headers=self.headers
)
part_response.raise_for_status()
part_params = part_response.json()
# Upload part
upload_response = requests.put(
part_params['url'],
data=chunk,
headers={'Content-Type': content_type}
)
upload_response.raise_for_status()
parts.append({
'ETag': upload_response.headers['etag'],
'PartNumber': part_number
})
part_number += 1
# 3. Complete multipart upload
complete_response = requests.post(
f"{self.base_url}/upload/s3/multipart/{upload_id}/complete",
json={'parts': parts},
params={'s3Key': key},
headers=self.headers
)
complete_response.raise_for_status()
print("Multipart upload successful")
# Usage
client = LiltUploadClient('your-api-key')
client.upload_file('/path/to/file.pdf', 'document.pdf', 'application/pdf')
Java Example
import java.io.*;
import java.net.URI;
import java.net.http.*;
import java.nio.file.*;
import java.util.*;
import com.fasterxml.jackson.databind.ObjectMapper;
public class LiltUploadClient {
private final String apiKey;
private final String baseUrl;
private final HttpClient httpClient;
private final ObjectMapper objectMapper;
public LiltUploadClient(String apiKey, String baseUrl) {
this.apiKey = apiKey;
this.baseUrl = baseUrl != null ? baseUrl : "https://lilt.com/v2";
this.httpClient = HttpClient.newHttpClient();
this.objectMapper = new ObjectMapper();
}
public void uploadFile(String filePath, String filename, String contentType) throws Exception {
byte[] fileData = Files.readAllBytes(Paths.get(filePath));
// 1. Initiate upload
Map<String, Object> uploadRequest = new HashMap<>();
uploadRequest.put("filename", filename);
uploadRequest.put("type", contentType);
Map<String, Object> metadata = new HashMap<>();
metadata.put("size", fileData.length);
uploadRequest.put("metadata", metadata);
HttpRequest request = HttpRequest.newBuilder()
.uri(URI.create(baseUrl + "/upload/s3/params"))
.header("Authorization", "Bearer " + apiKey)
.header("Content-Type", "application/json")
.POST(HttpRequest.BodyPublishers.ofString(objectMapper.writeValueAsString(uploadRequest)))
.build();
HttpResponse<String> response = httpClient.send(request, HttpResponse.BodyHandlers.ofString());
if (response.statusCode() != 200) {
throw new RuntimeException("Failed to initiate upload: " + response.body());
}
Map<String, Object> uploadParams = objectMapper.readValue(response.body(), Map.class);
// 2. Upload to S3
HttpRequest.Builder uploadRequestBuilder = HttpRequest.newBuilder()
.uri(URI.create((String) uploadParams.get("url")))
.header("Content-Type", contentType)
.PUT(HttpRequest.BodyPublishers.ofByteArray(fileData));
// Add any additional headers
Map<String, String> headers = (Map<String, String>) uploadParams.get("headers");
if (headers != null) {
headers.forEach(uploadRequestBuilder::header);
}
HttpRequest uploadRequest = uploadRequestBuilder.build();
HttpResponse<String> uploadResponse = httpClient.send(uploadRequest, HttpResponse.BodyHandlers.ofString());
if (uploadResponse.statusCode() != 200) {
throw new RuntimeException("Failed to upload file: " + uploadResponse.body());
}
System.out.println("Upload successful");
}
public void uploadLargeFile(String filePath, String filename, String contentType) throws Exception {
File file = new File(filePath);
long fileSize = file.length();
int chunkSize = 8 * 1024 * 1024; // 8MB chunks, per the recommended part size
// 1. Initiate multipart upload
Map<String, Object> uploadRequest = new HashMap<>();
uploadRequest.put("filename", filename);
uploadRequest.put("type", contentType);
Map<String, Object> metadata = new HashMap<>();
metadata.put("size", fileSize);
uploadRequest.put("metadata", metadata);
HttpRequest request = HttpRequest.newBuilder()
.uri(URI.create(baseUrl + "/upload/s3/multipart"))
.header("Authorization", "Bearer " + apiKey)
.header("Content-Type", "application/json")
.POST(HttpRequest.BodyPublishers.ofString(objectMapper.writeValueAsString(uploadRequest)))
.build();
HttpResponse<String> response = httpClient.send(request, HttpResponse.BodyHandlers.ofString());
Map<String, Object> initResponse = objectMapper.readValue(response.body(), Map.class);
String uploadId = (String) initResponse.get("uploadId");
String key = (String) initResponse.get("key");
List<Map<String, Object>> parts = new ArrayList<>();
// 2. Upload each part
try (FileInputStream fis = new FileInputStream(file)) {
int partNumber = 1;
while (true) {
// readNBytes returns exactly chunkSize bytes until EOF is reached,
// avoiding short reads that would produce undersized middle parts
byte[] chunk = fis.readNBytes(chunkSize);
if (chunk.length == 0) break;
// Get presigned URL for this part
String partUrl = String.format("%s/upload/s3/multipart/%s/%d?s3Key=%s&size=%d",
baseUrl, uploadId, partNumber, key, chunk.length);
HttpRequest partRequest = HttpRequest.newBuilder()
.uri(URI.create(partUrl))
.header("Authorization", "Bearer " + apiKey)
.GET()
.build();
HttpResponse<String> partResponse = httpClient.send(partRequest, HttpResponse.BodyHandlers.ofString());
Map<String, Object> partParams = objectMapper.readValue(partResponse.body(), Map.class);
// Upload part
HttpRequest uploadPartRequest = HttpRequest.newBuilder()
.uri(URI.create((String) partParams.get("url")))
.header("Content-Type", contentType)
.PUT(HttpRequest.BodyPublishers.ofByteArray(chunk))
.build();
HttpResponse<String> uploadPartResponse = httpClient.send(uploadPartRequest, HttpResponse.BodyHandlers.ofString());
Map<String, Object> part = new HashMap<>();
part.put("ETag", uploadPartResponse.headers().firstValue("etag").orElse(""));
part.put("PartNumber", partNumber);
parts.add(part);
partNumber++;
}
}
// 3. Complete multipart upload
Map<String, Object> completeRequest = new HashMap<>();
completeRequest.put("parts", parts);
HttpRequest completeHttpRequest = HttpRequest.newBuilder()
.uri(URI.create(String.format("%s/upload/s3/multipart/%s/complete?s3Key=%s", baseUrl, uploadId, key)))
.header("Authorization", "Bearer " + apiKey)
.header("Content-Type", "application/json")
.POST(HttpRequest.BodyPublishers.ofString(objectMapper.writeValueAsString(completeRequest)))
.build();
HttpResponse<String> completeResponse = httpClient.send(completeHttpRequest, HttpResponse.BodyHandlers.ofString());
if (completeResponse.statusCode() != 200) {
throw new RuntimeException("Failed to complete upload: " + completeResponse.body());
}
System.out.println("Multipart upload successful");
}
}
// Usage
LiltUploadClient client = new LiltUploadClient("your-api-key", null);
client.uploadFile("/path/to/file.pdf", "document.pdf", "application/pdf");
Best Practices
File Size Recommendations
- Small files (< 100MB): Use single file upload (POST /upload/s3/params)
- Large files (> 100MB): Use multipart upload (POST /upload/s3/multipart)
- Chunk size: Use 8MB chunks for multipart uploads
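The size threshold above can be encoded directly when deciding which initiate endpoint to call; a minimal sketch (the behavior at exactly 100MB is unspecified by the guidance, so this sketch treats it as a single upload):

```python
SINGLE_UPLOAD_LIMIT = 100 * 1024 * 1024  # 100MB threshold from the guidance above

def choose_upload_endpoint(size_bytes: int) -> str:
    """Pick the initiate endpoint based on file size."""
    if size_bytes > SINGLE_UPLOAD_LIMIT:
        return "/v2/upload/s3/multipart"
    return "/v2/upload/s3/params"
```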
Error Handling
Implement retry logic for network errors:
async function uploadWithRetry(uploadFn, maxRetries = 3) {
for (let i = 0; i < maxRetries; i++) {
try {
await uploadFn();
return;
} catch (error) {
if (i === maxRetries - 1) throw error;
await new Promise(resolve =>
setTimeout(resolve, 1000 * Math.pow(2, i))
);
}
}
}
Security Considerations
- Presigned URLs expire after a configurable time (typically 1 hour)
- URLs are single-use for uploads
- All uploads are validated against the original request parameters
- HTTPS is required for all upload operations
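The sample URLs above use query-string authentication in the AWSAccessKeyId style, where the expiry is carried as a Unix timestamp in an Expires query parameter. A client can check this locally before retrying a stale URL; treat the parameter name as an assumption about the storage backend (SigV4-signed URLs encode expiry differently, via X-Amz-Date plus X-Amz-Expires):

```python
from urllib.parse import urlparse, parse_qs

def presigned_url_expired(url: str, now_epoch: int) -> bool:
    """Check the Expires query parameter (query-string auth style, as in
    the sample URLs above). Assumes that auth style; returns False when
    no expiry information is visible in the URL."""
    qs = parse_qs(urlparse(url).query)
    expires = qs.get("Expires")
    if not expires:
        return False  # no expiry info visible; assume still valid
    return now_epoch >= int(expires[0])
```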
Common Error Responses
400 Bad Request
{
"error": {
"message": "Invalid file type",
"code": "INVALID_FILE_TYPE"
}
}
413 Payload Too Large
{
"error": {
"message": "File size exceeds maximum allowed size",
"code": "FILE_TOO_LARGE",
"details": {
"maxSize": 104857600,
"actualSize": 209715200
}
}
}
403 Forbidden
{
"error": {
"message": "Upload URL has expired",
"code": "URL_EXPIRED"
}
}
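All three error shapes share the same envelope, so a client can extract the code and message uniformly. A sketch:

```python
import json

def parse_api_error(body: str):
    """Extract (code, message) from an error response shaped like the
    examples above; returns (None, None) if the body is not in that shape."""
    try:
        err = json.loads(body).get("error", {})
    except (ValueError, AttributeError):
        return None, None
    return err.get("code"), err.get("message")
```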
Support
For questions or issues with the Cloud Upload API, please contact support@lilt.com or refer to the main API documentation.