## Overview

Import complete Apple Health data exports (XML files) using one of two methods:

- **S3 Presigned URL (recommended)** - For large files; uploads directly to S3. The frontend handles this automatically; the scripts below are for manual or testing use.
- **Direct Upload** - For smaller files or testing; uploads directly to the API.
## Authentication

All endpoints require authentication via a Bearer token (user login) or an API key.

```http
# Login to get access token
POST /api/v1/auth/login
Content-Type: application/x-www-form-urlencoded

username=user@example.com&password=yourpassword
```

Response:

```json
{
  "access_token": "eyJ...",
  "token_type": "bearer"
}
```

Then use the token in subsequent requests:

```http
Authorization: Bearer eyJ...
```
## Endpoints

### Method 1: S3 Presigned URL (Recommended)

Best for large files (10 MB+). The file is uploaded directly to S3, then processed asynchronously.
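The 10 MB guidance above can be captured in a small helper that picks a method from the export's size on disk. This is a sketch, not part of the API client: the threshold constant and function name are illustrative.

```python
from pathlib import Path

# Illustrative threshold from the guidance above: 10 MB and up should use S3.
S3_THRESHOLD_BYTES = 10 * 1024 * 1024

def choose_upload_method(size_bytes: int) -> str:
    """Return 's3' for files at or above the threshold, 'direct' otherwise."""
    return "s3" if size_bytes >= S3_THRESHOLD_BYTES else "direct"

# Example: inspect a local export before deciding how to upload it
# size = Path("export.xml").stat().st_size
# method = choose_upload_method(size)
```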
#### Step 1: Request a Presigned URL

```bash
curl -X POST "https://api.example.com/api/v1/users/{user_id}/import/apple/xml/s3" \
  -H "accept: application/json" \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "filename": "export.xml",
    "expiration_seconds": 300,
    "max_file_size": 52428800
  }'
```
Parameters:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `user_id` | string (path) | - | The ID of the user to import data for |
| `filename` | string | - | Custom filename (max 200 characters) |
| `expiration_seconds` | integer | - | URL expiration time in seconds (60-3600) |
| `max_file_size` | integer | `52428800` | Maximum file size in bytes (1 KB - 500 MB). Default is 50 MB. |
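The server enforces these limits and is the source of truth, but checking them client-side avoids a round trip on obviously invalid input. A minimal sketch (the function name is hypothetical):

```python
# Client-side check mirroring the documented constraints:
# filename <= 200 chars, expiration 60-3600 s, max_file_size 1 KB - 500 MB.
def validate_presign_request(filename: str, expiration_seconds: int,
                             max_file_size: int) -> list:
    """Return a list of human-readable violations; empty means valid."""
    errors = []
    if not filename or len(filename) > 200:
        errors.append("filename must be 1-200 characters")
    if not 60 <= expiration_seconds <= 3600:
        errors.append("expiration_seconds must be between 60 and 3600")
    if not 1024 <= max_file_size <= 500 * 1024 * 1024:
        errors.append("max_file_size must be between 1 KB and 500 MB")
    return errors
```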
Response (`200 Success`; `401 Unauthorized` if the token is missing or invalid):

```json
{
  "upload_url": "https://s3.amazonaws.com/bucket/key",
  "form_fields": {
    "key": "apple-uploads/user-123/file-456.xml",
    "AWSAccessKeyId": "AKIA...",
    "policy": "eyJ...",
    "signature": "abc123..."
  },
  "file_key": "apple-uploads/user-123/file-456.xml",
  "expires_in": 300,
  "max_file_size": 52428800,
  "bucket": "my-bucket"
}
```
#### Step 2: Upload the File to S3

Use the `upload_url` and `form_fields` from the previous response to upload your XML file:

```python
import requests

# presigned_data is the parsed JSON response from Step 1

# Upload using form_fields
with open('export.xml', 'rb') as f:
    files = {'file': ('export.xml', f, 'application/xml')}
    upload_response = requests.post(
        presigned_data['upload_url'],
        data=presigned_data['form_fields'],
        files=files,
    )
upload_response.raise_for_status()
print(f"✓ Uploaded successfully! File key: {presigned_data['file_key']}")
```
**Important:** When uploading to S3 with a presigned POST, you must include all `form_fields` as form data, and the `file` field must be last.
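When you pass `data=` and `files=` to `requests.post` as in the examples here, requests encodes the form fields before the file part, which satisfies the ordering rule. If you want the ordering to be explicit, you can build the multipart parts yourself as a single ordered list and pass it via `files=`. A sketch, with an illustrative helper name:

```python
# Build the multipart body as an ordered list so the "file" part is
# guaranteed to come last (S3 ignores form fields that follow the file).
def build_multipart_parts(form_fields: dict, filename: str, payload: bytes) -> list:
    # (None, value) tells requests to encode the entry as a plain form field.
    parts = [(name, (None, value)) for name, value in form_fields.items()]
    parts.append(("file", (filename, payload, "application/xml")))
    return parts

# Usage with requests (the ordered list goes in `files=`):
# requests.post(upload_url, files=build_multipart_parts(form_fields, "export.xml", data))
```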
## Complete Example Workflow

### Login to Get an Access Token

```python
import requests

API_URL = "https://api.example.com"
USERNAME = "user@example.com"
PASSWORD = "yourpassword"
USER_ID = "3fa85f64-5717-4562-b3fc-2c963f66afa6"

# Login
login_response = requests.post(
    f"{API_URL}/api/v1/auth/login",
    data={"username": USERNAME, "password": PASSWORD},
)
login_response.raise_for_status()
access_token = login_response.json()["access_token"]
```
### Request a Presigned URL

```python
# Get presigned URL
headers = {
    "accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": f"Bearer {access_token}",
}
response = requests.post(
    f"{API_URL}/api/v1/users/{USER_ID}/import/apple/xml/s3",
    headers=headers,
    json={
        "filename": "export.xml",
        "expiration_seconds": 300,
        "max_file_size": 52428800,  # 50 MB
    },
)
response.raise_for_status()
presigned_data = response.json()
upload_url = presigned_data["upload_url"]
form_fields = presigned_data["form_fields"]
```
### Upload the File to S3

```python
# Upload to S3 using presigned POST
with open("export.xml", "rb") as f:
    files = {"file": ("export.xml", f, "application/xml")}
    upload_response = requests.post(
        upload_url,
        data=form_fields,
        files=files,
    )
upload_response.raise_for_status()
print(f"✓ Uploaded! File key: {presigned_data['file_key']}")
```
### Processing Happens Automatically

The system automatically:

1. Detects the S3 upload via an S3 event notification → SNS
2. SNS sends an HTTPS notification to the backend
3. The `process_aws_upload` Celery task downloads and processes the XML file
4. Imports workouts and time series data into the database
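One practical consequence of the `expires_in` window: if the upload starts after the presigned URL has expired, S3 rejects the POST with HTTP 403, and the fix is simply to request a fresh URL. A sketch of that retry logic, not part of the documented client; the presign and upload steps are passed in as callables so the control flow stands alone:

```python
# Sketch: retry an S3 presigned-POST upload once with a fresh URL if the
# policy has expired (S3 returns 403 for an expired or invalid policy).
def upload_with_retry(get_presigned, do_upload, max_attempts: int = 2) -> int:
    """get_presigned() -> presigned-URL response dict;
    do_upload(presigned) -> HTTP status code of the S3 POST.
    Returns the final status code."""
    status = 0
    for _ in range(max_attempts):
        presigned = get_presigned()
        status = do_upload(presigned)
        if status != 403:  # anything other than 403 is final (success or hard error)
            break
    return status
```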
### View Complete Script (Copy & Run)

```python
#!/usr/bin/env python3
"""
Upload Apple Health XML export to Open Wearables using S3.

Usage: python upload_xml_s3.py export.xml
"""
import sys
from pathlib import Path

import requests

API_URL = "https://api.example.com"
USERNAME = "user@example.com"
PASSWORD = "yourpassword"
USER_ID = "your-user-id"


def main():
    if len(sys.argv) < 2:
        print("Usage: python upload_xml_s3.py <file_path>")
        sys.exit(1)

    file_path = Path(sys.argv[1])
    if not file_path.exists():
        print(f"Error: File not found: {file_path}")
        sys.exit(1)

    # Step 1: Login
    print("Logging in...")
    login_response = requests.post(
        f"{API_URL}/api/v1/auth/login",
        data={"username": USERNAME, "password": PASSWORD},
    )
    login_response.raise_for_status()
    access_token = login_response.json()["access_token"]
    print("✓ Logged in")

    # Step 2: Get presigned URL
    print("Getting presigned URL...")
    headers = {
        "accept": "application/json",
        "Content-Type": "application/json",
        "Authorization": f"Bearer {access_token}",
    }
    response = requests.post(
        f"{API_URL}/api/v1/users/{USER_ID}/import/apple/xml/s3",
        headers=headers,
        json={
            "filename": file_path.name,
            "expiration_seconds": 300,
            "max_file_size": 52428800,
        },
    )
    response.raise_for_status()
    presigned_data = response.json()
    print("✓ Got presigned URL")

    # Step 3: Upload to S3
    print(f"Uploading {file_path.name}...")
    with open(file_path, "rb") as f:
        files = {"file": (file_path.name, f, "application/xml")}
        upload_response = requests.post(
            presigned_data["upload_url"],
            data=presigned_data["form_fields"],
            files=files,
        )
    upload_response.raise_for_status()
    print(f"✓ Upload complete! File key: {presigned_data['file_key']}")
    print("Processing will happen in the background...")


if __name__ == "__main__":
    main()
```
### Method 2: Direct Upload

#### Prepare an API Key

```python
import requests

API_URL = "https://api.example.com"
API_KEY = "your-api-key"
USER_ID = "3fa85f64-5717-4562-b3fc-2c963f66afa6"
```
#### Upload the File

```python
# Upload directly to API
with open("export.xml", "rb") as f:
    response = requests.post(
        f"{API_URL}/api/v1/users/{USER_ID}/import/apple/xml/direct",
        headers={"X-Open-Wearables-API-Key": API_KEY},
        files={"file": f},
    )
response.raise_for_status()
result = response.json()
print("✓ Upload complete!")
print(f"Status: {result['status']}")
print(f"Task ID: {result['task_id']}")
```
#### Processing Happens in the Background

The system:

1. Receives the file contents
2. Queues a `process_xml_upload` Celery task
3. Returns immediately with a task ID
4. A Celery worker processes the XML file
5. Imports data into the database
#### View Complete Script (Copy & Run)

```python
#!/usr/bin/env python3
"""
Upload Apple Health XML export directly to Open Wearables API.

Usage: python upload_xml_direct.py export.xml
"""
import sys
from pathlib import Path

import requests

API_URL = "https://api.example.com"
API_KEY = "your-api-key"
USER_ID = "your-user-id"


def main():
    if len(sys.argv) < 2:
        print("Usage: python upload_xml_direct.py <file_path>")
        sys.exit(1)

    file_path = Path(sys.argv[1])
    if not file_path.exists():
        print(f"Error: File not found: {file_path}")
        sys.exit(1)

    # Check file size
    file_size_mb = file_path.stat().st_size / (1024 * 1024)
    if file_size_mb > 10:
        print(f"Warning: File is {file_size_mb:.1f} MB. "
              "Consider using the S3 method for large files.")

    # Upload directly
    print(f"Uploading {file_path.name}...")
    with open(file_path, "rb") as f:
        response = requests.post(
            f"{API_URL}/api/v1/users/{USER_ID}/import/apple/xml/direct",
            headers={"X-Open-Wearables-API-Key": API_KEY},
            files={"file": f},
        )
    response.raise_for_status()
    result = response.json()
    print("✓ Upload complete!")
    print(f"Status: {result['status']}")
    print(f"Task ID: {result['task_id']}")
    print("Processing will happen in the background...")


if __name__ == "__main__":
    main()
```
## Data Imported

**Workouts**

- Activity type (running, cycling, swimming, etc.)
- Duration and timestamps
- Distance, calories, elevation
- Heart rate statistics (min/max/avg)

**Time Series Samples**

- Heart rate
- Steps
- Active energy
- Distance
- Blood oxygen
- And 100+ other metrics

See the Data Types Guide for the complete list.
## Best Practices

- **Use S3 for Large Files** - Files over 10 MB should use the presigned URL method to avoid timeouts.
- **Handle Async Processing** - Import processing is asynchronous; don't expect immediate data availability.
- **Monitor Task Status** - Use Celery Flower or logs to monitor processing status.
- **Dedupe Handled Automatically** - Records with the same `external_id` won't be duplicated.