AWS S3 Private Storage Setup with Presigned URLs¶
This document explains how to configure AWS S3 storage with private bucket access using presigned URLs, compatible with AWS Control Tower security requirements.
Overview¶
The application now supports private S3 buckets with secure presigned URL generation. This means:

- ✅ S3 bucket remains private (no public access required)
- ✅ Compatible with AWS Control Tower non-public access policies
- ✅ Secure, time-limited presigned URLs generated on-demand
- ✅ URLs cached for performance (80% of expiration time)
- ✅ Configurable URL expiration (default: 7 days)
How It Works¶
- Upload: Files are uploaded to your private S3 bucket
- Storage: Only the S3 key (path) is stored in the database
- Access: When an image is requested, a presigned URL is generated automatically
- Caching: Presigned URLs are cached to reduce AWS API calls
- Security: URLs expire after the configured time period
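The steps above can be sketched in a few lines of Python (purely illustrative — `sign` is a stand-in for the AWS SDK's presign call, and the real implementation lives in the Laravel service layer):

```python
DB = {}  # fake database: record id -> stored S3 key

def sign(s3_key: str, expires_seconds: int) -> str:
    # Stand-in for the AWS SDK's presign call (returns a time-limited URL)
    return f"https://bucket.s3.amazonaws.com/{s3_key}?X-Amz-Expires={expires_seconds}"

def store_upload(record_id: int, s3_key: str) -> None:
    # Only the S3 key is persisted, never a public URL
    DB[record_id] = s3_key

def url_for(record_id: int, expires_seconds: int = 604800) -> str:
    # A presigned URL is generated on demand when the image is requested
    return sign(DB[record_id], expires_seconds)

store_upload(1, "uploads/abc123.png")
print(url_for(1))
```

Because the database holds only the key, rotating credentials or changing the expiration policy never requires touching stored records.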
AWS S3 Configuration¶
Step 1: S3 Bucket Setup¶
- Create your S3 bucket (if not already created)
- Keep the bucket PRIVATE - do not enable public access
- Block all public access - this is the default and recommended setting
Step 2: Configure CORS (Required)¶
Your S3 bucket needs CORS configuration to allow browser access to presigned URLs.
Go to your S3 bucket → Permissions → CORS configuration and add:
```json
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "HEAD"],
        "AllowedOrigins": [
            "https://yourdomain.com",
            "https://www.yourdomain.com"
        ],
        "ExposeHeaders": ["ETag", "Content-Length", "Content-Type"],
        "MaxAgeSeconds": 3600
    }
]
```
Important: Replace yourdomain.com with your actual domain(s).
For development, you can also add local origins (for example `http://localhost:8000` and `http://127.0.0.1:8000`, adjusted to your dev server's port) to AllowedOrigins.
Step 3: IAM User Permissions¶
Ensure your IAM user (or role) has these permissions:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::your-bucket-name",
                "arn:aws:s3:::your-bucket-name/*"
            ]
        }
    ]
}
```
Note: You do NOT need `s3:PutObjectAcl` or any public access permissions.
Step 4: Bucket Policy (Optional)¶
For extra security, you can add a bucket policy that denies `s3:GetObject` to any principal that is not an IAM user (note this condition also blocks assumed roles, so adjust it if your application runs under a role):
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPublicAccess",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::your-bucket-name/*",
            "Condition": {
                "StringNotEquals": {
                    "aws:PrincipalType": "User"
                }
            }
        }
    ]
}
```
Application Configuration¶
Environment Variables¶
Add/update these in your .env file:
```env
# AWS S3 Configuration
AWS_ACCESS_KEY_ID=your_access_key_id
AWS_SECRET_ACCESS_KEY=your_secret_access_key
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=your-bucket-name

# Optional: Presigned URL expiration time (in minutes)
# Default: 10080 (7 days)
S3_PRESIGNED_URL_EXPIRATION=10080
```
Presigned URL Expiration Options¶
Configure how long presigned URLs remain valid:
- 1 day: `S3_PRESIGNED_URL_EXPIRATION=1440`
- 7 days (default): `S3_PRESIGNED_URL_EXPIRATION=10080`
- 30 days: `S3_PRESIGNED_URL_EXPIRATION=43200`
Recommendation: Use 7 days (default). URLs are cached and automatically refreshed. Note that AWS caps SigV4 presigned URLs at 7 days (604,800 seconds), so values above 10080 minutes may be rejected by S3.
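The setting is expressed in minutes; a quick sanity check of the values above:

```python
# Expiration values are expressed in minutes
MINUTES_PER_DAY = 24 * 60

print(1 * MINUTES_PER_DAY)    # 1 day  -> 1440
print(7 * MINUTES_PER_DAY)    # 7 days -> 10080
print(30 * MINUTES_PER_DAY)   # 30 days -> 43200
```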
How Presigned URLs Work¶
Automatic Generation¶
Images stored in S3 automatically get presigned URLs when accessed:
```php
// Database stores: "uploads/abc123.png" (S3 key only)
// Model accessor automatically generates presigned URL:
$image->output_url;
// Returns: "https://bucket.s3.amazonaws.com/uploads/abc123.png?X-Amz-Algorithm=..."
```
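Under the hood, a presigned URL is just a SigV4 query-string signature over the request. The following Python sketch (a hypothetical `presign_get` helper, shown only to demystify the `X-Amz-*` parameters — the application delegates this to the AWS SDK) builds one from scratch using the standard library:

```python
import datetime
import hashlib
import hmac
from urllib.parse import quote

def presign_get(bucket, key, region, access_key, secret_key, expires=604800):
    # Sketch of SigV4 query-string presigning for a GET object request.
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    qs = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}"
                  for k, v in sorted(params.items()))
    canonical_request = "\n".join([
        "GET",
        "/" + quote(key),
        qs,
        f"host:{host}\n",   # canonical headers (trailing newline required)
        "host",             # signed headers
        "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    def _hmac(key_bytes, msg):
        return hmac.new(key_bytes, msg.encode(), hashlib.sha256).digest()
    signing_key = _hmac(_hmac(_hmac(_hmac(
        b"AWS4" + secret_key.encode(), datestamp), region), "s3"), "aws4_request")
    signature = hmac.new(signing_key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"https://{host}/{quote(key)}?{qs}&X-Amz-Signature={signature}"

url = presign_get("bucket", "uploads/abc123.png", "us-east-1",
                  "AKIAEXAMPLE", "secretkey")
print(url)
```

The key point is that signing is entirely local: no AWS API call is made to create the URL, which is why generation is cheap and cacheable.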
Caching¶
- Presigned URLs are cached for 80% of their expiration time
- Cache key: `presigned_url_{disk}_{path}`
- Automatic cache invalidation when URL nears expiration
- Reduces AWS API calls and improves performance
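The cache-key format and the 80% rule above amount to two small functions; sketched in Python for clarity (the real implementation is in the Laravel service):

```python
def cache_key(disk: str, path: str) -> str:
    # Cache key format described above: presigned_url_{disk}_{path}
    return f"presigned_url_{disk}_{path}"

def cache_ttl_seconds(expiration_minutes: int) -> int:
    # URLs are cached for 80% of their lifetime, so a fresh URL is
    # generated well before the old one stops working.
    return int(expiration_minutes * 60 * 0.8)

print(cache_key("s3", "uploads/abc123.png"))
print(cache_ttl_seconds(10080))  # 7-day URLs are cached ~5.6 days
```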
Manual URL Generation¶
If you need to generate presigned URLs manually:
```php
use App\Services\S3PresignedUrlService;

// Generate presigned URL (default expiration)
$url = S3PresignedUrlService::generatePresignedUrl('path/to/file.png', 's3');

// Generate with custom expiration (in minutes)
$url = S3PresignedUrlService::generatePresignedUrl('path/to/file.png', 's3', 1440); // 1 day

// Get URL with automatic storage type detection
$url = S3PresignedUrlService::getFileUrl($path, 'S3');

// Clear cached URL (force regeneration on next access)
S3PresignedUrlService::clearCachedUrl('path/to/file.png', 's3');
```
Migration from Public S3 Bucket¶
If you're migrating from a public S3 bucket setup:
Backwards Compatibility¶
The system handles both old (full URLs) and new (S3 keys) formats:

- Old records: Full URLs like `https://bucket.s3.amazonaws.com/file.png` work as-is
- New records: S3 keys like `file.png` automatically get presigned URLs
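The dual-format check reduces to a prefix test; a minimal sketch in Python (the function name is illustrative, not the application's actual helper):

```python
def needs_presigning(stored_value: str) -> bool:
    # Old records stored full URLs; new records store bare S3 keys
    return not stored_value.startswith(("http://", "https://"))

print(needs_presigning("https://bucket.s3.amazonaws.com/file.png"))  # False
print(needs_presigning("uploads/abc123.png"))                        # True
```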
Migration Steps¶
- Update application with these changes
- Deploy to production
- Test with new uploads - they should use S3 keys
- Optionally migrate old URLs:
```php
// Optional: Migrate old URLs to S3 keys
UserOpenai::where('storage', 's3')
    ->where('output', 'LIKE', 'https://%')
    ->chunk(100, function ($images) {
        foreach ($images as $image) {
            // Extract the S3 key from the full URL
            $parsed = parse_url($image->output);
            $s3Key = ltrim($parsed['path'], '/');
            $image->output = $s3Key;
            $image->save();
        }
    });
```
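The key-extraction step in the migration snippet is plain URL parsing (take the path component, strip the leading slash); the same logic in Python, for reference:

```python
from urllib.parse import urlparse

def s3_key_from_url(url: str) -> str:
    # Extract the object key ("uploads/abc123.png") from a full S3 URL
    return urlparse(url).path.lstrip("/")

print(s3_key_from_url("https://bucket.s3.amazonaws.com/uploads/abc123.png"))
```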
Cloudflare R2 Configuration¶
The same presigned URL system works with Cloudflare R2:
```env
CLOUDFLARE_R2_ACCESS_KEY_ID=your_r2_access_key
CLOUDFLARE_R2_SECRET_ACCESS_KEY=your_r2_secret_key
CLOUDFLARE_R2_BUCKET=your-r2-bucket
CLOUDFLARE_R2_ENDPOINT=https://your-account-id.r2.cloudflarestorage.com
CLOUDFLARE_R2_DEFAULT_REGION=auto

# Optional: Custom domain (if configured)
CLOUDFLARE_R2_URL=https://image.yourdomain.com
```
Note: If you have a custom R2 domain configured and it's public, the system will use that instead of presigned URLs.
Troubleshooting¶
Issue: Images not loading¶
Check:
1. CORS configuration in S3 bucket
2. Your domain is in `AllowedOrigins`
3. Cache cleared: `php artisan cache:clear`
4. Logs: check `storage/logs/laravel.log` for errors
Issue: "Access Denied" errors¶
Check:
1. IAM user has correct permissions
2. Bucket policy doesn't block the IAM user
3. AWS credentials in `.env` are correct
4. Bucket region matches `AWS_DEFAULT_REGION`
Issue: URLs expire too quickly¶
Solution: Increase the expiration time in `.env`, e.g. `S3_PRESIGNED_URL_EXPIRATION=10080` for the 7-day maximum.
Issue: Too many AWS API calls¶
Solution: Presigned URLs are already cached. If you're still seeing high API usage:
1. Check cache is working: `php artisan cache:clear` then test
2. Increase expiration time
3. Check for code generating URLs in loops
Security Considerations¶
✅ Recommended Security Settings¶
- Private bucket - Block all public access
- Time-limited URLs - Use 7-day expiration (default)
- CORS restrictions - Only allow your domain(s)
- IAM principle of least privilege - Only required S3 permissions
- AWS CloudTrail - Monitor S3 access logs
🔒 AWS Control Tower Compliance¶
This implementation is fully compatible with AWS Control Tower:

- ✅ No public bucket access required
- ✅ No public ACLs needed
- ✅ Follows AWS security best practices
- ✅ Uses IAM-based access control
- ✅ Time-limited access tokens
Performance Optimization¶
Caching Strategy¶
- URLs cached for 80% of expiration time
- Example: 7-day expiration → cached for ~5.6 days
- Automatic regeneration before expiration
- Cache key includes disk and path for isolation
CDN Integration (Optional)¶
For even better performance, consider:

- Amazon CloudFront in front of S3
  - Configure CloudFront to use presigned URLs
  - Add the CloudFront domain to CORS
- Cloudflare (if using R2)
  - R2 with a custom domain provides CDN automatically
  - No additional configuration needed
Testing¶
Verify Setup¶
- Upload test image via your application
- Check database - should store S3 key, not full URL
- View image - should display correctly
- Inspect URL - should include
X-Amz-query parameters - Check expiration - URL should include
X-Amz-Expiresparameter
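The last two checks can be automated by parsing the URL's query string; a small Python sketch (the sample URL below is made up for illustration):

```python
from urllib.parse import urlparse, parse_qs

def looks_presigned(url: str) -> bool:
    # A presigned URL must carry the SigV4 query parameters
    qs = parse_qs(urlparse(url).query)
    return all(p in qs for p in ("X-Amz-Algorithm", "X-Amz-Expires", "X-Amz-Signature"))

sample = ("https://bucket.s3.amazonaws.com/uploads/abc123.png"
          "?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=604800&X-Amz-Signature=abc")
print(looks_presigned(sample))  # True
```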
Test Presigned URL Generation¶
```php
// Test presigned URL generation (e.g. in php artisan tinker)
use App\Services\S3PresignedUrlService;

$url = S3PresignedUrlService::generatePresignedUrl('test-file.png', 's3');
echo $url;
// Should output a URL with X-Amz-Algorithm, X-Amz-Credential, etc.

// Verify URL structure
print_r(parse_url($url));
```
Support¶
For issues or questions:
1. Check Laravel logs: `storage/logs/laravel.log`
2. Check AWS CloudTrail for S3 access logs
3. Review this documentation
4. Check AWS S3 documentation: https://docs.aws.amazon.com/s3/
Summary¶
✅ Private S3 bucket - No public access needed
✅ AWS Control Tower compatible - Meets security requirements
✅ Secure presigned URLs - Time-limited access
✅ Performance optimized - Intelligent caching
✅ Backwards compatible - Works with existing data
✅ Easy configuration - Just update .env
Your S3 storage is now secure, compliant, and optimized for performance! 🚀