SPOT’IA – AI-Powered Personal Asset Management
SPOT’IA is an AI-powered personal digital asset management platform designed to help users organize, track, and protect their belongings throughout life’s stages. It allows individuals to catalog assets, upload documents, and capture photos, while AI enriches this data with detailed descriptions and estimated values. The platform centralizes receipts, warranties, and other important records in a secure digital repository. Users can access insights, receive AI-assisted guidance, and share relevant information with family, tenants, or insurers. By combining organization, intelligence, and accessibility, SPOT’IA simplifies asset management, reduces administrative overhead, and ensures peace of mind when making decisions about personal property.
Project Objectives
1. Build a Scalable, Production-Grade Platform from Scratch
Design and ship a cloud-native, full-stack system — web and mobile (iOS & Android) — engineered to serve hundreds of thousands of users.
2. Transform Any Visual or Document Input into a Structured Asset Record
Build a multi-modal AI pipeline — combining computer vision, OCR, and document parsing — that automatically extracts key asset attributes (brand, model, serial number, condition) from photos, videos, receipts, invoices, and contracts, and populates a structured, searchable asset catalog with near-zero manual input.
3. Enable AI-Driven Decision Making and Output Generation
Leverage Generative AI to produce contextual, actionable outputs grounded in the user's actual asset data, including dynamic market and replacement valuations, warranty expiry detection, insurance gap analysis, and ready-to-use documentation for real-world scenarios such as insurance claims, rental property inventories, and asset transfer reports.
Challenges
1. Gen AI Workflow Orchestration
The core complexity lies in integrating multiple Gen AI models and third-party APIs into coherent, multi-step workflows where each model handles a specific task (recognition, extraction, valuation, generation) and outputs feed into the next stage. Building reliable, maintainable pipelines across heterogeneous AI services is a central engineering challenge.
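The staged pattern described above can be sketched in framework-agnostic Python. The stage functions and their outputs here are purely hypothetical stand-ins for the real vision, OCR, and valuation services; retries are shown inline (an orchestrator like Prefect provides this natively):

```python
import time

def with_retries(fn, attempts=3, delay=0.0):
    """Run fn, retrying on transient failure, as an orchestrator would."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)

# Hypothetical stages; in production these wrap heterogeneous AI services.
def recognize(photo):
    return {"brand": "Acme", "model": "X100"}       # vision model output

def extract(record, document):
    record["serial_number"] = "SN-0001"             # OCR / parsing output
    return record

def valuate(record):
    record["estimated_value_eur"] = 250             # market-data lookup
    return record

def run_pipeline(photo, document):
    # Each stage's output feeds the next, forming one coherent workflow.
    record = with_retries(lambda: recognize(photo))
    record = with_retries(lambda: extract(record, document))
    record = with_retries(lambda: valuate(record))
    return record
```

The point is the shape, not the stubs: each model handles one task, and the pipeline is the unit that must be made reliable and maintainable.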
2. Scale
The platform is built to serve 100,000+ users, processing 5,000,000+ images and 20,000+ videos per month. This drives significant requirements across media ingestion, async processing, storage architecture, and AI inference throughput.
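For rough capacity planning, the stated monthly volumes translate into a modest average ingestion rate but much higher peaks. The 10x peak factor below is an illustrative assumption, not a measured figure:

```python
SECONDS_PER_MONTH = 30 * 24 * 3600        # ~2.59M seconds in a 30-day month

images_per_month = 5_000_000
videos_per_month = 20_000

avg_images_per_sec = images_per_month / SECONDS_PER_MONTH   # ~1.93 img/s
peak_factor = 10                          # illustrative burst assumption
peak_images_per_sec = avg_images_per_sec * peak_factor      # ~19.3 img/s

print(f"avg: {avg_images_per_sec:.2f} img/s, peak: {peak_images_per_sec:.1f} img/s")
```

Averages like these are why the architecture leans on async processing: sustained load is small, but bursts must be absorbed by queues and horizontal scaling rather than by over-provisioned steady-state compute.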
3. Cloud Best Practices
To support this scale in production, the platform is architected around core cloud engineering principles: horizontal scalability, high availability, fault tolerance, data security and encryption, observability, and zero-downtime deployments.
4. FinOps
We apply FinOps discipline by rightsizing compute, optimizing storage tiers, controlling AI API usage, and maintaining full cost visibility, ensuring the platform stays economically sustainable as it grows.
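One concrete way to control AI API usage is a spend guard that tracks estimated cost per call against a monthly cap. The class name, prices, and budget below are illustrative placeholders, not the project's actual figures:

```python
class AIBudgetGuard:
    """Reject further AI API calls once an estimated monthly cap is reached."""

    def __init__(self, monthly_budget_usd):
        self.monthly_budget_usd = monthly_budget_usd
        self.spent_usd = 0.0

    def charge(self, estimated_cost_usd):
        # Check the cap before the call is made, not after the bill arrives.
        if self.spent_usd + estimated_cost_usd > self.monthly_budget_usd:
            raise RuntimeError("monthly AI budget exhausted")
        self.spent_usd += estimated_cost_usd
        return self.spent_usd

guard = AIBudgetGuard(monthly_budget_usd=1.00)   # illustrative cap
guard.charge(0.40)   # e.g. a vision call
guard.charge(0.40)   # e.g. an LLM call
# a third 0.40 call would exceed the cap and be rejected
```

The same idea scales up to per-tenant quotas and alerting thresholds, which is where full cost visibility comes from.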
Technologies
1. AI & Intelligent Workflows
The AI layer is built around OpenAI, Mistral, and SerpAPI, each serving a distinct role: computer vision and OCR for asset recognition, LLM-based text processing for description generation and document analysis, and real-time market data retrieval for price and value estimation.
AI workflows are orchestrated using Prefect, self-managed on Kubernetes for security and full operational control. Prefect enables scalable, multi-step AI pipelines with built-in retries, scheduling, monitoring, and versioning, making production iterations safe and predictable.
LangFuse is used across all AI workflows for end-to-end observability, capturing traces to support debugging, performance analysis, and iterative fine-tuning of models in production.
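Conceptually, the tracing described above resembles a decorator that records each workflow step's inputs, outputs, and latency. This is a stdlib-only illustration of the idea, not the LangFuse SDK:

```python
import functools
import time

TRACES = []   # with LangFuse, these records ship to the tracing backend

def traced(step_name):
    """Record input, output, and duration for each decorated workflow step."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            TRACES.append({
                "step": step_name,
                "input": args,
                "output": result,
                "duration_s": time.perf_counter() - start,
            })
            return result
        return wrapper
    return decorator

@traced("describe_asset")
def describe_asset(name):          # hypothetical LLM-backed step
    return f"A well-kept {name}."

describe_asset("bicycle")
```

Captured this way, every step of a multi-model pipeline becomes inspectable after the fact, which is what makes debugging and prompt iteration in production tractable.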
2. AWS Cloud Infrastructure
The platform runs on AWS. We’ve built a Multi-Account Landing Zone following the AWS Well-Architected Framework.
Application workloads run on EKS, configured for high availability and production-grade operations with Karpenter, ArgoCD, CloudWatch, Secrets Manager, and more.
Supporting infrastructure includes RDS (relational data), S3 (media and document storage), ElastiCache (caching), RabbitMQ (async messaging), and CloudFront (global content delivery).
3. DevOps & Automation
Infrastructure is managed as code (IaC) with Terraform and Terragrunt, automated via CI/CD pipelines covering both infrastructure and application delivery. ArgoCD handles Helm-based rollouts, ensuring consistent, auditable, and zero-downtime deployments across all environments.
Process
The project was developed over an 8-month period by a cross-functional team comprising a Project Manager, AI Engineers, a DevOps Architect, and Backend, Frontend, and Mobile Engineers.
The development process was structured around the following key phases:
- Infrastructure Setup — Provisioning the AWS Landing Zone and deploying core cloud services, including EKS clusters, databases, AI orchestration systems, message queues, and authentication services.
- AI & ML Engineering — Designing and implementing AI workflows, fine-tuning machine learning models, and iterating on Generative AI prompts to optimize output quality and contextual accuracy.
- Backend Development — Architecting and deploying scalable backend services to support asset data processing, document ingestion pipelines, and AI inference endpoints.
- Frontend & Mobile Development — Building the web frontend and cross-platform mobile applications, translating AI capabilities into intuitive user-facing experiences.
Results
- Successfully launched a production-ready platform capable of supporting 100,000+ users, built on a cloud-native architecture with high availability and zero-downtime deployments.
- Designed and deployed multi-stage AI pipelines orchestrated with Prefect on Kubernetes, capable of processing 5M+ images and 20K+ videos monthly. Pipelines integrate OpenAI (LLMs & vision), Mistral AI (LLMs), and SerpAPI, combining computer vision, OCR, and document parsing to transform unstructured inputs into structured asset records.
- Implemented an AI-driven automation layer for asset documentation, market-based valuation, and generation of structured outputs (insurance claims, rental inventories, asset transfer reports), significantly reducing manual processing overhead.
- Achieved cost-efficient scalability through Kubernetes-based auto-scaling, optimized storage tiering, and controlled AI API usage aligned with FinOps practices.
- Established end-to-end observability across AI workflows and infrastructure, improving debugging, performance tuning, and iteration speed in production.