How I Built an AI-Powered Pet Nutrition Engine with Django and OpenAI
Designing a Django-based system for personalized pet meal planning and structured health insights
Introduction
Generic, one-size-fits-all pet diets are a problem. A 2-year-old athletic Golden Retriever has very different nutritional needs from a 10-year-old sedentary Chihuahua with diabetes. Yet many pet food recommendations still ignore breed-specific risks, activity levels, existing health conditions, and individual differences.
That gap is what pushed me to design and build an AI-powered nutrition engine for FAMMO.
The goal was to create a system that can take a detailed pet profile — including breed, age, weight, body condition score, activity level, allergies, and health history — and turn that data into structured nutrition guidance.
In this article, I'm focusing on the backend architecture and AI workflow I built in Django. Some parts are already implemented in the platform, while other parts are designed as the next step for scaling and model evolution.
At a high level, the system is designed to:
- Generate personalized calorie recommendations and macronutrient targets
- Assess preventive health-related risks across multiple categories
- Use OpenAI for structured prediction workflows
- Log predictions for analysis and future model development
- Expose results through a REST API for web and mobile clients
System Overview
The nutrition engine follows a simple pipeline:
User Input → Pet Profile Extraction → AI Engine → Structured Output → API Response
When a user requests a nutrition recommendation, the flow looks like this:
- A client sends pet data through a REST API
- A serializer validates the input and converts it into a PetProfile dataclass
- The AI engine generates a structured nutrition prediction
- The output is normalized into calories, macros, diet style, and risk indicators
- The result can be logged for future analysis and returned to the client
One of the main design goals was backend flexibility.
Rather than locking the system to a single implementation, I designed it so the same API layer can work with different AI backends. That means OpenAI can be used during early iterations, while a proprietary ML backend can be introduced later without changing client-facing API behavior.
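As a minimal sketch of that idea, a small registry keyed by a settings value can hand out whichever backend is configured. The class names follow the article's structure, but the registry and `get_engine` helper here are illustrative, not the exact production code:

```python
class NutritionEngineInterface:
    """Shared contract: every backend implements predict()."""

    def predict(self, pet) -> dict:
        raise NotImplementedError


class OpenAIEngine(NutritionEngineInterface):
    def predict(self, pet) -> dict:
        # Real implementation would call the OpenAI API here.
        return {"backend": "openai"}


class ProprietaryEngine(NutritionEngineInterface):
    def predict(self, pet) -> dict:
        # Real implementation would run a local ML model here.
        return {"backend": "proprietary"}


# In Django, the key would come from settings (e.g. a hypothetical
# settings.AI_NUTRITION_BACKEND) rather than a hardcoded default.
_BACKENDS = {"openai": OpenAIEngine, "proprietary": ProprietaryEngine}


def get_engine(name: str = "openai") -> NutritionEngineInterface:
    """Return the configured engine; callers never import a backend directly."""
    try:
        return _BACKENDS[name]()
    except KeyError as exc:
        raise ValueError(f"Unknown AI backend: {name!r}") from exc
```

Because clients only ever call get_engine(), swapping OpenAI for a proprietary model later is a configuration change, not an API change.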
Architecture
FAMMO is structured as a modular Django monolith with clear separation of concerns. The AI nutrition workflow lives in its own isolated layer instead of being mixed directly into unrelated app logic.
A simplified structure looks like this:
fammo-backend/
├── ai_core/ # AI prediction engine
│ ├── engine.py # Factory for AI backends
│ ├── interfaces.py # Contracts such as PetProfile and ModelOutput
│ ├── openai_backend.py
│ ├── proprietary_backend.py
│ ├── models.py # Prediction logging
│ ├── serializers.py
│ └── views.py
├── pet/ # Pet management
├── userapp/ # Authentication and user profile
├── aihub/ # Earlier / related AI workflows
├── vets/ # Veterinary-facing logic
├── subscription/ # Usage plans and limits
└── api/ # Public/mobile-facing endpoints
Why This Structure?
The ai_core layer is intentionally separated from the rest of the project so that the prediction system remains easier to test, evolve, and reason about.
This structure supports:
- Testability: the prediction flow is centered around Python dataclasses and interfaces, not tightly coupled Django models
- Extensibility: new AI backends can be added behind a shared contract
- Observability: predictions can be logged with structured input and output snapshots
Key Design Decisions:
- Interface-based architecture: the prediction engine is built around a shared contract such as predict(), which allows multiple implementations.
- Factory pattern: the app can select the configured backend based on settings instead of hardcoding one provider everywhere.
- Dataclass-first design: the prediction pipeline uses pure Python types for internal logic, which keeps the AI layer cleaner and easier to test.
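A minimal sketch of what that dataclass-first profile might look like. The field names mirror the API examples later in the article; the real definition in ai_core/interfaces.py may differ:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PetProfile:
    """Internal, strongly typed pet profile used by the AI layer (illustrative)."""

    species: str
    age_years: float
    weight_kg: float
    body_condition_score: int
    activity_level: str = "moderate"
    health_goal: str = "maintenance"
    existing_conditions: tuple[str, ...] = ()
    food_allergies: tuple[str, ...] = ()
```

Making the dataclass frozen keeps profiles immutable as they move through the prediction pipeline, so no layer can mutate the input another layer already validated.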
Database Design
The core models reflect the actual domain of pet nutrition rather than trying to force all logic into a single table.
Pet Model (Simplified)
class Pet(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    name = models.CharField(max_length=100)
    pet_type = models.ForeignKey(PetType, on_delete=models.SET_NULL, null=True)
    breed = models.ForeignKey(Breed, on_delete=models.SET_NULL, null=True)
    birth_date = models.DateField(null=True, blank=True)
    current_age_years = models.PositiveIntegerField(null=True, blank=True)
    current_age_months = models.PositiveIntegerField(null=True, blank=True)
    weight = models.DecimalField(max_digits=5, decimal_places=2, null=True)
    body_type = models.ForeignKey(BodyType, on_delete=models.SET_NULL, null=True)
    activity_level = models.ForeignKey(ActivityLevel, on_delete=models.SET_NULL, null=True)
    food_allergies = models.ManyToManyField('FoodAllergy', blank=True)
    health_issues = models.ManyToManyField('HealthIssue', blank=True)
    treat_frequency = models.ForeignKey('TreatFrequency', on_delete=models.SET_NULL, null=True)
Why Normalize Lookup Data?
Instead of hardcoding values such as "Labrador" or "High Activity" inside free-text fields, I used relational lookup tables like Breed and ActivityLevel.
That gives several benefits:
- Consistency: avoids typo-based duplication
- Localization: easier to support multilingual content
- Flexibility: adding breeds or activity levels does not require code changes
- Analytics: easier to group and analyze pets by breed, type, or activity level
Prediction Logging Model
For the AI layer, I designed a logging model that stores prediction snapshots.
class NutritionPredictionLog(models.Model):
    created_at = models.DateTimeField(auto_now_add=True, db_index=True)
    backend = models.CharField(max_length=50, default="openai")
    model_version = models.CharField(max_length=100, blank=True, db_index=True)
    species = models.CharField(max_length=20, db_index=True)
    life_stage = models.CharField(max_length=20, blank=True, db_index=True)
    health_goal = models.CharField(max_length=50, blank=True, db_index=True)
    weight_kg = models.FloatField(null=True, blank=True)
    input_payload = models.JSONField()
    output_payload = models.JSONField()
Why Store Snapshots Instead of Full Relational References?
For AI prediction history, I prefer immutable snapshots.
If a user later changes a pet's weight or health details, historical predictions should still reflect the exact input state that existed at the time of prediction. JSON payloads make that possible while also making export and downstream analysis easier.
This approach also supports future use cases such as:
- building internal analytics
- reviewing output quality over time
- preparing training datasets for future ML models
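To make the snapshot idea concrete, here is a sketch of how a log row could be assembled before saving. The build_log_row helper and the trimmed PetProfile are illustrative, not the production code:

```python
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass
class PetProfile:
    """Trimmed-down profile for illustration."""
    species: str
    weight_kg: float
    health_goal: str


def build_log_row(pet: PetProfile, output: dict,
                  backend: str, model_version: str) -> dict:
    """Flatten a prediction into the columns + JSON snapshots the log stores."""
    return {
        "backend": backend,
        "model_version": model_version,
        # Denormalized fields for cheap filtering and analytics
        "species": pet.species,
        "health_goal": pet.health_goal,
        "weight_kg": pet.weight_kg,
        # Immutable snapshots: later profile edits never rewrite history
        "input_payload": asdict(pet),
        "output_payload": output,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
```

Because the snapshots are plain JSON-serializable dictionaries, exporting prediction history for analysis or future training data is a straightforward dump.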
API Design
The nutrition workflow is exposed through Django REST Framework using a clean request/response contract.
Example Endpoint
POST /api/v1/ai/nutrition/
Example Request
{
"species": "dog",
"breed": "Golden Retriever",
"breed_size_category": "large",
"age_years": 3.5,
"life_stage": "adult",
"weight_kg": 29.0,
"body_condition_score": 4,
"sex": "male",
"neutered": true,
"activity_level": "moderate",
"health_goal": "weight_loss",
"existing_conditions": ["hip_dysplasia"],
"food_allergies": ["chicken"]
}
Example Response
{
"calories_per_day": 780,
"calorie_range_min": 702,
"calorie_range_max": 858,
"protein_percent": 28,
"fat_percent": 12,
"carbohydrate_percent": 40,
"diet_style": "weight_loss",
"diet_style_confidence": 0.87,
"risks": {
"weight_risk": "high",
"joint_risk": "medium",
"digestive_risk": "low",
"metabolic_risk": "medium",
"kidney_risk": "low",
"dental_risk": "low"
},
"meals_per_day": 2,
"portion_size_grams": 195,
"model_version": "gpt-4o-2024-08-06",
"prediction_timestamp": "2025-12-01T14:32:15Z",
"confidence_score": 0.85,
"veterinary_consultation_recommended": false,
"alert_messages": [
"Weight loss target detected - reduce calories by 15-20%",
"Monitor weight weekly and adjust portions as needed"
]
}
Serializer Philosophy
The serializers in this layer act primarily as input and output contracts, not as business logic containers.
class PetProfileSerializer(serializers.Serializer):
    species = serializers.ChoiceField(choices=['dog', 'cat'], required=True)
    age_years = serializers.FloatField(min_value=0.0, max_value=25.0, required=True)
    body_condition_score = serializers.IntegerField(min_value=1, max_value=5, required=True)

    def to_pet_profile(self):
        return PetProfile(**self.validated_data)
This gives a clean separation between:
- external JSON payloads
- internal strongly-typed Python objects
That separation makes testing much easier because the AI engine can be exercised without needing database objects for every test.
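As a sketch of what that looks like in practice, the engine can be exercised with a stub that honors the same predict() contract. The threshold rule inside recommend() is purely hypothetical, added only to show post-processing living outside the engine:

```python
class StubEngine:
    """Test double implementing the same predict() contract as real backends."""

    def predict(self, pet) -> dict:
        return {"calories_per_day": 700, "diet_style": "weight_loss"}


def recommend(engine, pet) -> dict:
    """Thin orchestration of the kind the view layer performs."""
    result = engine.predict(pet)
    # Hypothetical post-processing rule, purely for demonstration:
    result["veterinary_consultation_recommended"] = result["calories_per_day"] < 300
    return result
```

Because the engine is injected, the whole flow runs in a unit test with no database fixtures and no network calls.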
AI Integration
This is the core of the system.
The AI layer is designed around multiple backend possibilities, with OpenAI used as the primary prediction provider and a proprietary model path kept in mind for future evolution.
1. OpenAI Backend with Structured Outputs
For the OpenAI path, I used a schema-based approach so the model returns predictable structured output instead of free-form text.
class NutritionPrediction(BaseModel):
    calories_per_day: int
    calorie_range_min: int
    calorie_range_max: int
    protein_percent: int
    fat_percent: int
    carbohydrate_percent: int
    diet_style: str
    risks: NutritionRisks
    alert_messages: list[str]
The prediction backend follows this pattern:
class OpenAIEngine(NutritionEngineInterface):
    def predict(self, pet: PetProfile) -> ModelOutput:
        prompt = self._build_prompt(pet)
        response = client.responses.parse(
            model="gpt-4o-2024-08-06",
            input=prompt,
            text_format=NutritionPrediction,
        )
        parsed = response.output_parsed
        return self._convert_to_model_output(parsed)
Why This Matters
Earlier AI integrations often break because outputs are inconsistent:
- sometimes JSON
- sometimes prose
- sometimes missing required keys
- sometimes using different field names for the same idea
Using a strict structured schema solves a big part of that problem by making the output much more reliable and easier to validate.
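To illustrate, a trimmed-down version of the Pydantic schema rejects malformed provider output before it can leak into the rest of the pipeline (the models here are simplified from the ones shown above):

```python
from pydantic import BaseModel, ValidationError


class NutritionRisks(BaseModel):
    weight_risk: str
    joint_risk: str


class NutritionPrediction(BaseModel):
    calories_per_day: int
    diet_style: str
    risks: NutritionRisks


# Well-formed output parses into typed fields:
good = NutritionPrediction.model_validate_json(
    '{"calories_per_day": 780, "diet_style": "weight_loss", '
    '"risks": {"weight_risk": "high", "joint_risk": "medium"}}'
)

# Missing keys or wrong types fail loudly instead of surfacing downstream:
try:
    NutritionPrediction.model_validate_json('{"calories_per_day": "lots"}')
except ValidationError:
    pass  # caught at the boundary, where it belongs
```

Validation failures become explicit, loggable events at the AI boundary rather than mysterious KeyErrors deep in the response-building code.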
Prompt Strategy
The prompt is designed to provide:
- the pet's structured profile
- nutritional context
- safety constraints
- guidance for risk-aware reasoning
Example:
prompt = f"""You are a professional pet nutrition assistant for FAMMO.
**Pet Profile:**
- Species: {pet.species}
- Breed: {pet.breed} ({pet.breed_size_category})
- Age: {pet.age_years} years ({pet.life_stage})
- Weight: {pet.weight_kg} kg
- Body Condition Score: {pet.body_condition_score}/5
- Activity Level: {pet.activity_level}
- Health Goal: {pet.health_goal}
- Existing Conditions: {", ".join(pet.existing_conditions) or "None"}
- Food Allergies: {", ".join(pet.food_allergies) or "None"}
**Task:**
Generate a structured nutrition prediction including calories, macro balance,
diet style, risk indicators, and practical feeding notes.
**Important:**
- Consider breed-specific predispositions
- Adjust recommendations based on body condition
- Be conservative when a veterinary consultation may be appropriate
"""
2. Proprietary Backend Path
The architecture also leaves room for a future proprietary ML backend.
class ProprietaryEngine(NutritionEngineInterface):
    def __init__(self, model_path=None):
        self.model_path = model_path or "ml/models/calorie_regressor_v1.pkl"
        self._model = None

    def predict(self, pet: PetProfile) -> ModelOutput:
        if self._model is None:
            self._model = joblib.load(self.model_path)
        features_df = encode_pet_profile(pet)
        calories = int(self._model.predict(features_df)[0])
        risks = self._assess_risks(pet)
        diet_style = self._map_health_goal_to_diet(pet.health_goal)
        return ModelOutput(
            calories_per_day=calories,
            risks=risks,
            diet_style=diet_style,
            model_version="proprietary-v1.0.0",
        )
If and when enough structured prediction history is available, this kind of backend can become useful for reducing cost, lowering latency, or introducing hybrid AI pipelines.
Rather than claiming the system already depends on proprietary ML, the accurate statement is this:
the system is architected to support both OpenAI-based inference and future proprietary model backends.
Key Challenges and Solutions
Challenge 1: Inconsistent AI Responses
One of the earliest problems in AI application development is output inconsistency. A response might come back as narrative text in one request and JSON in another.
Solution: schema-driven structured outputs.
Moving to a structured parsing workflow significantly reduced output variability and made downstream validation much simpler.
Challenge 2: Validating Rich, Nested Input Data
Pet nutrition profiles are not simple. They may include dozens of fields, optional values, and cross-field logic.
Examples:
- a cat should not be assigned an unrealistic breed-size category
- certain combinations of body condition score and health goal should be reviewed more carefully
Solution: layered validation.
- DRF serializers handle request validation
- dataclasses handle internal structure
- post-init or internal checks handle cross-field rules
That layered model keeps validation readable and easier to maintain.
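The cross-field layer can be sketched with a dataclass __post_init__ hook. The two rules below correspond to the examples above; the exact thresholds are illustrative:

```python
from dataclasses import dataclass


@dataclass
class PetProfile:
    species: str
    breed_size_category: str
    body_condition_score: int  # 1-5 scale, as in the serializer
    health_goal: str

    def __post_init__(self):
        # Cross-field rule: cats should not get dog-style size categories
        if self.species == "cat" and self.breed_size_category in {"large", "giant"}:
            raise ValueError("Unrealistic breed size category for a cat")
        # Cross-field rule: underweight pets should not be pushed to lose more
        if self.body_condition_score <= 2 and self.health_goal == "weight_loss":
            raise ValueError("Weight loss goal conflicts with low body condition score")
```

Serializer-level checks stay per-field and simple, while rules that span multiple fields live in one obvious place inside the dataclass itself.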
Challenge 3: Keeping AI Logic Testable
A common mistake is mixing AI logic directly with database operations.
That quickly leads to code that is harder to test and harder to reuse.
Solution: keep prediction logic pure, and perform database logging at the boundary layer.
That means:
- the engine focuses on prediction
- the API/view layer handles persistence
- the system is easier to test without requiring database-heavy test cases
Challenge 4: Making Recommendations More Context-Aware
Generic nutrition guidance is not enough. Breed, age, body condition, health goals, and medical history all affect useful recommendations.
Solution: richer prompts plus structured risk fields.
By explicitly asking the AI to assess separate risk categories and consider breed-related predispositions, the output becomes more useful than a generic calorie number alone.
Performance and Scalability
This system was designed with production-minded patterns, even though not every scale-oriented feature needs to be activated on day one.
Optimization Patterns Already Reflected in the Design
1. Lazy model loading
class ProprietaryEngine:
    def __init__(self, model_path="ml/models/calorie_regressor_v1.pkl"):
        self.model_path = model_path
        self._model = None

    def predict(self, pet):
        # Load the model on first use instead of at startup
        if self._model is None:
            self._model = joblib.load(self.model_path)
        return self._model.predict(...)
2. Indexed prediction logs
class Meta:
    indexes = [
        models.Index(fields=['-created_at', 'species']),
        models.Index(fields=['backend', 'model_version']),
        models.Index(fields=['health_goal']),
    ]
3. Denormalized analytics fields
Storing selected fields such as species and health_goal directly on prediction logs makes filtering and reporting easier without depending on expensive joins.
Future Scaling Paths
As usage grows, the architecture can be extended with:
- Redis caching for repeat or template-like predictions
- request throttling and abuse protection
- asynchronous task queues for heavier workflows
- more advanced observability and performance dashboards
- hybrid OpenAI / proprietary model routing
This is one of the benefits of building the AI layer behind interfaces instead of tightly coupling everything to one provider or one model.
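For the caching path, one common approach is to derive a deterministic cache key from the normalized request payload plus the model version, so identical profiles hit the same cached prediction. This helper is a sketch of that idea, not part of the current codebase:

```python
import hashlib
import json


def prediction_cache_key(payload: dict, model_version: str) -> str:
    """Deterministic key: same profile + same model -> same cache slot."""
    # Canonical JSON (sorted keys, no whitespace) makes the key independent
    # of the order the client happened to send fields in.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]
    return f"nutrition:{model_version}:{digest}"
```

Including the model version in the key means a model upgrade naturally invalidates old cached predictions instead of serving stale ones.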
Conclusion
Building FAMMO's AI nutrition engine reinforced something important for me:
AI products are not just about model output — they are mostly about software design, validation, and reliable data flow.
The most valuable lessons from this build were:
- clear data contracts matter
- structured outputs matter
- logging matters
- backend flexibility matters
- validation at boundaries matters
What This Project Taught Me
Use dataclasses for AI workflows
They keep the prediction layer easier to test and evolve.
Log structured snapshots
Good logging is not just debugging — it is future infrastructure.
Design for backend flexibility early
Even if only one provider is active today, a clean interface will save time later.
Validate aggressively at the edges
Bad input should never quietly flow into AI logic.
Start practical, then optimize
A working OpenAI-powered architecture is often the fastest path to proving value before building custom models.
Final Thought
For me, this project was not just about generating meal plans. It was about building an architecture that could support personalized pet nutrition in a more reliable, structured, and scalable way.
That is the part I find most exciting: using Django, APIs, and AI together to turn messy real-world profile data into useful product behavior.
About FAMMO
FAMMO is the product context behind this work — a platform focused on personalized pet nutrition and preventive health support.
If you are building Django applications, AI-powered workflows, or structured recommendation systems, this kind of architecture can be adapted far beyond pet nutrition.
Looking for Django / AI collaboration or consulting?
You can reach me at malek@fammo.ai
Technologies used:
Django, Django REST Framework, OpenAI, Pydantic, PostgreSQL, JWT Authentication, Flutter, Celery, Scikit-learn
Follow my work:
LinkedIn