Performance Optimization

This guide covers essential techniques to optimize your JifiJs application for maximum performance, scalability, and efficiency.

🎯 Performance Goals

Target metrics for a well-optimized API:

  • Response Time: < 100ms for cached requests, < 500ms for database queries
  • Throughput: 1000+ requests/second per server instance
  • Database Queries: < 50ms average
  • Memory Usage: < 512MB per instance
  • CPU Usage: < 70% under normal load

🚀 Caching Strategies

1. Redis Caching

JifiJs includes built-in Redis caching:

import { CacheService } from './services/cache.service';

export class UserService {
  async getUser(id: string) {
    const cacheKey = `user:${id}`;

    // Try cache first
    const cached = await CacheService.get(cacheKey);
    if (cached) {
      return JSON.parse(cached);
    }

    // Query database
    const user = await User.findById(id);

    // Cache for 1 hour
    await CacheService.set(cacheKey, JSON.stringify(user), 3600);

    return user;
  }
}
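
For reference, here is a minimal sketch of what such a service might look like under the hood, assuming a thin wrapper around ioredis (JifiJs's actual CacheService may differ):

import Redis from 'ioredis';

// Hypothetical wrapper for illustration only; not JifiJs's actual implementation
const redis = new Redis(process.env.REDIS_URL!);

export class CacheService {
  static async get(key: string): Promise<string | null> {
    return redis.get(key);
  }

  static async set(key: string, value: string, ttlSeconds: number): Promise<void> {
    await redis.set(key, value, 'EX', ttlSeconds); // EX = expiry in seconds
  }

  static async del(key: string): Promise<void> {
    await redis.del(key);
  }
}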

2. Cache Invalidation

Invalidate cache on updates:

export class UserService {
  async updateUser(id: string, data: Partial<IUser>) {
    const user = await User.findByIdAndUpdate(id, data, { new: true });

    // Invalidate cache
    await CacheService.del(`user:${id}`);
    await CacheService.del(`users:list`); // List cache

    return user;
  }
}

3. Cache Patterns

Cache-Aside (Lazy Loading)

async function getData(key: string) {
  // 1. Check cache
  let data = await cache.get(key);
  if (data) return data;

  // 2. Load from database
  data = await database.find(key);

  // 3. Update cache
  await cache.set(key, data);

  return data;
}

Write-Through

async function saveData(key: string, data: any) {
  // 1. Write to database
  await database.save(key, data);

  // 2. Update cache
  await cache.set(key, data);
}

Write-Behind (Async)

async function saveData(key: string, data: any) {
  // 1. Write to cache immediately
  await cache.set(key, data);

  // 2. Queue database write
  await queue.add('db-write', { key, data });
}
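
With write-behind, something must eventually drain the queue. A minimal sketch of the consumer, assuming `queue` is a Bull queue and `database.save` is the same placeholder as above:

// Bull routes named jobs to a matching named processor
queue.process('db-write', async (job) => {
  const { key, data } = job.data;
  await database.save(key, data); // persisted outside the request path
});

The trade-off: writes acknowledged from cache can be lost if the queue or worker fails before the database write completes, so reserve this pattern for data you can afford to replay or lose.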

🗄️ Database Optimization

1. Indexing

Create indexes for frequently queried fields:

// user.model.ts
import { Schema, model } from 'mongoose';

const userSchema = new Schema({
  email: {
    type: String,
    required: true,
    unique: true, // unique: true already creates a single-field index
  },
  username: {
    type: String,
    required: true,
    index: true, // ← Single-field index
  },
  createdAt: {
    type: Date,
    default: Date.now,
    index: true,
  },
});

// Compound index for common queries
userSchema.index({ email: 1, status: 1 });

// Text search index
userSchema.index({ username: 'text', bio: 'text' });

export const User = model('User', userSchema);
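
Indexes only pay off if queries actually use them; `explain()` shows the plan MongoDB chose. A quick check against the compound index above:

// Look for IXSCAN in the winning plan; COLLSCAN means a full collection scan
const plan = await User.find({ email: 'a@example.com', status: 'active' })
  .explain('executionStats');

console.log(plan.queryPlanner.winningPlan);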

2. Query Optimization

Use Projection - Select only needed fields:

// ❌ Bad - Fetches all fields
const users = await User.find({});

// ✅ Good - Fetch only needed fields
const users = await User.find({}).select('name email avatar');

Use Lean - Return plain JavaScript objects:

// ❌ Bad - Returns Mongoose documents (slower)
const users = await User.find({});

// ✅ Good - Returns plain objects (faster)
const users = await User.find({}).lean();

Limit Results:

// ❌ Bad - Fetch all users
const users = await User.find({});

// ✅ Good - Paginate
const users = await User.find({})
  .limit(20)
  .skip(page * 20);

3. Aggregation Optimization

// ❌ Bad - Fetches every document, then filters in application memory
const users = await User.find({});
const stats = {
  total: users.length,
  active: users.filter(u => u.status === 'active').length,
  premium: users.filter(u => u.tier === 'premium').length,
};

// ✅ Good - Single aggregation computed in the database
const stats = await User.aggregate([
  {
    $group: {
      _id: null,
      total: { $sum: 1 },
      active: {
        $sum: { $cond: [{ $eq: ['$status', 'active'] }, 1, 0] }
      },
      premium: {
        $sum: { $cond: [{ $eq: ['$tier', 'premium'] }, 1, 0] }
      },
    },
  },
]);

4. Connection Pooling

Configure the MongoDB connection pool:

// config/database.ts
import mongoose from 'mongoose';

mongoose.connect(process.env.MONGODB_URI!, {
  maxPoolSize: 50, // Maximum connections in the pool
  minPoolSize: 10, // Minimum connections kept open
  socketTimeoutMS: 45000, // Close idle sockets after 45 seconds
  serverSelectionTimeoutMS: 5000,
  family: 4, // Use IPv4
});
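
It also helps to log connection lifecycle events so pool saturation and dropouts show up in your logs (using the same `logger` as elsewhere in this guide):

mongoose.connection.on('connected', () => logger.info('MongoDB connected'));
mongoose.connection.on('disconnected', () => logger.warn('MongoDB disconnected'));
mongoose.connection.on('error', (err) => logger.error('MongoDB connection error', err));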

⚡ Application-Level Optimization

1. Compression

Enable gzip compression:

import compression from 'compression';

app.use(compression({
  level: 6, // Compression level (0-9)
  threshold: 1024, // Only compress responses > 1KB
  filter: (req, res) => {
    // Honor a per-request opt-out header
    if (req.headers['x-no-compression']) {
      return false;
    }
    return compression.filter(req, res);
  },
}));

2. Response Streaming

Stream large responses instead of buffering them:

export class ReportController {
  async downloadReport(req: Request, res: Response) {
    const reportStream = await ReportService.generateLargeReport(req.params.id);

    res.setHeader('Content-Type', 'text/csv');
    res.setHeader('Content-Disposition', 'attachment; filename=report.csv');

    // Stream directly to the response
    reportStream.pipe(res);
  }
}
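
One caveat: a bare `.pipe()` does not forward source-stream errors to the response. If you want failures handled and both streams cleaned up, `pipeline` from `stream/promises` is a safer replacement for the last line above (a sketch):

import { pipeline } from 'stream/promises';

// Rejects (and destroys both streams) if reading or writing fails
try {
  await pipeline(reportStream, res);
} catch (err) {
  logger.error('Report stream failed', err);
}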

3. Parallel Processing

Execute independent operations in parallel:

// ❌ Bad - Sequential (slow)
const user = await User.findById(id);
const posts = await Post.find({ userId: id });
const comments = await Comment.find({ userId: id });

// ✅ Good - Parallel (fast)
const [user, posts, comments] = await Promise.all([
  User.findById(id),
  Post.find({ userId: id }),
  Comment.find({ userId: id }),
]);
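
Note that `Promise.all` rejects as soon as any one promise rejects. When the pieces can fail independently and partial data is acceptable, `Promise.allSettled` is a useful variant:

// Each entry is { status: 'fulfilled', value } or { status: 'rejected', reason }
const [userRes, postsRes, commentsRes] = await Promise.allSettled([
  User.findById(id),
  Post.find({ userId: id }),
  Comment.find({ userId: id }),
]);

const user = userRes.status === 'fulfilled' ? userRes.value : null;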

4. Lazy Loading

Load expensive data only when needed:

export class UserService {
  async getUser(id: string, includeStats = false) {
    const user = await User.findById(id).lean();

    // Only fetch stats if requested
    if (includeStats) {
      user.stats = await this.getUserStats(id);
    }

    return user;
  }
}

🔄 Async/Background Processing

1. Queue Heavy Tasks

Use Bull queues for expensive operations:

// ❌ Bad - Block the request
app.post('/api/export', async (req, res) => {
  const data = await generateLargeExport(req.user.id);
  res.json(data); // User waits for the entire export
});

// ✅ Good - Queue the job
app.post('/api/export', async (req, res) => {
  const job = await exportQueue.add({
    userId: req.user.id,
    format: req.body.format,
  });

  res.json({
    message: 'Export started',
    jobId: job.id,
    statusUrl: `/api/export/status/${job.id}`,
  });
});
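
The `statusUrl` above implies a polling endpoint. A minimal sketch built on Bull's job API (the route shape and response fields are assumptions, not a fixed JifiJs contract):

app.get('/api/export/status/:jobId', async (req, res) => {
  const job = await exportQueue.getJob(req.params.jobId);
  if (!job) {
    return res.status(404).json({ message: 'Job not found' });
  }

  // getState() resolves to 'waiting', 'active', 'completed', 'failed', etc.
  const state = await job.getState();
  res.json({ jobId: job.id, state, result: job.returnvalue });
});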

2. Batch Operations

Group multiple operations:

// ❌ Bad - One lookup and send per user, sequentially
for (const userId of userIds) {
  await sendWelcomeEmail(userId);
}

// ✅ Good - One query for all users, then send in parallel
const users = await User.find({ _id: { $in: userIds } });
await Promise.all(users.map(user => sendWelcomeEmail(user)));
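
The same idea applies to writes: Mongoose's `bulkWrite` sends many updates in a single command. A sketch, assuming a hypothetical `welcomed` flag we want to set on the same users:

// One round trip instead of userIds.length separate updateOne calls
await User.bulkWrite(
  userIds.map((id) => ({
    updateOne: {
      filter: { _id: id },
      update: { $set: { welcomed: true } }, // 'welcomed' is illustrative
    },
  }))
);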

🌐 Network Optimization

1. HTTP/2

Enable HTTP/2 for multiplexing:

import spdy from 'spdy';
import fs from 'fs';

const options = {
  key: fs.readFileSync('./ssl/server.key'),
  cert: fs.readFileSync('./ssl/server.crt'),
};

spdy.createServer(options, app).listen(3000);

2. CDN for Static Assets

Use a CDN for static files:

// config/cdn.ts
export const getCDNUrl = (path: string) => {
  if (process.env.NODE_ENV === 'production') {
    return `${process.env.CDN_URL}/${path}`;
  }
  return `/static/${path}`;
};

// Usage
const avatarUrl = getCDNUrl(`avatars/${user.id}.jpg`);

3. ETag / Conditional Requests

import express from 'express';

app.set('etag', 'strong'); // Enable strong ETags

// Manual ETag for API responses
app.get('/api/data', async (req, res) => {
  const data = await fetchData();
  const etag = generateETag(data);

  res.setHeader('ETag', etag);
  res.setHeader('Cache-Control', 'public, max-age=300');

  // Check if the client already has the current version
  if (req.headers['if-none-match'] === etag) {
    return res.status(304).end();
  }

  res.json(data);
});
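
`generateETag` is not defined above; one straightforward implementation hashes the serialized payload (a sketch, not a library function):

import { createHash } from 'crypto';

// Quotes are part of the ETag syntax: ETag: "abc123..."
function generateETag(data: unknown): string {
  const hash = createHash('sha1').update(JSON.stringify(data)).digest('base64');
  return `"${hash}"`;
}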

💾 Memory Optimization

1. Avoid Memory Leaks

// ❌ Bad - Creates a memory leak (the plain object grows without bound)
const cache = {};
app.get('/api/data/:id', (req, res) => {
  cache[req.params.id] = fetchData(req.params.id);
  res.json(cache[req.params.id]);
});

// ✅ Good - Use a proper cache with a TTL and a key limit
import NodeCache from 'node-cache';
const cache = new NodeCache({ stdTTL: 600, maxKeys: 1000 });

app.get('/api/data/:id', (req, res) => {
  let data = cache.get(req.params.id);
  if (!data) {
    data = fetchData(req.params.id);
    cache.set(req.params.id, data);
  }
  res.json(data);
});

2. Stream Large Files

import fs from 'fs';

// ❌ Bad - Loads the entire file into memory
const file = await fs.promises.readFile('large-file.txt');
res.send(file);

// ✅ Good - Stream the file
const stream = fs.createReadStream('large-file.txt');
stream.pipe(res);

3. Pagination

// ❌ Bad - Loads all records
const users = await User.find({});

// ✅ Good - Paginated
const page = parseInt(req.query.page as string, 10) || 1;
const limit = 20;

const users = await User.find({})
  .limit(limit)
  .skip((page - 1) * limit)
  .lean();

const total = await User.countDocuments();

res.json({
  users,
  pagination: {
    page,
    limit,
    total,
    pages: Math.ceil(total / limit),
  },
});

📊 Monitoring & Profiling

1. Response Time Tracking

import responseTime from 'response-time';

app.use(responseTime((req, res, time) => {
  // Log slow requests
  if (time > 1000) {
    logger.warn('Slow request', {
      method: req.method,
      url: req.url,
      duration: time,
    });
  }

  // Track metrics
  metrics.timing('http.response_time', time, {
    method: req.method,
    path: req.route?.path,
  });
}));

2. Memory Monitoring

import { memoryUsage } from 'process';

setInterval(() => {
  const usage = memoryUsage();

  metrics.gauge('memory.heap_used', usage.heapUsed);
  metrics.gauge('memory.heap_total', usage.heapTotal);
  metrics.gauge('memory.rss', usage.rss);

  // Alert if memory usage is high
  if (usage.heapUsed > 400 * 1024 * 1024) { // > 400MB
    logger.warn('High memory usage', usage);
  }
}, 60000); // Every minute

3. Database Query Profiling

// Enable query logging in development
if (process.env.NODE_ENV === 'development') {
  mongoose.set('debug', (collectionName, method, query, doc) => {
    logger.debug('MongoDB Query', {
      collection: collectionName,
      method,
      query,
    });
  });
}
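
Debug logging shows every query; to surface only slow ones in any environment, a small schema plugin can time find operations with query middleware (a sketch; the 100 ms threshold and the internal `_startTime` property are arbitrary choices):

import { Schema } from 'mongoose';

// Attach with: userSchema.plugin(slowQueryPlugin)
export function slowQueryPlugin(schema: Schema) {
  schema.pre(/^find/, function () {
    (this as any)._startTime = Date.now();
  });

  schema.post(/^find/, function () {
    const duration = Date.now() - (this as any)._startTime;
    if (duration > 100) {
      logger.warn('Slow query', { model: (this as any).model?.modelName, duration });
    }
  });
}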

🚀 Production Optimizations

1. Cluster Mode

Use all CPU cores:

// server.ts
import cluster from 'cluster';
import os from 'os';

if (cluster.isPrimary) {
  const numCPUs = os.cpus().length;

  console.log(`Primary ${process.pid} is running`);

  // Fork one worker per core
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} died`);
    cluster.fork(); // Restart the worker
  });
} else {
  // Worker processes run the actual server
  require('./app');
  console.log(`Worker ${process.pid} started`);
}

2. PM2 Configuration

// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'jifijs-api',
    script: './dist/server.js',
    instances: 'max', // Use all CPU cores
    exec_mode: 'cluster',
    max_memory_restart: '512M', // Restart if memory > 512MB
    env: {
      NODE_ENV: 'production',
    },
    error_file: './logs/err.log',
    out_file: './logs/out.log',
    merge_logs: true,
    autorestart: true,
    watch: false,
  }],
};
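
Start and manage the app with the usual PM2 commands:

npm install -g pm2
pm2 start ecosystem.config.js --env production
pm2 reload jifijs-api   # zero-downtime restart in cluster mode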

3. Load Balancing

Nginx configuration:

upstream jifijs_backend {
    least_conn;
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;
    server_name api.example.com;

    # Gzip compression
    gzip on;
    gzip_types text/plain application/json;

    # Cache static files
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # Proxy to Node.js
    location / {
        proxy_pass http://jifijs_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

📈 Performance Checklist

  • Caching: Implemented Redis caching for frequent queries
  • Database Indexes: Created indexes for all query fields
  • Query Optimization: Using lean(), select(), and pagination
  • Compression: Enabled gzip compression
  • Connection Pooling: Configured database pool size
  • Rate Limiting: Protected expensive endpoints
  • Background Jobs: Using queues for heavy tasks
  • Monitoring: Tracking response times and errors
  • CDN: Serving static assets from CDN
  • Cluster Mode: Running multiple instances
  • Load Balancer: Nginx or similar in front
  • HTTP/2: Enabled for multiplexing

🔧 Benchmarking

Load Testing with Artillery

# artillery.yml
config:
  target: "http://localhost:3000"
  phases:
    - duration: 60
      arrivalRate: 10
      name: "Warm up"
    - duration: 120
      arrivalRate: 50
      name: "Sustained load"
    - duration: 60
      arrivalRate: 100
      name: "High load"

scenarios:
  - name: "Get users"
    flow:
      - get:
          url: "/api/users"
          capture:
            - json: "$.data[0].id"
              as: "userId"
      - get:
          url: "/api/users/{{ userId }}"

Run the tests:

npm install -g artillery
artillery run artillery.yml

💡 Pro Tip: Always measure before optimizing. Use profiling tools to identify actual bottlenecks rather than guessing. Premature optimization is the root of all evil!