Building a privacy-focused AI agent for automating professional profile creation across multiple platforms
April 2025 - Present • 2 months
Personal Project
Python, Ollama, Flask, Docker
The Profile Builder project addresses a common challenge faced by professionals: maintaining consistent, high-quality profiles across multiple freelance platforms and job boards. Instead of manually writing and optimizing profiles for each platform, this tool automates the process while preserving data privacy by using local LLM models rather than sending sensitive information to external APIs.
This project was driven by several key motivations:
For this project, I selected a stack that balances performance, privacy, and ease of deployment:
The project is built with a modular architecture consisting of several key components:
A key challenge was integrating with Ollama to run the deepseek-r1 model locally. This involved wrapping the Ollama API in a small client, tuning model parameters, and designing prompts that reliably produce usable profile content. The example below shows how a professional title is generated from portfolio data:
```python
# Example of generating a professional title with the LLM
def generate_title(self, portfolio_data, platform="general"):
    """Generate a professional title based on portfolio data."""
    prompt = f"""
    Create a compelling professional title for a {platform} profile
    based on the following information. The title should be concise,
    impactful, and highlight key expertise.

    Experience: {portfolio_data.get('experience', [])}
    Skills: {portfolio_data.get('skills', {})}

    Professional Title:
    """
    title = self.client.generate(prompt, max_tokens=50, temperature=0.7)
    return title.strip()
```
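The `self.client` used above is a thin wrapper around the local Ollama server. A minimal sketch of such a wrapper, assuming the official `ollama` Python package (the class name and defaults here are illustrative, not the project's actual implementation):

```python
import ollama


class OllamaClient:
    """Minimal wrapper around a locally running Ollama server (illustrative sketch)."""

    def __init__(self, model="deepseek-r1"):
        self.model = model

    def generate(self, prompt, max_tokens=200, temperature=0.7):
        # ollama.generate talks to the local server; num_predict and
        # temperature are standard Ollama model options.
        response = ollama.generate(
            model=self.model,
            prompt=prompt,
            options={"num_predict": max_tokens, "temperature": temperature},
        )
        return response["response"]
```

With a wrapper like this, all generation stays on the local machine; by default Ollama serves the model at `http://localhost:11434`.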
Different platforms have different content requirements and expectations. The system recognizes key platforms and tailors the content generation:
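As a rough illustration of how such tailoring could be configured (the platform names and limits below are hypothetical examples, not taken from the project):

```python
# Hypothetical registry of per-platform content expectations.
PLATFORM_PROFILES = {
    "upwork": {"max_title_chars": 70, "tone": "client-focused, results-driven"},
    "linkedin": {"max_title_chars": 220, "tone": "professional, keyword-rich"},
    "general": {"max_title_chars": 100, "tone": "concise and impactful"},
}


def platform_guidance(platform):
    """Return prompt guidance for a platform, falling back to general rules."""
    rules = PLATFORM_PROFILES.get(platform, PLATFORM_PROFILES["general"])
    return (
        f"Keep the title under {rules['max_title_chars']} characters "
        f"and use a {rules['tone']} tone."
    )
```

The guidance string is then appended to the generation prompt, so a single generator can serve every supported platform.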
The web interface was designed to be straightforward and user-friendly:
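A minimal sketch of what the Flask layer could look like (the route and field names are illustrative, not the project's actual API):

```python
import ollama
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/generate", methods=["POST"])
def generate_profile():
    """Accept portfolio data as JSON and return a generated profile title."""
    data = request.get_json(force=True)
    platform = data.get("platform", "general")
    prompt = (
        f"Create a compelling professional title for a {platform} profile.\n"
        f"Experience: {data.get('experience', [])}\n"
        f"Skills: {data.get('skills', {})}\n"
        "Professional Title:"
    )
    result = ollama.generate(
        model="deepseek-r1",
        prompt=prompt,
        options={"num_predict": 50},
    )
    return jsonify({"title": result["response"].strip()})


if __name__ == "__main__":
    app.run(debug=True)
```

The browser UI (or any client) POSTs the extracted portfolio JSON to this endpoint and renders the returned sections.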
The tool's key features include:

- All data remains on the user's device, with no external API calls
- Optimized content generation for various freelance platforms and job boards
- Automatic extraction of relevant information from existing portfolio websites
- Easy setup with Docker for a consistent environment across systems
Running large language models locally presents performance challenges, especially on systems with limited resources.
Implemented model parameter optimization for the deepseek-r1 model in Ollama, finding the right balance between quality and performance. Used techniques like context window management and response streaming to improve user experience even on systems with limited resources.
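As a rough sketch, these options can be passed through the `ollama` Python package; the values below are illustrative defaults rather than the project's tuned settings:

```python
import ollama


def stream_generation(prompt, model="deepseek-r1"):
    """Stream tokens as they are produced so the UI stays responsive."""
    # num_ctx caps the context window and num_predict caps the response length;
    # both reduce memory pressure on resource-constrained machines.
    chunks = ollama.generate(
        model=model,
        prompt=prompt,
        stream=True,
        options={"num_ctx": 2048, "num_predict": 256, "temperature": 0.7},
    )
    for chunk in chunks:
        # Each chunk carries a partial piece of text that can be forwarded
        # to the browser as soon as it arrives.
        yield chunk["response"]
```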
Portfolio websites vary greatly in structure, making consistent data extraction challenging.
Developed a flexible portfolio extractor that can identify common patterns in professional websites. Implemented fallback mechanisms when specific data points cannot be extracted, ensuring the system can still function with partial information.
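The extractor itself is not reproduced here, but the fallback idea can be sketched roughly as follows, assuming `requests` plus BeautifulSoup for parsing (the selectors are illustrative guesses at common portfolio markup, not the project's actual rules):

```python
import requests
from bs4 import BeautifulSoup


def extract_portfolio(url):
    """Best-effort extraction: return whatever fields can be found."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    data = {}

    # Try common patterns for a headline, falling back through alternatives.
    for selector in ["h1", ".headline", "header h2"]:
        node = soup.select_one(selector)
        if node and node.get_text(strip=True):
            data["headline"] = node.get_text(strip=True)
            break

    # Skills are often marked up as list items inside a "skills" section.
    skills = [li.get_text(strip=True) for li in soup.select(".skills li")]
    if skills:
        data["skills"] = skills

    return data  # may be partial; downstream prompts handle missing fields
```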
Getting consistent, high-quality outputs from LLMs requires careful prompt design.
Created a library of specialized prompts for different profile sections and platforms. These prompts include specific guidance on tone, structure, and content expectations for each platform, resulting in more consistent and appropriate outputs.
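As a simplified illustration (these are not the project's actual prompts), such a library can be organized as templates keyed by profile section, with platform guidance injected at generation time:

```python
# Simplified, illustrative prompt templates keyed by profile section.
SECTION_PROMPTS = {
    "title": (
        "Create a compelling professional title for a {platform} profile.\n"
        "{guidance}\n"
        "Experience: {experience}\n"
        "Professional Title:"
    ),
    "summary": (
        "Write a short professional summary for a {platform} profile.\n"
        "{guidance}\n"
        "Experience: {experience}\n"
        "Skills: {skills}\n"
        "Summary:"
    ),
}


def build_prompt(section, portfolio_data, platform="general", guidance=""):
    """Fill a section template with portfolio data and platform guidance."""
    return SECTION_PROMPTS[section].format(
        platform=platform,
        guidance=guidance,
        experience=portfolio_data.get("experience", []),
        skills=portfolio_data.get("skills", {}),
    )
```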
The Profile Builder with Local LLM project demonstrates how locally-run AI models can deliver powerful, privacy-preserving solutions for practical problems. By combining modern LLM technology with targeted prompting and a user-friendly interface, the tool streamlines the often tedious process of maintaining professional profiles across multiple platforms.
This project also highlights the growing maturity of local LLM solutions, offering an alternative to cloud-based APIs for tasks that involve sensitive personal or professional information. As local models continue to improve in capabilities and efficiency, we can expect to see more applications taking this approach.
The full source code for this project is available on GitHub, with the aim of contributing to the growing ecosystem of privacy-focused, local AI solutions. Feel free to explore, contribute, or customize it for your own needs.