AI Software Engineer – Python, LLM Integrations & Scalable Systems

🌍 Remote, USA 🎯 Full-time 🕐 Posted Recently

Job Description

We are seeking a hands-on AI Software Engineer to design, build, and deploy intelligent backend systems that power conversational AI, automation, and data-driven decision engines. You’ll collaborate with data scientists, ML engineers, and product teams to integrate LLM-based models (OpenAI, Anthropic, Meta Llama, etc.) into scalable microservices and internal tools.

Key Responsibilities

• Design and develop Python-based backend systems supporting AI/LLM workflows, APIs, and data pipelines.
• Build scalable microservices and vector-database integrations (e.g., Milvus, Pinecone, FAISS) for retrieval-augmented generation (RAG) pipelines.

• Integrate and orchestrate LLMs using APIs (OpenAI, Anthropic, Hugging Face, vLLM, Triton, or similar).
• Work closely with data engineering to optimize data ingestion, preprocessing, and embedding pipelines.
• Implement asynchronous and distributed processing (Celery, Kafka, or Ray).
• Deploy and monitor services on Docker/Kubernetes with CI/CD pipelines (GitHub Actions, Jenkins, or GitLab CI).
• Maintain documentation, testing, and model performance metrics.
• Collaborate with DevOps and security teams to ensure safe and reliable AI deployments.

Required Skills & Experience

• 3+ years of experience in backend or full-stack development with Python (FastAPI, Flask, or Django).
• Proven experience integrating AI/ML or NLP systems (LLMs, embeddings, transformers, etc.).
• Strong understanding of RESTful and async APIs, data serialization, and model inference optimization.
• Familiarity with vector databases (Milvus, Pinecone, FAISS, Weaviate) and document chunking/embedding techniques.
• Experience with SQL and NoSQL databases (PostgreSQL, MongoDB, Redis).

• Hands-on experience with Docker, Kubernetes, and cloud environments (AWS / GCP / Azure).
• Knowledge of MLOps workflows (model packaging, inference serving, versioning).
• Experience with Git, CI/CD, and automated testing.

Nice to Have

• Familiarity with AI voice technologies (Riva, ElevenLabs, VAPI SDK, or similar).
• Experience with LangChain, LlamaIndex, or Haystack for RAG pipelines.
• Exposure to NVIDIA Triton / TensorRT-LLM / vLLM for high-performance inference.
• Understanding of prompt engineering, retrieval evaluation, and fine-tuning pipelines.

• Experience contributing to open-source AI frameworks.

Why Join Us

• Build real AI products — from voice agents to LLM-powered automation systems — not just prototypes.
• Work with a high-performance engineering team using NVIDIA hardware and cutting-edge open-source tools.
• 100% remote flexibility, cross-functional collaboration, and ownership of critical AI systems.

