InPost Group is an innovative European out-of-home delivery company, revolutionizing the way parcels are delivered to customers. With operations across several countries, our network of intelligent lockers gives customers a fast, convenient, and secure delivery option. InPost Group is publicly traded, with a market capitalization of about $5 billion as of March 2023. With over 10,000 employees worldwide, we are one of the largest out-of-home delivery providers in Europe, committed to sustainable and efficient delivery solutions that meet the evolving needs of customers in today's rapidly changing landscape.
We are seeking a skilled and innovative AI Engineer with experience working with Generative AI (GenAI) models, such as Large Language Models (LLMs), and integrating them into web applications. This role combines the responsibilities of a full-stack developer with advanced knowledge of machine learning models, APIs, and cloud infrastructure, with a focus on bringing state-of-the-art AI capabilities into user-friendly, efficient web-based applications.
Key Responsibilities:
Model Integration & API Development:
- Integrate LLMs and other GenAI models into web applications through efficient API design and implementation.
- Build and optimize API endpoints to allow seamless, real-time interaction between front-end applications and back-end AI models.
- Design and develop secure, scalable, and high-performing microservices for AI model deployment.
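As a flavor of the API work above, here is a minimal sketch of a completion endpoint's request handling. The `generate_text` stub is hypothetical and stands in for a call to a real hosted or self-hosted model; a production service would add authentication, rate limiting, and streaming.

```python
import json

# Hypothetical stub standing in for a real LLM call (hosted API or
# local inference server); it simply echoes the prompt here.
def generate_text(prompt: str) -> str:
    return f"echo: {prompt}"

def handle_completion_request(raw_body: str) -> dict:
    """Validate a JSON request body, call the model, and return an
    HTTP-style status plus payload."""
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        return {"status": 400, "error": "body must be valid JSON"}
    prompt = payload.get("prompt")
    if not isinstance(prompt, str) or not prompt.strip():
        return {"status": 400, "error": "'prompt' must be a non-empty string"}
    return {"status": 200, "completion": generate_text(prompt)}
```

Keeping validation and model invocation in one pure function like this makes the endpoint easy to unit-test independently of the web framework that eventually wraps it.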
Back-End Development & AI Pipelines:
- Develop robust back-end systems in Python to support the deployment, scalability, and maintenance of GenAI models.
- Build and maintain data pipelines, including preprocessing data and post-processing AI model outputs for consumption by applications.
- Implement best practices for handling sensitive data and maintaining model performance.
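The pre- and post-processing steps in these pipelines can be sketched as two small pure functions. This is an illustrative sketch only: the character limit and the fenced-JSON cleanup are assumptions about what a given model and application need.

```python
import json

def preprocess(text: str, max_chars: int = 2000) -> str:
    """Normalize whitespace and truncate user input before it
    reaches the model (a simple guard against oversized prompts)."""
    cleaned = " ".join(text.split())
    return cleaned[:max_chars]

def postprocess(model_output: str) -> dict:
    """Extract a JSON object from raw model output, tolerating a
    ```json ... ``` fence, and fall back to plain text otherwise."""
    stripped = model_output.strip()
    stripped = stripped.removeprefix("```json").removesuffix("```").strip()
    try:
        return {"kind": "json", "data": json.loads(stripped)}
    except json.JSONDecodeError:
        return {"kind": "text", "data": stripped}
```

Structuring the pipeline as composable functions keeps each stage independently testable and easy to swap as model output formats evolve.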
Infrastructure & Deployment:
- Use Kubernetes and Docker for containerization and orchestration to ensure scalable deployment and management of AI applications.
- Implement continuous integration and continuous deployment (CI/CD) pipelines for automated testing and deployment of code changes.
- Maintain a scalable and secure cloud infrastructure, leveraging platforms such as Google Cloud Platform or Azure for model training, storage, and deployment.
LLM and GenAI Ecosystem Expertise:
- Utilize vector databases (e.g., Pinecone, Weaviate, or Faiss) to manage and retrieve embeddings for efficient similarity search and recommendation systems.
- Work with frameworks and services for model development and deployment, including Hugging Face Transformers, LangChain, and the OpenAI API.
- Optimize and fine-tune LLMs to improve performance based on application needs.