For over a decade now, OpenNebula Systems has been leading the development of the European open source technology that helps organizations around the world manage their corporate data centers and build their Enterprise Clouds.
If you want to join an established leader in the cloud infrastructure industry and the global open source community, keep reading: you can now become part of a team of exceptionally passionate and talented colleagues whose mission is to help the world's leading enterprises implement their next-generation edge and cloud strategies. We are hiring!
Since 2019, and thanks to the support of the European Commission, OpenNebula Systems has been leading edge computing innovation in Europe, investing heavily in research and open source development, and playing a key role in strategic EU initiatives such as the IPCEI-CIS and the “European Alliance for Industrial Data, Edge and Cloud”.
We are currently looking for a Senior Technologist for Artificial Intelligence with expertise in LLMs to join us in Europe as part of our new team developing the AI-enabled operations component of the next-generation management platform for the Cloud-Edge Computing Continuum.
Job Description
The AI Operations Team is responsible for developing the AI-enabled engine that optimizes operations on cloud and edge infrastructures. The engine will provide smart monitoring, intelligent workload forecasting, workload and infrastructure orchestration capabilities, and log and metric anomaly detection. The AI Team is also responsible for building new frameworks for AI on the cloud-to-edge continuum, enabling different downstream applications. This covers advanced prompt engineering and development, Retrieval Augmented Generation (RAG), fine-tuning, and instruction-tuning to augment LLMs.
We are looking for an experienced Software Developer with a strong understanding of LLMs, AI, AI Ops, and cloud best practices and deployments. The ideal candidate will be product-minded and experienced in Python back-end development. This role involves staying up to date with the latest research, advancements and techniques for using LLMs and generative AI in the context of Cloud-Edge Operations. Our AI Engineers combine an empirical, scientific approach to validating and evaluating the accuracy of LLM outputs with creative problem-solving and strategy to develop better methods in the novel field of LLM prompting and retrieval, together with data engineering best practices to optimize the cost and performance of cloud deployments and operations.
You’ll work in an agile environment to design, develop, test, maintain, and validate against real use cases a next-generation management platform for the Cloud-Edge Computing Continuum. You will also participate in the upstream community, working on challenging projects that develop innovative edge/cloud systems. Applicants should be passionate about the future of software-defined data centers, distributed systems, and open source.
What you will do
* Maintain a strong understanding of industry trends, existing service and product portfolios, and best practices
* Develop, experiment with, evaluate and optimize the use of various foundational LLMs such as GPT-4, Claude 3, Mistral, Llama, etc., for log anomaly detection, incident remediation, troubleshooting and root cause analysis in the context of cloud-edge deployments
* Develop LLM-based solutions such as Retrieval Augmented Generation (RAG) and Reinforcement Learning from Human Feedback (RLHF), using frameworks such as LangChain, LlamaIndex, Ludwig, MLflow and PyTorch
* Fine-tune open-source LLMs and compare/validate them against public LLM APIs
* Analyze functional and system requirements and key case studies, and build prototypes of solutions that meet the internal demands of our technology and services and the external demands of our customers
* Meet with prospects, customers and partners to deeply understand their most difficult challenges
* Define the design, roadmap and prototype of the cloud-edge architecture for an end-to-end integrated solution for the AI market
* Choose the appropriate services and technologies for the system, and create the documentation and diagrams of the integrated AI solution
* Contribute to and lead the development and testing of the AI software components, and their integration, deployment and validation using industry use cases
* Lead the execution of, and prepare proposals for, research and innovation projects related to cloud-edge management and AI
* Serve as an evangelist, contribute to the community, and advise on the promotion of solutions for the AI market
* Collaborate with other companies in the cloud-edge ecosystem within international projects, open-source communities and standardization bodies, with availability for occasional travel and participation in international events and meetings
* Write and maintain software documentation and project reports
* Create an inspiring team environment with an open communication culture
What you will bring
* Bachelor’s or Master’s degree in Mathematics, Computer Science, Software Engineering, or a related field
* 3+ years of hands-on experience in AI/ML, LLM and cloud systems development and integration using open-source technologies
* Demonstrated expertise in researching, developing, and implementing AI/ML and LLM algorithms for predictive analytics, workload optimization, log analysis, and anomaly detection to enhance system performance.
* Proficiency in designing, developing and maintaining Python code for AI/ML, data processing and general software engineering tasks.
* Extensive experience using various LLM/NLP models, libraries and frameworks (GPT-4, LangChain, LlamaIndex) and Python ML frameworks (PyTorch, TensorFlow, Keras).
* Passion for the use of generative AI and LLMs to revolutionize the world of work, and pride in developing tools with them that solve real-world problems and drive massive efficiency savings.
* A quick learner, independent thinker, and creative problem-solver who keeps up to speed with the latest developments and research on LLMs, and adapts their work accordingly
* Experience with Cloud Management platforms and their associated technologies, and an understanding of the implications of managing virtualized infrastructures and orchestrating the underlying subsystems.
What's in it for me?
Some of our benefits and perks vary depending on location and employment type, but we are proud to provide employees with the following:
* Competitive compensation package and flexible remuneration: Meals, Transport, Nursery/Childcare
* Customized workstation (macOS, Windows, Linux)
* Private health insurance
* Paid time off: Holidays, Personal Time, Sick Time, Parental Leave
* Afternoon-off working day every Friday and during the summer
* Remote company with a bright HQ centrally located in Madrid; offices in Boston (USA), Brussels (Belgium) and Brno (Czech Republic); and access to office space near your location when needed. During the first year, for onboarding purposes and for participation in certain projects, employees should be able to attend events and face-to-face meetings in our Madrid offices and other European cities. All employees are also required to attend our company-wide face-to-face all-hands meetings twice a year
* Healthy work-life balance: We encourage the right to digital disconnection and promote harmony between employees' personal and professional lives
* Flexible hiring options: Full Time/Part Time, Employee (Spain/USA) / Contractor (other locations)
* We are building an awesome, Engineering-First culture, and your opinion matters: thrive in the high-energy environment of a young company where openness, collaboration, risk-taking, and continuous growth are valued
* Be exposed to a broad technology ecosystem. We encourage learning and researching new technologies and methods as part of your everyday duties