Computer Vision

Experts in computer vision recruitment, we connect top talent with companies that build the future

Self-driving cars, facial recognition systems, sports performance analysis, medical imaging and precision agriculture – computer vision is no longer the technology of tomorrow; it's a reality of the here and now. Demand for skilled computer vision professionals has never been higher.

Finding candidates with expertise in computer science, Python and C++, digital image processing, deep learning, robotics, and more is an impossible ask without the right support. This is where DeepRec.ai comes in.

Our specialist recruitment consultants have built the trust and technical knowledge needed to connect job seekers with the best opportunities in this vibrant corner of the tech world.

Why Choose DeepRec.ai?

We’re proud to help our global network of computer vision candidates find fulfilling work, and we’re here to provide the tools to do it. From our dedicated AI community to our events programme and inclusive hiring methodology, our aim is to provide lasting value to the customers and candidates we serve.

We’re part of Trinnovo Group, a B Corp accredited recruitment specialist committed to making a positive impact. Contact the team to find out how we can help you thrive in the computer vision space.

The roles we recruit for in Computer Vision include:

  • Head of Computer Vision
  • Senior Computer Vision Engineer
  • Computer Vision Engineer
  • Senior Machine Learning Engineer (Computer Vision)
  • Machine Learning Engineer (Computer Vision)
  • Computer Vision Scientist 
  • Computer Vision Researcher

 

COMPUTER VISION CONSULTANTS

Anthony Kelly

Co-Founder & MD EU/UK

Paige Dillingham

Business Manager - Germany & Switzerland

Paddy Hobson

Senior Consultant | DACH

Harry Crick

Consultant | USA

LATEST JOBS

Munich, Bayern, Germany
Senior Machine Learning Engineer
I am hiring for a Senior Machine Learning Engineer to lead the development of cutting-edge AI models for Occupant and Driver Monitoring Systems (OMS/DMS).

Job Title: Senior Machine Learning Engineer
Location: Munich (hybrid, 1-3 days in office per week)

Key Responsibilities:
  • Computer Vision & ML Development: design and develop models for object detection (person detection, child seat detection, gaze detection), pose estimation (head pose estimation, facial landmark detection), and classification & localization (e.g., identifying and locating phones or objects within the vehicle).
  • Technical Leadership: lead the technical direction of projects, including setting milestones and ensuring delivery; plan and review development cycles, mentor team members, and guide research efforts.
  • Embedded Systems Integration: optimize and port computer vision models to embedded platforms; ensure model compatibility, performance, and efficiency on target hardware.
  • Full ML Pipeline Ownership: oversee data acquisition, preprocessing, and annotation; manage training pipelines and model iteration cycles.

Requirements:
  • PhD (or equivalent research experience) in Machine Learning, Computer Vision, or a related field.
  • Strong hands-on experience with Python (essential) and familiarity with C++ (nice to have).
  • Proficient in PyTorch, TensorFlow, and OpenCV.
  • Proven track record of deploying ML models to embedded systems.

Nice to Have:
  • Experience with Driver Monitoring Systems (DMS).
  • Experience with generative AI, e.g. diffusion models, GANs, etc.
Anthony Kelly
Berlin, Germany
Scene Understanding Engineer
Our client is building certifiable Level 4 autonomous driving systems for local public transport – designed and developed in Germany. Their mission is to connect people, no matter where they live, by enabling self-determined and sustainable mobility through cognitive artificial intelligence. Their unique approach, rooted in neuroscience and explainable AI, enables real-time decision-making in complex and unknown traffic scenarios – without relying solely on data from millions of kilometres of driving.

As a Scene Understanding Engineer, you will play a vital role in shaping the perception and cognition systems that allow the autonomous driver to interpret and interact with its environment.

Responsibilities:
  • Develop and enhance scene understanding algorithms for complex, real-world environments.
  • Design and implement modular, explainable systems that integrate sensor data and support perception and localization modules.
  • Lead small development teams and contribute to overall system architecture and software integration.
  • Collaborate with cross-functional teams to ensure seamless interaction between perception, planning, and control modules.
  • Participate in testing and validation of autonomous systems in both simulated and real-world environments, including field testing.
  • Support the certification process by developing traceable and explainable logic for perception systems.

Requirements:
  • Degree in Robotics, Localization, Sensor Fusion, or a related field.
  • Strong software development skills with C++ and Python.
  • Proven experience in leading small engineering teams and managing complex software systems.
  • Solid understanding of model-based design and modular system architecture.
  • Experience with robotics or autonomous vehicle platforms in real-world or motorsport environments.
  • Good grasp of deep learning principles, especially as applied to perception.
  • Fluent in written and spoken English.
  • Willingness to travel for testing and collaborative projects.
  • Familiarity with sensor fusion, object fusion, and localization algorithms is a plus.

Note: Some technical experience (e.g., deep learning, motorsport testing, or control systems) may be negotiable depending on your background and ability to learn quickly.

Why you should join:
  • Work in an intellectually stimulating and innovative environment where you can take full ownership of your projects at every stage of development.
  • Enjoy flat hierarchies, an open culture, and fast decision-making processes.
  • Collaborate with a skilled and dedicated team eager to share their knowledge and expertise.
  • Be part of a multinational workplace that values diversity and integrates different backgrounds and perspectives.
  • Work in the vibrant heart of Berlin, in the dynamic Kreuzberg district.
Paddy Hobson
Berlin, Germany
Perception Engineer
Our client is building the next generation of autonomous vehicle systems – ones that don't just detect the world, but understand it. Inspired by neuroscience and built on advanced AI, they're developing a cognition-first approach to perception that allows their vehicles to reason about complex urban environments.

As a Perception Engineer, you'll contribute directly to the real-time perception stack – building systems that transform raw sensor data into a deep understanding of the driving scene.

Responsibilities:
  • Design, develop, and optimize real-time perception algorithms for autonomous driving using data from LiDAR, radar, cameras, and ultrasound.
  • Implement advanced sensor fusion pipelines combining multi-modal data for robust object detection and classification.
  • Build and fine-tune deep learning models for semantic segmentation, instance segmentation, and object tracking (e.g., YOLO, Mask R-CNN, DeepSORT).
  • Process and analyze 3D point cloud data for spatial reasoning and environmental understanding.
  • Work with tracking and filtering methods such as Kalman filters and Extended Kalman Filters (EKF) for dynamic object tracking.
  • Integrate and calibrate perception sensors with high-precision requirements (camera, radar, LiDAR).
  • Simulate and test perception systems in virtual environments (e.g., Carla, AirSim) and validate them in diverse real-world conditions (night, rain, fog).
  • Collaborate closely with SLAM, mapping, and planning teams to ensure consistent scene representation and performance.

Requirements:
  • Solid background in computer vision and deep learning (CNNs, RNNs, 3D CNNs), with a focus on real-time image and point cloud processing.
  • Experience with sensor fusion, tracking, and object detection frameworks (YOLO, SSD, Mask R-CNN, etc.).
  • Skilled in Python/C++ and tools like OpenCV, PCL, TensorFlow, PyTorch, and CUDA.
  • Familiarity with ROS/ROS2, Carla, SUMO, or other AV simulation frameworks.
  • Proven ability in calibration and integration of perception sensors; understanding of HD Maps and environmental feature extraction.
  • Knowledge of SLAM and localization techniques is a strong plus.
  • Experience with testing and validation of perception systems in safety-critical environments.

Nice to Have:
  • Experience with reinforcement learning or decision-making algorithms in unstructured environments.
  • Hands-on work with parallel processing (CUDA/OpenCL) and real-time optimization.
  • Familiarity with HD maps, OpenStreetMap integration, and high-resolution semantic mapping.

Why You Should Join:
  • Play a central role in shaping how the vehicles see and interpret the world.
  • Join an ambitious, science-driven team that values deep collaboration and continuous learning.
  • Thrive in a flat hierarchy with fast decision-making and real ownership.
  • Work at the intersection of cutting-edge AI and real-world engineering in the heart of Berlin's vibrant Kreuzberg.
Paddy Hobson
Berlin, Germany
Geospatial Segmentation AI Engineer
Our client is revolutionizing autonomous driving with a unique approach rooted in cognitive neuroscience and cutting-edge German research. Their mission is to make intelligent, explainable decisions in complex traffic environments without relying on massive datasets. As one of the first companies in Germany to pursue Level 4 certification for autonomous vehicles, they're focused on safety, transparency, and scalability to shape the future of mobility – connecting people wherever they are.

As a Geospatial Segmentation AI Engineer, you'll develop the deep learning systems that enable the vehicles to perceive and understand the world through satellite, aerial, and sensor-based imagery.

Responsibilities:
  • Design and implement AI models for semantic and instance segmentation using satellite, drone, and LiDAR imagery.
  • Develop preprocessing pipelines for geospatial image data using tools like GDAL, Rasterio, and GeoPandas.
  • Train, evaluate, and fine-tune CNN architectures (U-Net, Mask R-CNN, DeepLab, etc.) for high-resolution remote sensing tasks.
  • Integrate segmentation outputs into perception systems for urban/rural mapping, land use analysis, and road understanding.
  • Work with large-scale geospatial datasets (GeoTIFF, NetCDF, shapefiles, GeoJSON) and manage cloud-based geoprocessing workflows.
  • Build automated data pipelines for labeling, training, and validating geospatial models using tools such as CVAT, Labelbox, and QGIS.
  • Collaborate with perception, software, and sensor teams to align map data, real-time vision outputs, and spatial AI models.

Requirements:
  • Strong experience with geospatial data processing and GIS tools (e.g., QGIS, GDAL, GeoPandas); understanding of CRS (e.g., WGS84, UTM).
  • Hands-on expertise in deep learning for image segmentation using frameworks like PyTorch, TensorFlow, or Keras.
  • Experience with satellite/aerial image analysis and handling raster/vector data formats.
  • Familiarity with CNN architectures like U-Net, HRNet, DeepLab, and object detection methods (YOLO, Mask R-CNN, etc.).
  • Proficient in Python and libraries such as OpenCV, scikit-image, Rasterio; experience with Jupyter, Docker, Git.
  • Knowledge of cloud platforms and services for geospatial processing (e.g., AWS SageMaker, Google Earth Engine) is a plus (negotiable).
  • Experience with GPU-based training, distributed learning, and handling large-scale Earth observation data.

Why You Should Join:
  • Work in an intellectually stimulating and innovative environment where you can take full ownership of your projects.
  • Enjoy flat hierarchies, an open culture, and fast decision-making processes.
  • Collaborate with a skilled and dedicated team eager to share their knowledge and expertise.
  • Be part of a multinational workplace that values diversity and integrates different backgrounds and perspectives.
  • Work in the vibrant heart of Berlin, in the dynamic Kreuzberg district.
Paddy Hobson
Berlin, Germany
Head of Safety
Our client is revolutionizing autonomous driving with a unique approach rooted in cognitive neuroscience and cutting-edge German research. Their mission is to make intelligent, explainable decisions in complex traffic environments without relying on massive datasets. As one of the first companies in Germany to pursue Level 4 certification for autonomous vehicles, they're focused on safety, transparency, and scalability to shape the future of mobility – connecting people wherever they are.

As Head of Safety, you'll play a pivotal role in ensuring their autonomous systems meet the highest safety and regulatory standards from development through deployment.

Responsibilities:
  • Develop and implement comprehensive safety strategies for autonomous vehicle systems in compliance with ISO 26262 and ISO 21448 (SOTIF).
  • Lead safety assessments, audits, and inspections for hardware, software, and full vehicle systems.
  • Oversee risk management processes, including FMEA, FTA, and cybersecurity protocols for safety-critical components.
  • Define and maintain safety architectures across sensor-based systems (LiDAR, radar, cameras) and ensure end-to-end system validation.
  • Manage safety certification processes aligned with automotive standards (IEC 61508, IATF 16949, Automotive SPICE).
  • Lead interdisciplinary teams in diagnosing, analyzing, and resolving safety-critical issues, using agile methods like Scrum and Kanban.
  • Collaborate closely with regulatory authorities, third-party auditors, and internal teams to ensure safety documentation and compliance at all levels.

Requirements:
  • Extensive experience in functional safety, including ISO 26262, ISO 21448 (SOTIF), and IEC 61508.
  • Proven track record in safety-critical system design, testing, and validation in the automotive or autonomous driving domain.
  • Strong knowledge of safety-critical standards such as ASIL levels, Automotive SPICE, and ISO/SAE 21434 (cybersecurity).
  • Expertise in safety test strategies (test automation, simulation, hardware/software validation).
  • Demonstrated leadership experience managing cross-functional teams and complex projects.
  • Familiarity with emergency and real-time monitoring systems for autonomous vehicles.
  • Experience with quality management systems (e.g., TS 16949, IATF 16949) is a plus.
  • Experience with sensor fusion systems and safety architecture for autonomous vehicles (negotiable).

Why You Should Join:
  • Work in an intellectually stimulating and innovative environment where you can take full ownership of your projects.
  • Enjoy flat hierarchies, an open culture, and fast decision-making processes.
  • Collaborate with a skilled and dedicated team eager to share their knowledge and expertise.
  • Be part of a multinational workplace that values diversity and integrates different backgrounds and perspectives.
  • Work in the vibrant heart of Berlin, in the dynamic Kreuzberg district.
Paddy Hobson
Zürich, Switzerland
Senior/Staff Applied Research Engineer - VLM/LLM
Searching for Applied Research Engineers to join a Zurich-based company working on humanoid robotics. This company is building the technology that allows an LLM prompt to translate into humanoid robot manipulation. All of the founders hold PhDs in Robotics and Simulation with 15+ years of experience at a global technology leader; they already have a working demo and three humanoid robots on-site. The team has an abundance of experience in robotics, reinforcement learning, dexterous manipulation, and diffusion models, and they're now searching for experts in VLMs/LLMs.

Requirements:
  • PhD in Artificial Intelligence (ideally related to VLMs/LLMs).
  • Practical experience in training and fine-tuning large language models (LLMs), as well as multimodal models like vision-language models (VLMs) and language-conditioned models such as Stable Diffusion or character animation systems.
  • Experience in deploying these models in real time in the cloud.
  • Robotics experience is NOT needed for this position – we need someone who understands cutting-edge generative models; the team already has the experience needed to translate this to the world of robotics.

If you're interested in bringing your experience to the robotics domain with a fast-growing and extremely well-backed company, then please apply!
Paddy Hobson
Paris, Ile De France, France
Video Research Engineer - Diffusion Models
I'm hiring for a Senior AI Research Engineer for an already successful company pivoting into a new domain.  Role: Senior AI Research EngineerSalary: Up to €140,000 + Bonus Location: Remote in Europe or based in Paris Embark on a transformative journey in AI and Video Generation as an Applied Research Engineer/AI Engineer. Focus on developing lifelike human motions and animations without the need for a studio, revolutionising humanoid animation for the future. Areas of Research:Research & develop cutting-edge AI Video Generation technology.Develop AI Character Motion Control using Diffusion Models.Work on realistic Future-oriented humanoid animation. Role Requirements:A minimum of 5 years of hands-on experience in Computer Vision, with a more recent focus Diffusion Models.Extra credit for specialised knowledge in Motion Control. For more information apply with your details and let’s discuss the next steps.
Anthony Kelly