Career opportunities at our portfolio startups

Crawling Engineer
Veridion
You will work within our Engineering team to design and scale systems that extract structured data from the unstructured web. Your work directly impacts the coverage and depth of our datasets, powering product intelligence and automation at scale.

Deeptech Engineer – Intern
Veridion
You will work with some of the brightest minds in the industry, in a high-performing, no-BS, growth-oriented culture, tackling challenges at the forefront of Big Data, Machine Learning & Infrastructure that haven’t been solved anywhere else in the world, and contributing your own input on core features of a groundbreaking product that’s already shaping the future.

Sr. Backend Engineer – Platform & Cloud Systems
Veridion
You will help scale our client-facing APIs by building clean, distributed, and resilient services on modern infrastructure. You’ll own APIs from code to production, scale distributed data systems like Elasticsearch or Postgres, and enforce engineering best practices across CI/CD, security, and system design, all while operating in a real-world engineering environment: shipping fast, reliably, and responsibly.

Data Delivery Engineer
Veridion
You will play an essential role within our Data Delivery team, ensuring we meet the evolving data needs of our clients while continuously enhancing our data delivery processes.

ML Engineer
Veridion
You will craft custom ML-based solutions focused on NLP and text processing, then train them on terabytes of data. You will work with cutting-edge architectures and techniques, and deploy your models in a massively parallel Big Data environment.

Software Engineer
Veridion
You will work within our Product Engineering team to define the best technical solutions, directly impacting the shape of our products and bringing new ideas to life. You’ll think through all the details of the implementation, as well as different scenarios for product growth and scalability.

Big Data Engineer
Veridion
You will develop the data extraction and processing mechanisms inside our big data infrastructure (Spark, Cassandra), which currently supports the processing of 6GB of data every minute.