Software Engineer

Paris
Technology – Engineering /
CDI - Permanent /
Hybrid
Kaiko is the leading source of cryptocurrency market data, providing businesses with industrial-grade and regulatory-compliant data. Kaiko empowers market participants with global connectivity to real-time and historical data feeds across the world's leading centralized and decentralized cryptocurrency exchanges. Kaiko's proprietary products are built to equip financial institutions and cryptocurrency businesses with solutions ranging from portfolio valuation to strategy backtesting, performance reporting, charting, analysis, indices, and pre- and post-trade analytics.

What We Do
Kaiko provides financial data products and solutions across three main business units:

1. Market Data: “CEX” Centralized Exchange Market Data: we collect, structure, and distribute market data from 100+ cryptocurrency trading venues; “DEX” Decentralized Protocol Market Data: we run blockchain infrastructure to read, collect, engineer, and distribute venue-level market data from DeFi protocols.

2. Analytics: proprietary quantitative models & data solutions to price and assess risk.

3. Indices: suite of mono-asset rates and benchmarks, as well as cross-asset indices.


Kaiko’s products are available worldwide on all networks and infrastructures: public APIs, private & on-premises networks; private & hybrid cloud set-ups; blockchain native (Kaiko oracles solution).

Additionally, Kaiko’s Research publications are read by thousands of industry professionals and cited in the world’s leading media organizations. We provide original insights and in-depth analysis of crypto markets using Kaiko’s data and products.


Who We Are 
We’re a team of 80 (and growing) passionate individuals with a deep interest in building data solutions and supporting the growth of the digital finance economy.  We’re proud of Kaiko’s talented team and are committed to our international representation and diversity. Our people and their values are the foundation of our continued success.


About The Role

You will be joining a fast-paced engineering team made up of people with significant experience working with terabytes of data. We believe that everybody has something to bring to the table, and therefore put collaborative effort and teamwork above all else (and not just from an engineering perspective).
You will be able to work autonomously as an equally trusted member of the team, and participate in efforts such as:
    • Addressing high availability problems: cross-region data replication, disaster recovery, etc.
    • Addressing “big data” problems: 200+ million messages/day, 160B data points since 2010
    • Improving our development workflow, continuous integration, continuous delivery and, in a broader sense, our team practices
    • Expanding our platform’s observability through monitoring, logging, alerting and tracing
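To give a sense of scale, the “big data” figure above can be turned into a per-second rate with a quick back-of-envelope sketch in Go. The 200M messages/day number comes from the bullet above; the uniform-load and 10x-burst assumptions are ours, purely for illustration (real exchange feeds are bursty):

```go
package main

import "fmt"

// perSecond converts a per-day volume into an average per-second rate,
// assuming a uniform distribution over 86,400 seconds.
func perSecond(perDay float64) float64 {
	return perDay / (24 * 60 * 60)
}

func main() {
	msgsPerDay := 200_000_000.0 // figure quoted in the posting
	fmt.Printf("average: ~%.0f messages/second\n", perSecond(msgsPerDay))
	// Hypothetical 10x burst over the daily average:
	fmt.Printf("burst (10x): ~%.0f messages/second\n", 10*perSecond(msgsPerDay))
}
```

A few thousand messages per second on average, with bursts an order of magnitude higher, is the kind of sustained throughput the ingestion pipeline has to absorb.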


What you’ll be doing:

    • Design, develop and deploy scalable and observable backend microservices
    • Reflect on our storage, querying and aggregation capabilities, as well as the technologies required to meet our objectives
    • Work hand-in-hand with the business team on developing new features, addressing issues and extending the platform

Our tech stack:

    • Languages (services packaged in containers): Go, with Rust recently adopted for some specific use cases
    • Protocols: gRPC, HTTP (phasing out in favor of gRPC), WebSocket (phasing out in favor of gRPC)
    • Database systems: ClickHouse (main datastore), PostgreSQL (ACID workloads), ScyllaDB
    • Messaging: Kafka
    • Caching: Redis
    • Configuration management and provisioning: Terraform, Ansible
    • Service deployment: Terraform, Nomad (plugged in Consul and Vault), Kubernetes
    • Secrets management and PKI: Vault
    • Service discovery: Consul
    • Proxying: HAProxy, Traefik
    • Monitoring: VictoriaMetrics, Grafana
    • Alerting: AlertManager, PagerDuty
    • Logging: Vector, Loki
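As one illustration of the monitoring side of this stack: VictoriaMetrics scrapes targets in the Prometheus text exposition format. The sketch below hand-rolls a single counter in that format; a real service would use an established client library, and the metric name here is made up for the example:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Counter is a minimal, concurrency-safe monotonic counter.
type Counter struct {
	name string
	help string
	n    atomic.Int64
}

// Inc increments the counter by one.
func (c *Counter) Inc() { c.n.Add(1) }

// Render emits the counter in the Prometheus text exposition format
// (# HELP and # TYPE lines followed by the sample), which a
// Prometheus-compatible scraper such as VictoriaMetrics accepts.
func (c *Counter) Render() string {
	return fmt.Sprintf("# HELP %s %s\n# TYPE %s counter\n%s %d\n",
		c.name, c.help, c.name, c.name, c.n.Load())
}

func main() {
	// Hypothetical metric name, for illustration only.
	reqs := &Counter{name: "http_requests_total", help: "Total HTTP requests served."}
	reqs.Inc()
	reqs.Inc()
	fmt.Print(reqs.Render())
}
```

In practice the rendered payload would be served from a `/metrics` HTTP endpoint for the scraper to poll.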

About You:

    • Significant experience as a Software/DevOps Engineer
    • Knowledgeable about data ingestion pipelines and massive data querying
    • Worked with, in no particular order: microservices architecture, infrastructure as code, self-managed services (e.g. deploying and maintaining our own databases), distributed services, server-side development, etc.
    • You’ll notice that we don’t have any “hard” requirements in terms of development platforms or technologies. This is because we are primarily interested in people capable of adapting to an ever-changing landscape of technical requirements, who learn fast and are not afraid to constantly push our technical boundaries. It is not uncommon for us to benchmark new technologies for a specific feature, or to change our infrastructure in a big way to better suit our needs. The most important skills for us revolve around two things:
        • What we like to call “core” knowledge: what a software process is, how it interacts with a machine’s or the network’s resources, what kind of constraints to expect for certain workloads, etc.
        • How fast you can adapt to a technology you didn’t know existed 10 minutes ago
      In short, we are looking for someone able to spot early on that, when the goal is long-term performance, spending 10 days migrating data to a more efficient schema is the better solution compared to scaling out a database cluster in a matter of minutes.

Nice to have

    • Experience with data scraping over HTTP, WebSocket, and/or FIX Protocol
    • Experience developing financial product methodologies for indices, reference rates, and exchange rates
    • Knowledgeable about the technicalities of financial market data, such as the differences between calls, puts, straddles, different types of bonds, swaps, CFDs, CDSs, options, futures, etc.
Location: Paris (hybrid)
Type of contract: CDI

What we offer 
25 paid holidays & RTTs
The hardware of your choice
Great health insurance (Alan)
Meal vouchers (Swile)
Contribution to your monthly gym subscription
Contribution to daily commuting
Remote-friendly  
Multiple team events (annual retreat, casual drinks, etc.)
An entrepreneurial environment with a lot of autonomy and responsibilities

Talent Acquisition Process

● Call with the People team (20 mins)
● Interview with the CTO (30 mins - 1h)
● Tech discussion with members of the team (1h30)
● Cross-team interviews with 2-3 team members (45 mins - 1h)
● Offer, reference check

Diversity & Inclusion
At Kaiko, we believe in diversity of thought because we appreciate that it makes us stronger. Therefore, we encourage applications from everyone who can offer their unique experience to our collective achievements.