Data Engineer

All locations
Technology – Engineering /
CDI - Permanent /
Hybrid

About The Role

You will be joining a fast-paced engineering team made up of people with significant experience working with terabytes of data. We believe that everybody has something to bring to the table, and therefore put collaborative effort and teamwork above all else (and not just from an engineering perspective).
You will be able to work autonomously as an equally trusted member of the team, and participate in efforts such as:
    • Addressing high-availability problems: cross-region data replication, disaster recovery, etc.
    • Addressing "big data" problems: 200+ million messages/day, 160B data points since 2010
    • Improving our development workflow, continuous integration, continuous delivery and, in a broader sense, our team practices
    • Expanding our platform's observability through monitoring, logging, alerting and tracing


What you’ll be doing:

    • Design, develop and deploy scalable and observable backend microservices
    • Evaluate our storage, querying and aggregation capabilities, as well as the technologies required to meet our objectives
    • Work hand-in-hand with the business team on developing new features, addressing issues and extending the platform

Our tech stack:

    • Monitoring: VictoriaMetrics, Grafana
    • Alerting: Alert Manager, Karma, Pager Duty
    • Logging: Vector, Loki
    • Caching: FoundationDB, Redis
    • Secrets management and PKI: Vault
    • Configuration management and provisioning: Terraform, Ansible
    • Service discovery: Consul
    • Messaging: Kafka
    • Proxying: HAProxy, Traefik
    • Service deployment: Terraform, Nomad (integrated with Consul and Vault), Kubernetes (to a lesser extent, used for non-production-critical workloads)
    • Database systems: ClickHouse (main datastore), PostgreSQL (ACID workloads)
    • Protocols: gRPC, HTTP (phasing out in favor of gRPC), WebSocket (phasing out in favor of gRPC)
    • Platforms (packaged in containers): Golang, NodeJS (phasing out in favor of Golang), Ruby (phasing out in favor of Golang)

About You:

    • Significant experience as a Software/Data/DevOps Engineer
    • Knowledgeable about data ingestion pipelines and massive data querying
    • Worked with, in no particular order: microservices architecture, infrastructure as code, self-managed services (e.g. deploying and maintaining our own databases), distributed services, server-side development, etc.

Nice to have

    • Experience with data scraping over HTTP, WebSocket, and/or FIX Protocol
    • Experience developing financial product methodologies for indices, reference rates, and exchange rates
    • Knowledgeable about the technicalities of financial market data, such as the differences between calls, puts, straddles, different types of bonds, swaps, CFDs, CDSs, options, futures, etc.
Location: Paris (hybrid)
Type of contract: CDI

What we offer 
25 paid holidays & RTTs
The hardware of your choice
Great health insurance (Alan)
Meal vouchers (Swile)
Contribution to your monthly gym subscription
Contribution to daily commuting
Remote-friendly  
Multiple team events (annual retreat, casual drinks, etc.)
An entrepreneurial environment with a lot of autonomy and responsibilities

Talent Acquisition Process

    • Call with the People team (20 min)
    • Interview with the Hiring Manager (45 min)
    • Technical test / business case (1h, via video call)
    • Cross-team interviews with 2-3 team members (30 min)
    • Offer, reference check

Diversity & Inclusion
At Kaiko, we believe in diversity of thought because we know it makes us stronger. We therefore encourage applications from everyone who can offer their unique experience to our collective achievements.