- Who we are: We are a workforce on-demand company that relies heavily on technology. We integrate the complete process of selecting, hiring, and managing employees. We were the first to digitize every step, introducing Artificial Intelligence solutions in multiple phases to deliver faster, higher-quality service to our clients and greater satisfaction and loyalty among our workers.
- Our Impact on Society: Our vision is to create long-term job security for our workers while providing flexible solutions to our clients. We do this by chaining together different short-term contracts and minimizing, or completely removing, the time our workers spend unemployed. We want to remove the need to ever look for a job again.
- Data-Driven: Owning the entire hiring funnel, workforce management, and working experience gives us unique data, which we transform into powerful knowledge to make better decisions, improve job-candidate matches, estimate quality and affinity, and share performance feedback with our workers and clients. We are the most advanced workforce on-demand company thanks to our data and the way we use it.
- Great Challenges for the Data Team: Integrating very different AI solutions to improve the experience for both our workers and employers (shift optimization, Machine Learning algorithms, quality score estimation, document recognition, etc.), all relying on a very powerful ETL that enables endless automation while keeping the focus on data quality.
- Growth Overview: Annual growth of 170%, with availability in multiple countries: the UK, Spain, Germany, Sweden, Mexico, Colombia, and France, with more to come. HQ in Madrid, with a second large presence in Barcelona and some remote workers in the UK, the Netherlands, and the USA. Finally, some of our great clients are Amazon, XPO Logistics, Cabify, Santander, and Just Eat.
Join us in one of the hottest startups in Spain, breaking into a new market worldwide, learning, and contributing your expertise to innovating the entire job market!
We use Airflow for our ETL: a very powerful Python framework that allows us to break down any complex problem or process into smaller ones. Each component can be implemented directly in Python, or in any other language, since components can be Docker images executed on ECS instances.
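To illustrate the decomposition idea without requiring Airflow itself, here is a minimal, stdlib-only sketch: a complex process is split into small tasks with explicit dependencies, and each task runs only after its upstream tasks have finished (which is exactly what an Airflow DAG expresses). The task names are hypothetical, not our actual pipelines.

```python
# Stdlib-only sketch of the DAG idea behind an Airflow pipeline.
# Task names are hypothetical examples, not our real pipelines.
from graphlib import TopologicalSorter

def extract():   return "raw rows"
def transform(): return "clean rows"
def load():      return "loaded"

# Mapping: task -> set of upstream tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
}
tasks = {"extract": extract, "transform": transform, "load": load}

def run(dag, tasks):
    """Execute every task in a dependency-respecting order."""
    order = TopologicalSorter(dag).static_order()
    return [(name, tasks[name]()) for name in order]

results = run(dag, tasks)
```

In Airflow the same structure would be declared with operators and `>>` dependencies; the framework then handles scheduling, retries, and monitoring for each small step.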
The main priority is a senior profile with extremely solid Python skills; previous experience with Data/ETL is nice to have but not mandatory.
Overall, we can summarize the main responsibilities as follows:
- Raise the level of our Python good practices, code structure, cleaning, and testing.
- Design and implement high-quality and performant Python code within our powerful ETL.
- Automate external integrations with our Data Lake and Data Warehouse.
- Automate complex solutions that might require training, building, and deploying a series of Machine Learning algorithms (no previous experience with ML required).
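As one concrete flavor of "raising the level of our Python good practices", moving from scripting code to production-ready code often means extracting small, pure functions and covering them with tests. A hedged sketch (the function and data are purely illustrative):

```python
# Illustrative example only: a small, pure data-cleaning function
# extracted from a hypothetical script, covered by a unittest test case.
import unittest

def dedupe_emails(rows):
    """Return rows with duplicate email addresses removed (case-insensitive)."""
    seen, out = set(), []
    for row in rows:
        key = row["email"].strip().lower()
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out

class DedupeEmailsTest(unittest.TestCase):
    def test_case_insensitive_dedupe(self):
        rows = [{"email": "A@x.com"}, {"email": " a@x.com"}, {"email": "b@x.com"}]
        self.assertEqual(len(dedupe_emails(rows)), 2)
```

The test class runs with `python -m unittest`; the point is that pure functions like this are trivial to test, reuse inside an Airflow task, and review.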
In addition, we are looking for a senior profile that could be interested in:
- Mentoring mid-level and junior Python Engineers: given the great impact of our ETL on the entire company, we want to grow a team dedicated to it, capable of improving, maintaining, and boosting it even further.
- Sharing their knowledge inside and outside Jobandtalent, raising the quality standards of the entire team with the aim of growing together.
- Contributing to Open Source projects: we use different Open Source frameworks and libraries, and one of our wishes is to contribute back to some of those projects, dedicating part of our time when possible.
Requirements and Skills
- Bachelor’s degree in Math, Engineering, Stats, or Quantitative field.
- 4+ years of proven experience programming in Python, writing production-ready code (not just scripts).
- Extremely skilled programmer (e.g., unittest, production/staging experience).
- Experience with:
  - Different kinds of standard databases (e.g., RDBMS, NoSQL).
  - Container development with Docker or Kubernetes.
  - Leading projects, services, or products.
- Excellent verbal and written communication skills; ability to communicate effectively with different levels of management, as well as the business and technical communities.
- (Nice to have) Previous experience with:
  - ETL, data pre-processing, or data analysis.
  - Supervising junior and mid-level developers.
  - Big-data frameworks and OLAP databases.
  - Messaging and stream-processing frameworks (e.g., RabbitMQ, Kafka, Spark, Flink).
- A valid work permit to be employed in Spain.
- Fluency in English is a must.
Examples of Projects and Responsibilities
- You receive a Jupyter Notebook made by one of our Data Analysts containing a prototype of a data processing pipeline that generates very important data for our stakeholders. The Data Analyst has previously confirmed the data is correct, but the code is definitely not ready for production. You need to understand it, decide on the best way to automate it (an Airflow Python DAG, Spark, external tools, etc.), and start the implementation by yourself or together with other team members.
- The previous example works similarly for Data Scientists, when they come requesting help automating the training of a Machine Learning model that we want to re-evaluate on a weekly basis. Ideally, we want the evaluation results to be computed automatically and shared via a Slack channel, so we can quickly review them and decide whether the model can be moved to production.
- We expect you to raise the quality bar of our code: helping your team members understand the best way to structure code for a specific problem, defining better protocols, introducing metrics where they are missing, improving code performance, etc.
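The weekly-evaluation notification described above could be sketched as follows. This is a hedged, stdlib-only illustration: the metric names, the 0.80 threshold, and the webhook URL are hypothetical; the only assumption about Slack is its documented incoming-webhook API, which accepts a JSON payload with a "text" field.

```python
# Hedged sketch: metric names, threshold, and webhook URL are hypothetical.
# Slack incoming webhooks accept a JSON body like {"text": "..."}.
import json
from urllib.request import Request

def build_eval_message(model_name, metrics, threshold=0.80):
    """Format weekly evaluation results as a Slack webhook payload."""
    verdict = ("candidate for production"
               if metrics["auc"] >= threshold else "needs review")
    lines = [f"Weekly evaluation for *{model_name}*: {verdict}"]
    lines += [f"- {name}: {value:.3f}" for name, value in sorted(metrics.items())]
    return {"text": "\n".join(lines)}

def post_to_slack(webhook_url, payload):
    """Build the HTTP POST request for the webhook (not actually sent here)."""
    return Request(webhook_url, data=json.dumps(payload).encode(),
                   headers={"Content-Type": "application/json"})

payload = build_eval_message("churn_model", {"auc": 0.91, "f1": 0.84})
request = post_to_slack("https://hooks.slack.com/services/XXX", payload)
```

In a real pipeline this would be the final task of the weekly retraining DAG, so the team sees the metrics in Slack before deciding on a production rollout.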
Offer in Short
- Great ownership of the projects with a direct impact on the product.
- Salary in the 65-75k range (wide range, depending on experience).
- Transparent Equity package.
- Discount on health insurance.
- A yearly budget for conferences, meetups, and self-learning.
- Working in an international and multidisciplinary team.
You will be working in the Data Science team (read our latest blog posts), together with Full-stack Engineers, Data Scientists, and Data Analysts.
How is the Jobandtalent Engineering team structured?
- What we call “Product Team” includes the Product & Design, Tech, and Data.
- Product & Design ⇒ all the Product Owners and Designers that belong to the different Product Features teams (cross-functional teams that take care of some specific parts of the product).
- Tech ⇒ includes all the main engineering profiles and guilds, such as Backend, Frontend, Android, iOS, and Platform. The majority of them are embedded into the Product Features teams, making those teams extremely independent and cross-functional.
- Data ⇒ includes three different teams, namely Data Analytics, Data Science, and Data Engineering.
How is the Jobandtalent Data team organized?
- The Data team includes three different teams that work very closely together.
- The Data Engineering team owns the data pipelines and processes that generate the data needed by our Data Warehouse, and thus by the entire Jobandtalent.
- The Data Science team owns all the models used by the Product teams.
- The Data Analytics team is composed of two types of profiles: (i) Product Data Analysts, who work embedded in the Product Team following a Hub’n’Spoke approach, and (ii) Data Partners, who work closely with other departments such as Finance, Sales, Marketing, etc.
How do we organize the tasks and the planning?
- We are very data-driven: we have clear KPIs for each project, team, and goal.
- We define strong OKRs every quarter, we work in bi-weekly Sprints (each team has its own JIRA project), and a Data PM leads the delivery processes and metrics.