Python Developer - Web Scraping

Crawling & Scraping Engineer for an American startup specializing in the fashion industry

  • €36K - €46K
  • startup (fashion)
  • Madrid
Python Scrapy Django PostgreSQL Docker AWS

Working conditions

First things first: let's start with the working conditions the company is offering. That way, if the basics don't work for you, you won't waste any more time reading.

  • Workplace in the Avenida de América area (Madrid)
  • Permanent contract
  • €36K-€46K salary (negotiable)
  • Flexible working hours
  • Fully remote position until it's safe to go back to their office; after the Covid situation, you can work remotely when needed.
  • A challenging and fun project to work on and grow with, using the latest technologies and best practices, all in a friendly, relaxed and positive environment.
  • Fixed yearly training budget to spend on English classes, courses, books or conferences.
  • A brand-new laptop with the OS of your choice (they recommend macOS or any flavor of Linux).


If the working conditions sound attractive and are in line with your professional motivations, keep reading!

Product or service

They are a startup founded 6 years ago with offices in New York and Madrid.

Their product is a strategic analytics SaaS platform that helps fashion retailers and brands make critical in-season and next-season decisions.

Team

Madrid is home to their core technical team of around 20 people.

It's an open, diverse and inclusive team of very skilled and talented individuals who are happy to collaborate, share knowledge and enjoy building great software together.

Their leitmotif is "We love data".

They are looking forward to welcoming additional members to this team, and I'm helping them achieve that goal!

Duties and responsibilities

Your day-to-day as a professional:

As an engineer focused on web scraping on the Data Engineering team, you will be responsible for ensuring that their data is always of good quality, fresh and available by:

  • Keeping their current spiders up to date.
  • Maintaining and enhancing their custom scraping framework, built on Scrapy.
  • Coordinating a team of "spidermen" to keep those spiders up to date.
  • Developing new spiders to continuously expand their data feed catalog (a minimal sketch follows this list).
  • Developing other data input feeds based on APIs and data files.
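
For a flavor of what "developing a spider" means here, below is a minimal, hypothetical Scrapy sketch; the site, CSS selectors and item fields are illustrative assumptions, not their actual framework:

    # Minimal, hypothetical Scrapy spider -- the site, selectors and
    # fields are illustrative assumptions, not the company's framework.
    import scrapy

    class ProductSpider(scrapy.Spider):
        name = "example_products"
        start_urls = ["https://example.com/catalog"]  # placeholder URL

        def parse(self, response):
            # Yield one structured item per product card on the page.
            for card in response.css("div.product"):
                yield {
                    "name": card.css("h2::text").get(),
                    "price": card.css("span.price::text").get(),
                    "url": response.urljoin(card.css("a::attr(href)").get()),
                }
            # Follow pagination to cover the whole catalog.
            next_page = response.css("a.next::attr(href)").get()
            if next_page:
                yield response.follow(next_page, callback=self.parse)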


Professional skills

Call them soft skills or hard skills; we're not going to get into terminology. What matters is that you know which skills and experience are expected of you.

Skills required for the position:

In order to maintain and create new pieces of the data pipeline, you will need to bring your skills and knowledge of these technologies:

  • Python, as it is the base of their tech stack at many levels.
  • Scrapy, you know why. ;) 
  • Good understanding of how the web works: requests, responses, user agents, proxies, the HTTP protocol, robots.txt (see the sketch after this list).
  • Linux shell command line: you don't need to be a sysadmin, but they expect you to be able to find your way around your local machine and a server box.
  • SQL and PostgreSQL: they work with data, and you will produce structured data.
  • English: half of the company doesn't speak Spanish, and your job involves some written and spoken communication with people in other countries and timezones (and with different cultures, by the way).
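
Since the role leans on these HTTP fundamentals, here is a minimal, hypothetical sketch of a polite fetch that checks robots.txt and identifies itself with a User-Agent header; the URL and user-agent string are assumptions for illustration only:

    # Hypothetical example: honor robots.txt and send an explicit User-Agent.
    from urllib.robotparser import RobotFileParser

    import requests

    USER_AGENT = "ExampleBot/1.0 (+https://example.com/bot)"  # assumed UA
    TARGET = "https://example.com/catalog"                    # assumed URL

    # Parse the site's robots.txt before crawling a path.
    robots = RobotFileParser("https://example.com/robots.txt")
    robots.read()

    if robots.can_fetch(USER_AGENT, TARGET):
        # Identify the crawler explicitly via the User-Agent header.
        response = requests.get(TARGET, headers={"User-Agent": USER_AGENT}, timeout=10)
        response.raise_for_status()
        print(response.status_code, len(response.text))
    else:
        print("Fetching is disallowed by robots.txt")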


Desired skills for the position:

You will learn many of the other tech pieces they use on the job. Of course, it will be easier if you are already familiar with any of them: Docker, AWS, Node.js, Puppeteer, Jira, MongoDB.
