Job Title: Data Engineer

Base Salary: $137,000 – $160,000 per annum. 40 hours per week; M-F.

Job Description:

  1. Maintain, optimize, and expand ETL data pipeline infrastructure (Python- and Scala-based) for processing terabyte-sized datasets using big data processing technologies such as Apache Spark.
  2. Provision dynamic server infrastructure in the AWS cloud using technologies including Kubernetes, Docker, AWS EC2, AWS Batch, and AWS Lambda.
  3. Develop and maintain data lake solutions for the data warehouse and support rich visualization using D3 and custom JavaScript charts for the Analytics platform.
  4. Architect and support a scalable, robust, and secure web application backend built using Python and Node.js.
  5. Create and optimize a secure client-facing REST API with an enterprise-grade SLA.
  6. Implement and deploy code using CI/CD pipelines with Terraform, Ansible, Docker, Git, and shell scripting.
  7. Collaborate with Data Science teams to implement data products and automate training of AI/ML models using scikit-learn, Pandas, NumPy, and PyTorch.
  8. Build analytic tools that utilize the data pipeline to provide actionable insights into customer analytics, operational efficiency, and other key business performance metrics.
  9. Implement and adhere to industry security processes, standards, and best practices to ensure SOC2 compliance.
  10. Apply real-world experience and working knowledge of various data stores, including Postgres, DynamoDB, Redis, and Elasticsearch.

May Telecommute.

Minimum Requirements:

Bachelor’s (or foreign educ. equiv.) Degree in Computer Science, Computer Engineering, Software Engineering, or a closely related field, plus three (3) years’ experience in the job offered or a related occupation.

Special Skill Requirements: 

  1. Use of Python, Scala, JavaScript, and shell scripting
  2. Experience with AWS, AWS Lambda, AWS EC2, AWS Batch, Kubernetes, and Docker
  3. Working with Postgres, DynamoDB, Pandas, and NumPy
  4. Experience with data pipelines, CI/CD pipelines, and REST APIs
  5. Working with terabyte-sized datasets
  6. Collaborating with Data Science teams

To apply for this role, please send resume to careers@pinpoint.ai