Machine Learning Engineer
KPMG
Glasgow, GB

Job description

Roles & Responsibilities

  • Client Project Delivery: A Machine Learning Engineer will typically manage a team of Data Scientists and Data Engineers on specific engagements or workstreams, focused on productionising existing AI solutions designed for processing large data sets (e.g., using a Hadoop framework) that accelerate our clients' digital journeys. This could include refactoring and optimising existing solutions, automating and productionising AI pipelines, and deploying solutions to the cloud with best practices, scalability, and auditability in mind.

  • Business Development: Work collaboratively with our business teams and our clients to show the art of the possible and to assess the value and feasibility of applying data science to specific business problems. This could include demoing to prospective clients, developing data strategies, leading feasibility studies, and supporting RFP responses.

  • Asset Development: Build data science assets (aka 'accelerators'), in line with our UK and/or global strategy, to ensure we have the platforms and core assets in place to meet market demand. This could also include supporting our continuous improvement process around our own design and development processes, e.g., how we ensure the high quality that our clients require in an efficient manner.

  • Collaboration: Liaise with our advanced data engineering and cloud engineering teams on architecture design and model deployment to jointly build solutions and products that interoperate seamlessly with other elements of the broader information architecture.
  • People: As a fast-growing, highly specialised team, you will be involved in running and growing our team, e.g., through involvement in hiring and coaching colleagues, helping with knowledge management, and organising team meetings or other events.

The Person

  • Well versed in designing and building big data pipelines with machine learning workloads that are repeatable and scalable for extremely large datasets.
  • Experience with:
    • Deploying the latest NLP techniques, such as Transformers, in production, with awareness of the challenges involved
    • Creating performance metrics and tracking processes to measure the effectiveness of Data Science solutions
    • Conceptualising the data governance models needed to support the technical solution and assure the veracity of the data
    • Working collaboratively with other members of the Data Science, Data Engineering and Information Architecture teams to innovate and create compelling data-centric stories and experiences
    • Proficiency with programming languages used on Big Data platforms, such as Python, R and Scala
    • Knowledge of at least one of the mainstream deep learning frameworks, such as PyTorch or TensorFlow
    • Understanding of software development best practices
    • GCP platform: Dataflow, Composer, BigQuery, or similar services on other cloud platforms
    • MLOps: MLflow, Kubeflow, BentoML or similar
    • Productionising machine learning pipelines with Apache Beam and Apache Airflow

  • A track record of staying conversant with new analytic technologies, architectures, and languages, where necessary, for storing, processing, and manipulating large-scale data
  • Demonstrated Data Science consultancy skills, e.g., running hypothesis workshops, mentoring more junior team members, and preparing reports and presenting data science results
  • Skilled at communicating with a variety of stakeholders across the organisation
  • Planning and organisation skills to work within a high-performance team, handle demanding clients, and multitask effectively and in an agile way
  • Team management experience preferred
Qualifications

  • Strong experience in AI, data science, data engineering and/or other technology-related capabilities in one or more industries.
  • Experience in the Financial Services sector, in particular ESG analytics and risk management, is preferred.

  • BSc (ideally MSc or PhD) in Computer Science, Statistics, Engineering or similar technical field
  • A combination of one or more of the following:
    • Proficient with programming languages like Python, R, Scala
    • Proficient with Git, Linux, Docker
    • Software Engineering best practices and Object-Oriented Programming
    • Skills in big data technologies like Hadoop, HDFS, Spark, Elasticsearch, Apache Beam, Apache Airflow
    • SQL and NoSQL databases
    • Cloud certification(s) desired, such as: GCP Machine Learning Engineering, GCP Data Engineering, Azure Data Scientist, Azure AI Engineering, Azure Data Engineering, AWS Machine Learning Specialty