
Hi there, I'm Enes!

I'm an MLOps Engineer focused on designing scalable, cloud-native inference architectures and automated CI/CD pipelines. I specialize in transforming experimental models into production-ready microservices using Kubernetes, Docker, and AWS.


What I'm working on

I am currently architecting robust ML systems with a focus on:

  • Orchestration: Managing containerized applications with Kubernetes.
  • MLOps: End-to-end pipeline automation & monitoring.
  • Microservices: Decoupling monolithic ML code into scalable FastAPI services.
  • Infrastructure: Configuring AWS (EC2, VPC, IAM, ECR) and Linux environments for high availability.

Tech Stack

  • Languages & Frameworks
  • MLOps & Cloud
  • Data & Databases

Pinned Projects

  1. hm-fashion-recommender

    Scalable fashion recommendation engine built with FastAPI and Qdrant. Features low-latency vector search, a Redis caching strategy, and a modular microservices architecture using Docker and CI/CD.

    Python

  2. nyc-taxi-mlops

    Production-grade MLOps pipeline that achieves an 85x inference speedup (~3 ms latency) using Redis and ONNX, featuring secure FastAPI microservices, multi-stage Docker optimization, and Kubernetes deployment.

    Python

  3. crm-mlops

    Python
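The Redis caching strategy mentioned in the projects above is essentially the cache-aside pattern: hash the request, look it up, and only run the model on a miss. A minimal sketch, with an in-memory dict standing in for a real Redis client (key format and TTL are illustrative assumptions):

```python
# Cache-aside inference sketch. FakeRedis mimics the get/set subset of
# redis.Redis so the example runs without a server; swap in a real client
# in production.
import hashlib
import json

class FakeRedis:
    """In-memory stand-in for a Redis client (get/set with optional TTL)."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value, ex=None):
        # TTL (ex) is accepted but not enforced in this stand-in.
        self._store[key] = value

cache = FakeRedis()
calls = {"model": 0}

def run_model(features):
    # Stand-in for the expensive inference call.
    calls["model"] += 1
    return {"score": sum(features)}

def predict(features):
    # Deterministic cache key derived from the request payload.
    key = "pred:" + hashlib.sha1(json.dumps(features).encode()).hexdigest()
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)           # cache hit: skip the model
    result = run_model(features)
    cache.set(key, json.dumps(result), ex=300)  # cache for 5 minutes
    return result
```

Repeated identical requests hit the cache instead of the model, which is where most of the latency reduction in a pattern like this comes from.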