Artificial Intelligence
Open Source Contributions
Hugging Face Candle
Contributed to Candle, a minimalist ML framework for Rust with a focus on performance and ease of use. My contributions include:
- Implemented the initial version of qwen3.rs (Qwen3 model support)
- Fixed documentation issues
Transformers Library
Active contributor to the state-of-the-art NLP library with over 100k stars. My contributions focus on:
- Documentation translation
Hugging Face Hub
Enhanced the Hub client library to improve developer experience:
- Fixed a dynamic commit size issue
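As a rough illustration of the code path that change touched (not the actual patch), a large upload can be split across several commits through the client; the repo id and file names below are hypothetical placeholders:

```python
# Illustrative only: chunked commits via the huggingface_hub client.
# The repo id and file list are hypothetical placeholders.
from huggingface_hub import HfApi, CommitOperationAdd

api = HfApi()
files = [f"shard-{i:05d}.bin" for i in range(250)]  # hypothetical local shards

# Split a large upload into several commits instead of one oversized commit.
CHUNK = 100
for start in range(0, len(files), CHUNK):
    ops = [
        CommitOperationAdd(path_in_repo=name, path_or_fileobj=name)
        for name in files[start:start + CHUNK]
    ]
    api.create_commit(
        repo_id="user/large-model",  # hypothetical repo
        operations=ops,
        commit_message=f"Upload shards {start}-{start + len(ops) - 1}",
    )
```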
Published Models & Datasets
I have published various models and datasets to the Hugging Face Hub, contributing to the open-source ML community. My published work includes:
Fine-tuned Language Models
- Korean Code Reviewer (fine-tuned from Qwen2.5 Coder; see the loading sketch after these lists)
- Code vulnerability detector
Curated Datasets
- Korean code review dataset
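As a minimal sketch of how such artifacts can be pulled from the Hub (the repo ids below are placeholders, not the actual published names):

```python
# Illustrative only: loading a published model and dataset from the Hub.
# The repo ids are placeholders, not the actual published names.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("user/qwen2.5-coder-korean-code-review")
model = AutoModelForCausalLM.from_pretrained("user/qwen2.5-coder-korean-code-review")

dataset = load_dataset("user/korean-code-review")  # hypothetical dataset id

prompt = "Please review the following code:\ndef add(a, b): return a+b"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```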

Community Involvement & Leadership
Pseudo Lab - HuggingFace Beyond First PR
Participated in Pseudo Lab's specialized program for contributing to the Hugging Face ecosystem:
- Advanced contributor training for Hugging Face libraries
OSSCA 2025 Continuous Contributor
Ongoing participation in the OSSCA (Open Source Contribution Academy) program:
- Regular contributions to major open-source AI/ML projects
GDGoC HUFS - MLOps Study Lead
As a member of Google Developer Groups on Campus at HUFS, I led the MLOps study group where we:
- Organized weekly workshops on ML deployment best practices
- Taught containerization with Docker for ML models
- Implemented CI/CD pipelines for ML projects using GitHub Actions
- Explored model monitoring and versioning strategies
- Ran hands-on sessions with tools such as MLflow, DVC, and Weights & Biases (see the tracking sketch below)
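A minimal sketch of the kind of MLflow experiment tracking we walked through in those sessions (experiment name, parameters, and metric values are made up for illustration):

```python
# Illustrative only: basic MLflow experiment tracking as covered in the study sessions.
import mlflow

mlflow.set_experiment("mlops-study-demo")  # hypothetical experiment name

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 3e-4)
    mlflow.log_param("batch_size", 32)
    for epoch in range(3):
        # In a real session these values come from a training loop.
        mlflow.log_metric("val_loss", 1.0 / (epoch + 1), step=epoch)
    mlflow.log_metric("best_val_loss", 1.0 / 3)
```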
Skills & Technologies
Deep Learning Frameworks
- PyTorch & PyTorch Lightning
- TensorFlow & Keras
- JAX & Flax
- Hugging Face Transformers
MLOps & Deployment
- Docker & Kubernetes
- MLflow & Weights & Biases
- Model serving with FastAPI (see the sketch after this list)
- Cloud platforms (AWS, GCP)
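A minimal sketch of the FastAPI serving pattern referenced above (the pipeline and route are placeholders, not a production setup):

```python
# Illustrative only: a minimal FastAPI endpoint wrapping a Transformers pipeline.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
classifier = pipeline("sentiment-analysis")  # downloads a small default model


class PredictRequest(BaseModel):
    text: str


@app.post("/predict")
def predict(request: PredictRequest):
    result = classifier(request.text)[0]
    return {"label": result["label"], "score": result["score"]}

# Hypothetical filename serve.py; run with: uvicorn serve:app --port 8000
```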
Achievements & Recognition
Hugging Face Community Contributor
Recognized for consistent contributions to the Hugging Face ecosystem
MLOps Study Group Excellence Award
Best study group leader at GDGoC HUFS 2024
Open Source Impact
Models and datasets used by 1000+ developers worldwide
Research Interests
Natural Language Processing
Large language models, multilingual NLP, and efficient fine-tuning techniques
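For example, a minimal sketch of parameter-efficient fine-tuning with LoRA adapters via the peft library (the base model id is just an example):

```python
# Illustrative only: attaching LoRA adapters with peft, one common
# parameter-efficient fine-tuning technique.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")  # example base model

lora = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                      # low-rank dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in Qwen-style models
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
```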
Model Optimization
Quantization, pruning, and knowledge distillation for edge deployment
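For example, a minimal sketch of post-training dynamic quantization in PyTorch (toy model, CPU inference only):

```python
# Illustrative only: dynamic int8 quantization of a small PyTorch model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Replace Linear layers with dynamically quantized int8 versions for CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 10])
```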
Federated Learning
Privacy-preserving distributed training and secure aggregation
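For example, a minimal sketch of the federated averaging (FedAvg) aggregation step, with made-up client sizes:

```python
# Illustrative only: FedAvg aggregation of client model states, the core
# averaging step in many federated learning setups.
import torch.nn as nn


def fedavg(client_states, client_sizes):
    """Weighted average of client state_dicts, weighted by local sample counts."""
    total = sum(client_sizes)
    return {
        key: sum(state[key] * (size / total)
                 for state, size in zip(client_states, client_sizes))
        for key in client_states[0]
    }


# Toy example: three clients hold locally updated copies of the same tiny model.
clients = [nn.Linear(4, 2) for _ in range(3)]
states = [c.state_dict() for c in clients]
sizes = [120, 80, 200]  # made-up local dataset sizes

global_model = nn.Linear(4, 2)
global_model.load_state_dict(fedavg(states, sizes))
```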