§03 blogs / archive
writing
Technical posts written during my time at Research Commons — neural network fundamentals, PyTorch internals, edge-side function calling, and distributed training on Kubernetes. Each one links out to the canonical post on the Research Commons engineering blog.
- §01 2026.03.19 research commons »
Single-Node Training with GKE and Ray
End-to-end Ray workflow for a distributed training job on Kubernetes — cluster setup, the KubeRay operator, Ray job manifests, the training script, and the monitoring dashboard.
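The core pattern the post builds on Ray is synchronous data parallelism: each worker computes a gradient on its own shard, the gradients are averaged, and one weight update is applied. A toy stand-in for that idea, in plain Python with no Ray or Kubernetes (the functions and data here are illustrative, not from the post):

```python
# Toy sketch of the data-parallel pattern behind a distributed training
# job: each worker computes a gradient on its own data shard, then the
# gradients are averaged (an "all-reduce") before the weight update.

def worker_gradient(w, shard):
    """Gradient of mean squared error for y = w * x on one data shard."""
    # d/dw mean((w*x - y)^2) = mean(2 * (w*x - y) * x)
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def train_step(w, shards, lr=0.01):
    """One synchronous data-parallel step across all shards."""
    grads = [worker_gradient(w, s) for s in shards]  # sequential here; remote tasks under Ray
    avg = sum(grads) / len(grads)                    # all-reduce (mean)
    return w - lr * avg                              # SGD update

# Data generated from y = 3x, split across 4 simulated "workers".
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):
    w = train_step(w, shards)
print(round(w, 3))  # converges toward 3.0
```

Under Ray, `worker_gradient` would become a remote task and the list comprehension a batch of futures; the averaging and update stay the same.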
read post → external
- §02 2026.01.29 research commons »
FunctionGemma: What is it, and how can you fine-tune it to do better?
A walkthrough of Google's 270M-parameter FunctionGemma for edge function calling — special token format, the full calling lifecycle, docstring conventions, and fine-tuning results on BFCL.
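One piece the post covers, docstring conventions, is the idea that a function's signature and docstring carry enough information to describe it as a callable tool. A generic sketch of that step, using only the standard library; the schema shape and the `get_weather` example are illustrative, not FunctionGemma's actual special-token format:

```python
# Generic sketch: derive a tool-schema dict from a Python function's
# signature and docstring. Illustrative shape only, not FunctionGemma's
# token format.
import inspect

def get_weather(city: str, unit: str = "celsius") -> str:
    """Return the current weather for a city."""
    return f"22 degrees {unit} in {city}"

def to_tool_schema(fn):
    """Build a minimal tool description from signature + docstring."""
    sig = inspect.signature(fn)
    params = {}
    for name, p in sig.parameters.items():
        params[name] = {
            "type": getattr(p.annotation, "__name__", "any"),
            # Parameters without a default are treated as required.
            "required": p.default is inspect.Parameter.empty,
        }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": params,
    }

schema = to_tool_schema(get_weather)
print(schema["name"])                # get_weather
print(schema["parameters"]["city"])  # {'type': 'str', 'required': True}
```

A model fine-tuned for function calling consumes a serialized form of exactly this kind of description, which is why consistent docstrings matter for results on benchmarks like BFCL.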
read post → external
- §03 2026.01.11 research commons »
Intro to Neural Networks
Neural network fundamentals — core components, the training loop via PyTorch, tinygrad's minimalist take, and cppgrad, a C++ neural-network framework being built from the ground up.
read post → external
- §04 2025.12.18 research commons »
PyTorch Overview — From Research to Prod
Opening chapter of a PyTorch series — the library's philosophy, its three core components, real-world use cases where PyTorch shines, and setting up your environment to start coding.
read post → external