Senior iOS / On-Device ML Engineer — Osaka, Japan.
I take ML/CV/LLM research and ship it on phones, tablets, and AR devices — fast. I maintain CoreML-Models ★ 1,745, the de facto iOS CoreML model zoo. 134 OSS repos covering YOLO / SAM / LaMa / on-device LLM / ARKit. 10 apps on the App Store under my own developer account. 7+ years iOS / on-device ML.
How ARKit's sceneDepth confidence mask + a per-anchor exponential-decay tracker took my retail-vision app from 60% to 88% recognition during motion. With code.
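The gist, as a minimal Swift sketch (class names, the UUID keying, and the decay constant are illustrative, not the app's production code): keep only depth pixels ARKit marks high-confidence, and smooth each anchor's recognition score with a time-aware exponential decay so a moment of motion blur doesn't drop a product that was just recognized.

import ARKit
import Foundation

struct AnchorTrack {
    var score: Float = 0              // smoothed recognition confidence
    var lastSeen: TimeInterval = 0
}

final class RecognitionTracker {
    private var tracks: [UUID: AnchorTrack] = [:]
    private let decayPerSecond: Float = 0.5   // illustrative tuning constant

    // Grab the LiDAR depth map plus ARKit's per-pixel confidence; callers can
    // then ignore depth wherever confidence is below ARConfidenceLevel.high.
    func depthWithConfidence(from frame: ARFrame) -> (depth: CVPixelBuffer, confidence: CVPixelBuffer)? {
        guard let sceneDepth = frame.sceneDepth,
              let confidenceMap = sceneDepth.confidenceMap else { return nil }
        return (sceneDepth.depthMap, confidenceMap)
    }

    // Blend the latest detection score into the anchor's track; old evidence
    // decays exponentially with the time since the anchor was last updated.
    func update(anchorID: UUID, detectionScore: Float, at time: TimeInterval) {
        var track = tracks[anchorID] ?? AnchorTrack()
        let dt = Float(max(0, time - track.lastSeen))
        let keep = exp(-decayPerSecond * dt)
        track.score = track.score * keep + detectionScore * (1 - keep)
        track.lastSeen = time
        tracks[anchorID] = track
    }

    func isRecognized(anchorID: UUID, threshold: Float = 0.6) -> Bool {
        (tracks[anchorID]?.score ?? 0) >= threshold
    }
}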
2025 — present · Pebble Inc. (Founder)
94k-line production iOS retail-vision app — IDVision
Solo lead. ARKit + LiDAR + YOLOv8 + on-device OCR + multilingual voice (Romanian/English) + multi-device backend sync. Sub-200ms recognition on iPhone 15 Pro. 70+ TestFlight builds. Postgres + AWS + GitHub Actions + Sentry.
2024 — 2025 · Ultralytics
YOLOv5 / YOLOv8 mobile deployment team
CoreML / TFLite / ONNX conversion, ANE optimization, mobile reference apps.
2022 — 2024 · Agencia Inc., Tokyo
ML Engineer
Production ML pipelines and on-device inference for camera-based products.
2019 — present · Freelance iOS / CV
Independent
Shipped 10+ apps under my own developer account; maintain CoreML-Models and 134 OSS repos.
Real-world 3D measurement using iPhone Pro / iPad Pro LiDAR. Tap two points → get a distance you can trust.
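A hedged sketch of the measuring core, assuming an ARSCNView and an estimated-plane raycast target (the shipping app may raycast against the LiDAR scene mesh instead; names are illustrative):

import ARKit
import simd

final class TwoPointMeasurer {
    private var points: [SIMD3<Float>] = []

    // Raycast a screen tap into the world; after the second tap, return the
    // distance between the two hits in metres and reset for the next pair.
    func handleTap(at screenPoint: CGPoint, in view: ARSCNView) -> Float? {
        guard let query = view.raycastQuery(from: screenPoint,
                                            allowing: .estimatedPlane,
                                            alignment: .any),
              let hit = view.session.raycast(query).first else { return nil }

        let t = hit.worldTransform.columns.3
        points.append(SIMD3<Float>(t.x, t.y, t.z))
        guard points.count == 2 else { return nil }

        defer { points.removeAll() }
        return simd_distance(points[0], points[1])
    }
}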
On-device gallery of dozens of CoreML models you can run live on your phone — segmentation, detection, pose, style transfer.
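Every model in the gallery runs through the same Vision-wrapped CoreML path; a minimal sketch, where AnyZooModel stands in for whichever Xcode-generated model class is selected (placeholder name, not a real class):

import CoreML
import Vision

// Build a Vision request around a compiled CoreML model ("AnyZooModel" is a
// placeholder for an Xcode-generated model class, not an actual class here).
func makeLiveRequest() throws -> VNCoreMLRequest {
    let coreMLModel = try AnyZooModel(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: coreMLModel)
    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        // Result type depends on the task: VNRecognizedObjectObservation for
        // detection, VNPixelBufferObservation for segmentation / style transfer.
        print(request.results ?? [])
    }
    request.imageCropAndScaleOption = .scaleFill
    return request
}

// Called for every camera frame delivered by AVCaptureVideoDataOutput.
func process(_ pixelBuffer: CVPixelBuffer, with request: VNCoreMLRequest) {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
    try? handler.perform([request])
}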
On-device image upscaler. Runs ESRGAN-style super-resolution networks fully on the Neural Engine — no server.
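The "no server" part comes down to a CoreML compute-units setting; a short sketch, with Upscaler standing in for the Xcode-generated class of a converted ESRGAN-style model (placeholder name):

import CoreML

// Ask CoreML to schedule supported layers on the Neural Engine / GPU rather
// than falling back to CPU ("Upscaler" is a placeholder model class name).
func loadUpscaler() throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .all            // or .cpuAndNeuralEngine on iOS 16+
    return try Upscaler(configuration: config).model
}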
On-device face inpainting / restoration. Repairs missing or masked regions using a CoreML-converted GAN.
One-image animation. Drive a still portrait with motion from another video — on-device.
Text-to-image + inpaint on-device. Stable Diffusion variants ported and optimized to run on iPhone.
Depth-aware bokeh from a single image. Foreground / background separation through on-device monocular depth.
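A sketch of the bokeh step, assuming the monocular depth model has already produced a near-is-bright depth mask as a CIImage (filter choice and radius are illustrative, not necessarily the app's pipeline):

import CoreImage

// Blur the background only: CIMaskedVariableBlur blurs hardest where its mask
// is bright, so invert a near-is-bright depth mask to keep the subject sharp.
func applyBokeh(to image: CIImage, depthMask: CIImage, radius: Double = 12) -> CIImage? {
    let backgroundMask = depthMask.applyingFilter("CIColorInvert")
    guard let blur = CIFilter(name: "CIMaskedVariableBlur") else { return nil }
    blur.setValue(image, forKey: kCIInputImageKey)
    blur.setValue(backgroundMask, forKey: "inputMask")
    blur.setValue(radius, forKey: kCIInputRadiusKey)
    return blur.outputImage
}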
OCR-powered quick-capture memo. Photograph anything → searchable text on-device, private.
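Capture-to-text is Vision's on-device recognizer; a minimal sketch (the recognition-language list is an assumption, not necessarily the app's):

import Vision

// Run on-device text recognition on a captured photo; nothing leaves the phone.
func recognizeText(in cgImage: CGImage, completion: @escaping ([String]) -> Void) {
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = true
    request.recognitionLanguages = ["en-US", "ja-JP"]   // assumed; the app's list may differ

    // Perform off the main thread in real use; shown synchronously for brevity.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}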
Visual task manager with iCloud sync.
Long-exposure + light-painting style photography on iOS.
De facto iOS CoreML model zoo. Ports of YOLO, SAM, LaMa, Tesseract OCR, on-device LLM bindings, Stable Diffusion variants, segmentation, depth, pose, style transfer.
github.com/john-rocky/CoreML-Models → YOLOv5/v8 · SAM · LaMa · Tesseract · MobileNet · DepthAnything · Style-transfer · Pose · OCR · on-device LLM (Llama.cpp / MLX) · Stable-Diffusion · Real-ESRGAN · NeRF · Gaussian-Splat.
github.com/john-rocky?tab=repositories → Where on-device ML actually matters: Apple Silicon / ANE optimization · real-time computer vision · AR / LiDAR · on-device LLM · camera + imaging pipelines · visionOS.
Where: Remote (JST) or relocate with visa to San Francisco / NYC / Seattle / Tokyo / London.
Form: Full-time or contract (2 weeks – 3 months, $150–250/hr).
Start: Immediately.
I reply within a day. Open to a quick intro chat anytime.
✉ Email rockyshikoku@gmail.com