In Development

Scan Any Object.
Any Phone.

AI-powered 3D object recognition and depth reconstruction for Android. No LiDAR. No time-of-flight sensor. No special hardware — just the camera already in your pocket.

~300ms · Scan Duration
Any Android · Camera2 API
STL / OBJ · Export Formats
On-Server · AI Processing

Three steps to a 3D model

ObjectRecognize uses Depth From Focus — a classical computer vision technique reimagined for mobile, with AI-accelerated reconstruction on the server.

Step 01

Point & Auto-Trigger

ARCore spatial awareness detects stable surfaces and objects. When confidence exceeds threshold, a focus-sweep miniscan triggers automatically — no tap required.

Step 02
f/15

Focus Sweep (~300ms)

The camera sweeps focus from macro to infinity across 15 log-spaced steps at 60fps. A colour-jitter overlay marks sharpest pixels per frame — the phone captures a focal stack video in under a third of a second.
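Camera2 drives manual focus in diopters (1/metres), where 0.0 means infinity, so a log-spaced sweep is just a geometric progression between a macro distance and a near-infinity floor. A minimal sketch — the 10-diopter (10 cm) macro end and the 0.1-diopter floor are illustrative assumptions, since a log spacing can never reach exactly zero:

```python
def focus_sweep_steps(macro_diopters=10.0, far_diopters=0.1, steps=15):
    """Log-spaced focus distances (in diopters) from macro to near-infinity.

    Camera2's LENS_FOCUS_DISTANCE is measured in diopters (1/metres),
    where 0.0 means infinity; we stop at a small positive floor because
    a log spacing cannot reach zero.
    """
    ratio = far_diopters / macro_diopters
    return [macro_diopters * ratio ** (i / (steps - 1)) for i in range(steps)]

steps = focus_sweep_steps()  # 15 values, macro end first, near-infinity last
```

Stepping in diopters rather than metres is what makes the spacing useful: equal diopter steps spread depth resolution evenly across near and far subjects.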

Step 03
.STL

Server Reconstructs 3D Model

The clip uploads to our server where a Laplacian sharpness pipeline builds a dense depth map, Open3D stitches multi-angle captures, and Poisson reconstruction outputs a clean STL or OBJ in seconds.

What's being built

Six distinct systems, all running on any Android phone without dedicated depth hardware. Designed to process on our servers — no app-side compute, no model downloads.

8cm
Core Tech

Depth From Focus Scanning

A programmatic focus sweep (macro→∞ in ~300ms) creates a focal stack. Laplacian sharpness analysis extracts a dense per-pixel depth map — without a single depth sensor.

9.2 ✓ layers ✓ surface
AI Vision

AI Print Quality Assessment

Photograph a completed or in-progress 3D print. A vision model (minicpm-v) analyses layer shifts, stringing, warping, and under-extrusion — returning a structured quality score in seconds.
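The page doesn't pin down the response format, but "structured quality score" suggests something like the sketch below. The JSON field names and the pass/review rule here are assumptions for illustration, not the real API:

```python
import json

# Hypothetical response schema for the print-quality check; the real
# field names are assumptions, not documented by this page.
SAMPLE_RESPONSE = '''
{
  "score": 9.2,
  "defects": {"layer_shift": false, "stringing": true,
              "warping": false, "under_extrusion": false}
}
'''

def summarise_quality(raw: str) -> str:
    """Turn the structured report into a one-line verdict."""
    report = json.loads(raw)
    flagged = [name for name, seen in report["defects"].items() if seen]
    verdict = "pass" if report["score"] >= 7.0 and not flagged else "review"
    return f"{report['score']:.1f}/10 ({verdict}): " + (
        ", ".join(flagged) if flagged else "no defects")

print(summarise_quality(SAMPLE_RESPONSE))  # 9.2/10 (review): stringing
```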

AUTO-TRIGGER
ARCore

Spatial Auto-Trigger

ARCore's plane detection, combined with feature-point density monitoring, auto-starts each miniscan when a stable, scannable object enters the frame. No tapping, no manual timing required.
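The trigger condition can be sketched as a debounce over recent frames: fire only once tracking has been good for long enough, then re-arm. The thresholds and the AutoTrigger helper below are illustrative assumptions, not the app's actual values; ARCore itself supplies the plane-detected flag and per-frame feature-point count.

```python
from collections import deque

class AutoTrigger:
    """Fires a miniscan once tracking stays stable for enough frames.

    Thresholds here are illustrative; ARCore supplies the plane-detected
    flag and the feature-point count for each camera frame.
    """
    def __init__(self, min_points=200, stable_frames=30):
        self.min_points = min_points
        self.history = deque(maxlen=stable_frames)

    def update(self, plane_detected: bool, feature_points: int) -> bool:
        self.history.append(plane_detected and feature_points >= self.min_points)
        ready = len(self.history) == self.history.maxlen and all(self.history)
        if ready:
            self.history.clear()  # re-arm after triggering a scan
        return ready
```

Requiring a run of stable frames, rather than a single good one, is what keeps the trigger from firing while the phone is still moving.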

ICP stitch · ARCore poses
Reconstruction

Multi-Angle Stitching

ARCore exports a camera pose for each miniscan. The server transforms each depth map to world coordinates and refines alignment with Iterative Closest Point for a seamless full-object model.
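The world-coordinate step is an ordinary 4x4 homogeneous transform applied per point. A minimal sketch — written row-major for readability; ARCore's Pose.toMatrix actually emits a column-major array, so transpose it first:

```python
def apply_pose(pose, point):
    """Transform one camera-space point into world space with a 4x4
    homogeneous pose matrix (row-major here for readability)."""
    x, y, z = point
    return tuple(
        pose[r][0] * x + pose[r][1] * y + pose[r][2] * z + pose[r][3]
        for r in range(3)
    )

# A pose that translates by (1, 0, 2): the camera origin lands there.
pose = [[1, 0, 0, 1],
        [0, 1, 0, 0],
        [0, 0, 1, 2],
        [0, 0, 0, 1]]
world = apply_pose(pose, (0.0, 0.0, 0.0))  # (1.0, 0.0, 2.0)
```

Once every depth map is in world coordinates, the residual drift between overlapping captures is what ICP corrects — Open3D ships this as registration_icp in its registration pipeline.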

STL OBJ PLY
Output

Instant STL / OBJ Export

Poisson surface reconstruction produces a watertight mesh. Export to STL for slicing and printing, OBJ with textures for modelling, or PLY point cloud for further processing — from a single scan session.
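STL's ASCII form is simple enough to show whole. A minimal writer sketch — emitting zero normals is a common shortcut, since most slicers recompute them from vertex winding order:

```python
def write_ascii_stl(triangles, name="scan"):
    """Serialise triangles [((x,y,z), (x,y,z), (x,y,z)), ...] as ASCII STL.

    Normals are left at (0, 0, 0); downstream tools typically recompute
    them from the counter-clockwise vertex winding.
    """
    lines = [f"solid {name}"]
    for a, b, c in triangles:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for vx, vy, vz in (a, b, c):
            lines.append(f"      vertex {vx} {vy} {vz}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

stl = write_ascii_stl([((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```

Real exports would use the binary STL variant for size, but the structure is the same: one facet record per triangle, nothing else.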

NO LiDAR REQUIRED
Universal

Any Android Phone

Camera2 manual focus control (the MANUAL_SENSOR capability, guaranteed at the FULL and LEVEL_3 hardware levels) ships on most modern Android devices. No time-of-flight sensor, no LiDAR, no depth module — just a phone whose camera exposes that capability.

Depth Without a
Depth Sensor

Traditional mobile 3D scanning relies on LiDAR or time-of-flight sensors only found in premium flagship phones. ObjectRecognize takes a different approach: Depth From Focus (DFF) extracts geometry from optics alone.

As the camera sweeps focus from close-range to infinity, every surface in the scene passes through peak sharpness at exactly one focus distance. A Laplacian sharpness metric pinpoints that moment for every pixel — producing a dense depth map without a single depth photon.
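The core idea fits in a few lines: score every frame of the focal stack with a Laplacian, then take the per-pixel argmax across the stack. A toy sketch on plain nested lists — a real pipeline would use OpenCV's cv2.Laplacian on full-resolution frames:

```python
def laplacian_sharpness(img):
    """Per-pixel |Laplacian| of a 2D grayscale frame (list of lists).

    Uses the 4-neighbour kernel; border pixels are left at 0.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = abs(img[y - 1][x] + img[y + 1][x] +
                            img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
    return out

def depth_from_focus(stack):
    """For each pixel, the index of the frame where sharpness peaks.

    With log-spaced focus steps, that frame index maps directly to a
    focus distance — i.e. a depth value.
    """
    sharp = [laplacian_sharpness(frame) for frame in stack]
    h, w = len(stack[0]), len(stack[0][0])
    return [[max(range(len(stack)), key=lambda i: sharp[i][y][x])
             for x in range(w)] for y in range(h)]
```

A pixel on a flat, defocused surface scores near zero everywhere; the same pixel in the frame where its surface hits peak focus produces a sharp Laplacian spike, and the spike's frame index is its depth.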

Camera2 API · ARCore · OpenCV · Open3D · Laplacian LoG · ICP Stitching · Poisson Mesh · minicpm-v
~300ms · Focus sweep duration at 60fps
15–20 · Log-spaced focus steps per scan
87% · Of active Android devices supported via ARCore Depth API
0 · Special hardware sensors required

Be first when we launch

ObjectRecognize is in active development. Enter your email to get notified when early access opens — no spam, one email.