Text-to-3D works well when you can describe a thing in one sentence. For everything else — a sketch, a reference photo, an existing physical prop you want to remix — image-to-3D is faster.
This guide uses the same 3D creative actor as the text-to-STL guide, with the `generate_3d_from_image` tool instead of `generate_3d_from_text`. The output is a watertight STL, optionally with auto-generated SLA supports for the resin printer you specify.
Step by step
- 01
Pick the input image
Single object on a plain background works best — clutter confuses the reconstructor. A clean product shot, a sketch on paper, or a three-quarter-view reference photo all reconstruct cleanly. Skip group photos and busy scenes.
- 02
Pass image_url and a target size
Use the `generate_3d_from_image` tool with `image_url` (must be a publicly fetchable URL) and `target_size_mm` (the reconstructor will scale the mesh to that real-world size). Choose a quality tier — Standard ($0.15) is a sensible default.
- 03
Validate against your printer
After the mesh comes back, run `validate_for_printer` against your specific printer profile to flag thin walls, unsupportable overhangs, or mesh issues that the slicer would catch later. Then run `generate_sla_supports` to add a tree-style support structure tuned to that printer.
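Taken together, steps 02 and 03 map to three tool calls against the same endpoint. A minimal Python sketch of the request bodies (payloads only, no HTTP; the parameter names for the validate and support tools are assumptions — check the tool schema):

```python
import json

# Endpoint from the API-call example on this page.
API_URL = "https://api.42rows.com/v1/3d-creative"

def tool_body(tool: str, **params) -> str:
    """Serialise one tool invocation as a JSON request body."""
    return json.dumps({"tool": tool, **params})

# Step 02: reconstruct a mesh from one image, scaled to 80 mm real-world size.
generate = tool_body(
    "generate_3d_from_image",
    image_url="https://example.com/raven-sketch.png",  # must be publicly fetchable
    target_size_mm=80,
    quality="standard",
    format="stl",
)

# Step 03: validate against a printer profile, then add tree-style SLA supports.
# (Parameter names below are illustrative assumptions.)
validate = tool_body("validate_for_printer", printer="elegoo_saturn_3_ultra")
supports = tool_body("generate_sla_supports", printer="elegoo_saturn_3_ultra")
```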
Example prompts
Copy, click, tweak — the CTA opens the terminal with the prompt pre-loaded.
- image_url: a clean black-and-white sketch of a stylised raven on a perch. target_size_mm: 80. printer: elegoo_saturn_3_ultra. quality: standard.
- image_url: three-quarter-view photo of a vintage typewriter. target_size_mm: 50. printer: anycubic_photon_mono_m5s. quality: high. (For miniature display, not a functional reproduction.)
- image_url: 2D concept of a low-poly mountain dragon. target_size_mm: 100. printer: phrozen_sonic_mini_8k. quality: standard. Notes: low-poly faceted style.
API call
Standard REST. Bearer token, JSON body, URL response. Works in any HTTP client, n8n, Make, Zapier, or MCP agent.
curl -X POST https://api.42rows.com/v1/3d-creative \
-H "Authorization: Bearer sk_..." \
-H "Content-Type: application/json" \
-d '{
"tool": "generate_3d_from_image",
"image_url": "https://example.com/raven-sketch.png",
"target_size_mm": 80,
"printer": "elegoo_saturn_3_ultra",
"quality": "standard",
"format": "stl"
}'
Pricing
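The same call from Python using only the standard library — a sketch that keeps the network send behind an explicit flag so you can inspect the prepared request offline first (the response format is not specified here beyond the job returning a URL):

```python
import json
import urllib.request

def submit_job(api_key: str, body: dict, send: bool = False):
    """Prepare (and optionally send) a 3D-creative job.

    With send=False the prepared urllib Request is returned so you can
    inspect headers and body without touching the network."""
    req = urllib.request.Request(
        "https://api.42rows.com/v1/3d-creative",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    if not send:
        return req
    with urllib.request.urlopen(req) as resp:  # response carries the result URL
        return json.load(resp)

req = submit_job("sk_...", {
    "tool": "generate_3d_from_image",
    "image_url": "https://example.com/raven-sketch.png",
    "target_size_mm": 80,
    "printer": "elegoo_saturn_3_ultra",
    "quality": "standard",
    "format": "stl",
})
```

Set `send=True` to actually POST; the dry-run default makes the snippet safe to paste into a notebook or an n8n/Make function node.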
Same tier matrix as text-to-3D: Lite $0.08 · Standard $0.15 · High $0.25 · Ultra $2.00. Imagen-assisted ($0.35) and multi-view ($0.75) variants available for harder cases.
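For batch runs the tier matrix above translates into a one-liner cost estimate. A small sketch (the tier keys are my naming, not API values):

```python
# Per-generation prices in USD, from the tier matrix above.
PRICE = {
    "lite": 0.08,
    "standard": 0.15,
    "high": 0.25,
    "ultra": 2.00,
    "imagen_assisted": 0.35,
    "multiview": 0.75,
}

def batch_cost(jobs) -> float:
    """Estimate total cost for a batch, e.g. [("standard", 10), ("high", 2)]."""
    return round(sum(PRICE[tier] * count for tier, count in jobs), 2)

batch_cost([("standard", 10), ("high", 2)])  # -> 2.0
```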
FAQ
What kind of image works best?
A single object, plain (white or solid colour) background, with clear silhouette. Photographs work, sketches work, concept art works. Group photos or busy scenes degrade reconstruction quality.
Does the API need just one view, or multiple?
One image is enough for the standard pipeline. For harder geometries, the `generate_3d_multiview` tool ($0.75) takes 4 views (front, back, left, right) and reconstructs more reliably.
Will the result match the photo exactly?
The reconstructor is good at silhouette and large-scale shape, less reliable on fine surface detail. Treat output as a remixable starting point — fine details may need touch-up in Blender or your CAD tool.
Can I upload a private image?
Pass any publicly fetchable URL — including a presigned S3/R2 URL with a short expiry if you do not want it permanently public. The actor fetches the image once at request time.
How long does image-to-3D take vs text-to-3D?
Comparable: Standard tier is ~30 seconds for both. Image-to-3D adds a few seconds for the image fetch and preprocessing.
Ship it
Use the first example prompt as a starter — the button opens the public terminal with it pre-filled.