
about openflexure microscope
The OpenFlexure Microscope is
a 3D-printable, open-source automated microscope that uses a Raspberry
Pi computer to provide high-quality digital imaging capabilities at a
fraction of the cost of commercial lab equipment. Its primary goal is
to improve access to diagnostic and research-grade microscopy,
particularly in low-resource settings.
Core technology
The OpenFlexure Microscope is distinguished by its innovative use of a
3D-printed flexure mechanism for precise movement. This design,
refined over years of development, provides several key benefits:

Precision: The monolithic flexure stage allows for fine, sub-micron
positioning of samples and optics without relying on expensive,
tightly-toleranced machined parts or bearings.
Cost-effectiveness: The use of 3D-printed parts and readily available,
off-the-shelf components, such as a Raspberry Pi camera, drastically
lowers the cost of the device. According to IO Rodeo, a
high-resolution version of the microscope can be built for about $305
in parts, while a low-cost version is approximately $185.
Local manufacturing and maintenance: Because the primary structural
components are 3D-printed, the microscope can be manufactured and
repaired locally. This eliminates dependence on fragile international
supply chains and supports local economies, especially in developing
nations.

Features and capabilities
Despite its low cost, the OpenFlexure Microscope offers features
comparable to more expensive commercial instruments.

Imaging modes: The modular design allows for various imaging
techniques by swapping out the optics module. Available modes include:

Bright-field (trans- and epi-illumination)
Polarization-contrast imaging
Epifluorescence imaging

Automation: The Raspberry Pi enables automated functions controlled by
open-source software:

Autofocus: Two image-based algorithms automatically bring samples into focus.
Automated scanning: The system can perform large, automated tile scans
of samples to build high-resolution composite images.
Time-lapse imaging: Samples can be imaged automatically over long
periods to capture changes.

Software: The open-source software, including the graphical user
interface (OpenFlexure Connect), is designed for ease of use by
non-programmers, with full functionality accessible via a standard
network.
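The image-based autofocus described above typically maximises a sharpness metric over a sweep of z positions. Below is a minimal sketch of that idea using a variance-of-Laplacian score; this is illustrative only, not the OpenFlexure implementation, and the function names are made up:

```python
import numpy as np

def focus_score(img):
    """Variance of the discrete Laplacian: sharp images have strong
    local intensity changes, so a higher score means better focus."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def coarse_autofocus(grab_frame, z_positions):
    """Sweep the z positions, grab a frame at each (grab_frame is a
    user-supplied callable), and return the sharpest position."""
    return max(z_positions, key=lambda z: focus_score(grab_frame(z)))
```

A real system would refine this with a second, finer sweep around the best coarse position, which is essentially what a two-stage image-based autofocus does.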

Applications and impact
The OpenFlexure Microscope is used for a wide range of applications in
both research and global health.

Global health: It has been used in field trials for diagnosing
diseases, including malaria in Tanzania and cancer in oesophageal biopsies.
Its low cost and ease of maintenance are particularly valuable in
areas with limited resources.
Education: The ability to build and customize the microscope makes it
a valuable tool for hands-on teaching and STEM education.
Research: Researchers have adapted the open-source platform for a
variety of specialized applications, including super-resolution
microscopy and optical sectioning.

Getting involved
As an open-source project, anyone can build or contribute to the
OpenFlexure Microscope.

Build your own: The official OpenFlexure website provides
comprehensive instructions and lists of components for building either
a high-resolution or low-cost version of the microscope.
Join the community: The project is supported by a global community of
users, developers, and researchers who share insights and contribute
to the ongoing development. You can find links to the project's
documentation, community forums, and code repositories on its website.

can you list other similar projects with a brief comparison? are there
any similar tomographic microscopes (is this tomographic)? can you
list the non-printed parts for both high and low resolution?
The OpenFlexure Microscope is not a tomographic microscope, but it can
be adapted for tomographic imaging. The standard OpenFlexure uses a
single camera to capture
two-dimensional (2D) images, and while it provides precise 3D motion,
it does not build 3D models of samples in the manner of a tomographic
system. Some researchers have adapted OpenFlexure stages to create
low-cost tomographic systems, such as OptiJ, an open-source optical
projection tomography system.
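For context, tomographic systems such as OPT reconstruct a cross-section from projections taken at many angles, most simply via filtered back-projection. The following is a minimal parallel-beam sketch in Python with numpy; it is illustrative only and is not taken from OptiJ or any other project's code:

```python
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    """Minimal filtered back-projection for parallel-beam data.
    sinogram: array of shape (n_angles, n_detectors)."""
    n_angles, n_det = sinogram.shape
    # Ramp (Ram-Lak) filter applied in the frequency domain
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    # Back-project each filtered projection across the image grid
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for theta, proj in zip(np.deg2rad(angles_deg), filtered):
        t = X * np.cos(theta) + Y * np.sin(theta)   # detector coordinate
        idx = np.clip(np.round(t).astype(int) + mid, 0, n_det - 1)
        recon += proj[idx]
    return recon * np.pi / n_angles
```

Feeding it the analytic sinogram of a uniform disk (each projection of a disk of radius r is 2*sqrt(r^2 - s^2)) recovers a disk-shaped image, which is the basic sanity check for any tomographic pipeline.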
Comparison with similar open-source projects
Primary focus
OpenFlexure Microscope: High-precision, automated, research-grade microscopy at a low cost.
Foldscope: Extreme affordability and simplicity for education and citizen science.
uScope: Manual-focus digital microscopy for simple, low-cost imaging.

Motion control
OpenFlexure Microscope: Automated, high-precision 3-axis motion using a 3D-printed flexure stage and stepper motors.
Foldscope: Manual; the mounted sample is moved by sliding it with the fingers.
uScope: Manual focus control.

Optics
OpenFlexure Microscope: Modular design that accepts high-resolution objectives; the low-cost version uses the Raspberry Pi camera lens.
Foldscope: Very simple, low-cost optics, often a small glass sphere used as a lens.
uScope: Typically a simple webcam or similar digital camera lens.

Cost
OpenFlexure Microscope: Varies by version and modules, from under $200 for the low-cost version upward for a fully equipped setup.
Foldscope: Extremely low cost (a few dollars); designed to be highly accessible and widely distributed.
uScope: Designed for low cost, using a manual stage and off-the-shelf camera parts.

Applications
OpenFlexure Microscope: Medical diagnostics, educational demonstrations, and advanced research such as super-resolution imaging.
Foldscope: Outreach, educational workshops, and basic field observation.
uScope: Educational purposes and simple digital image capture.
Non-printed parts for OpenFlexure microscope
Common parts for both high-resolution and low-cost versions

Electronics:

Raspberry Pi (any version capable of running the software)
Raspberry Pi Camera Board v2 (the 8-megapixel version, plus a lens removal tool)
Sangaboard v0.5 or newer (motor control board, pre-programmed)
3x 28BYJ-48 5V stepper motors

Illumination:

Illumination kit with a white LED, power cable, and diffusers
Condenser lens

Hardware and tools:

A set of M3 and M2 screws, nuts, and washers
Ball-end Allen key
MicroSD card pre-loaded with the OpenFlexure OS

High-resolution version

Optics:

40x objective lens
12.7mm achromatic lens

Optics adapter:

3D-printed high-resolution optics parts (these replace the low-cost
optics parts)

Low-cost version

Optics:

No additional optics are needed, as this version uses the lens from
the Raspberry Pi camera board.



list all open microscope projects that are currently active, as well
as all open tomographic projects active or not
A broad range of open-source microscope and tomographic projects
exist, with many active projects and some that are no longer
maintained. These projects cover everything from simple, educational
tools to advanced, high-resolution research systems, often leveraging
3D printing and off-the-shelf components.

Active open-source microscope projects
This is a partial list of notable projects, as many are developed by
individual labs and shared on platforms like GitHub.
Modular and 3D-printable platforms

OpenFlexure: A modular, automated, and 3D-printable microscope with
flexure-based stages for precise movement. It has a robust global
community and is used in both research and field settings for
applications like diagnosing infectious diseases.
UC2 [You.See.Too.]: A versatile, 3D-printed modular framework for
building interactive optical systems. It can be configured for various
microscopy techniques, including fluorescence and structured
illumination microscopy (SIM).
M4All: A project for a modular, 3D-printed, high-end microscope with
multicolour detection and a Raspberry Pi-based readout system.
OpenFrame: A modular and low-cost hardware platform that provides a
basis for adapting various light microscopy techniques.

Educational and field-based projects

Foldscope: An origami-based, paper microscope that is extremely
affordable (under $1) and intended for universal access to microscopy.
It is used widely for education and citizen science.
PlanktonScope: A microscope designed for citizen scientists to image
sea plankton, developed by the Prakash lab.
Entomoscope: An open-source photomicroscope designed to assist in
biodiversity discovery.

Advanced microscopy techniques

MINI2P: An open-source, two-photon miniature microscope for
high-resolution calcium imaging in freely moving mice.
MesoSPIM: A sophisticated, large-scale light-sheet microscopy system
for imaging large specimens.
K2 TIRF: An open-source, triple-colour total internal reflection
fluorescence (TIRF) microscope for live-cell and single-molecule
imaging.
easySTORM: A low-cost system for localization and TIRF microscopy.

Smartphone-based projects

Luduscope: An interactive, smartphone-based microscope designed for
life-science education.
MatchboxScope: A mini-microscope for under 10 Euros that uses an ESP32
microcontroller.

Open-source tomographic projects
Medical and X-ray computed tomography (CT)

Tomviz: A cross-platform, open-source application for processing,
visualizing, and analyzing 3D tomographic data, particularly from
electron microscopy and other sources.
OpenCT/OpenCT2: A project aimed at creating an inexpensive,
open-source desktop CT scanner for academic and educational use. It
leverages low-intensity radioisotope sources and image processing to
achieve results, but with much longer acquisition times than
commercial systems.
3D Slicer: A widely used, robust software platform for the
visualization and analysis of medical image data, including
reconstruction of CT and MRI scans into 3D models.
ASTRA Toolbox: A modular, open-source software suite for tomographic
reconstruction, with a strong focus on research. It supports various
algorithms and is accelerated with GPUs.
OpenMBIR-TEM: An open-source project for developing model-based
iterative reconstruction (MBIR) algorithms for electron tomography.

Optical tomographic projects

Open OPT: An older project for the design and implementation of a
custom-built optical projection tomography (OPT) system.
OptiJ: An open-source project for optical projection tomography, often
used in conjunction with Fiji software.
Field-Portable Lensfree Tomographic Microscope: A project that created
a compact, lens-free tomographic microscope by using multi-angle
illumination and holographic reconstruction.

Defunct or inactive open-source projects
Some projects, while still documented, may no longer be actively
maintained or developed. Their code and designs can still serve as
inspiration and a reference for new projects. Examples include:

Open SPIM: One of the earlier open-access platforms for Selective
Plane Illumination Microscopy (SPIM), or light-sheet microscopy.
µCube: An older framework for 3D-printable optomechanics from the
Haseloff lab in 2018.
WOSM (Warwick Open Source Microscope): An older project for a TIRF and
educational fluorescence microscope.
Tomocell: A GitHub repository for tomography that has not been updated
since December 2020.


only hardware, not software. try again? also how is the foldscope so
cheap? is there a robotic foldscope?
Active open hardware microscope projects
OpenFlexure

Optics: Modular system that supports various optical components, from
simple Raspberry Pi camera lenses for low-cost setups to standard,
high-magnification microscope objective lenses for high-resolution
models.
Mechanics: Primarily 3D-printed with a unique monolithic flexure stage
that provides sub-micron precision for X, Y, and Z axis movement. It
uses stepper motors and a motor control board for automated control.
Electronics: Requires a Raspberry Pi computer, a Sangaboard motor
controller, and stepper motors to drive the automated stage.
Illumination is provided by an LED board.

UC2 [You.See.Too.]

Optics: Highly modular and flexible, with optical components housed in
3D-printed blocks that can be rearranged. This allows for numerous
imaging techniques, including fluorescence and structured illumination
microscopy.
Mechanics: Centered around a 3D-printed, cube-based framework. It can
be built with manual focusing or automated stages driven by various
motors and controllers.
Electronics: Uses modular electronic components that interface via a
central controller, such as an ESP32 microcontroller with a CNC
shield. It can control a wide range of devices, including LEDs and
lasers.

PlanktonScope

Optics: Uses a simplified optical system with two reversed M12 lenses,
offering a lower-cost alternative to traditional objectives.
Mechanics: The stage is motorized with a delta design using stepper
motors and push rods. This creates a moving platform for precise
control over the sample's position.
Electronics: Powered by a Raspberry Pi and Pi Camera. It features a
custom BUS for power and data, and its LED illumination is arranged in
concentric rings for flexible lighting control.

The Foldscope's low cost
The Foldscope's ultra-low cost (around $1 in parts) is due to a
"frugal science" approach that re-engineers the instrument with
readily available and inexpensive materials.

Materials: It is made primarily from a single sheet of
water-resistant, durable paper that is folded like origami to form the
microscope's structure.
Minimalist design: The device simplifies complex optical and
mechanical components. Instead of an expensive glass objective lens,
it uses a tiny spherical glass or sapphire ball lens.
Manual operation: There are no electronic components for automation.
Focusing and panning are controlled manually by manipulating the paper
slides with your fingers.
Illumination: The light source is a simple, battery-powered LED
instead of an external power supply and lamp.

Robotic Foldscope
There is no official robotic version of the Foldscope. The project's
philosophy prioritizes affordability and field science, not
automation: a motorized Foldscope would defeat the core purpose of the
design, which is to be a simple, manual, paper-based device. For
automated microscopy, projects like OpenFlexure or UC2 are far better
suited, because they are designed from the ground up for motorized
precision and control.
Open tomographic projects (hardware only)
Medical and X-ray

OpenCT/OpenCT2 (Inactive): This was a project to build a low-cost,
desktop X-ray CT scanner. Its hardware included an X-ray source (using
low-intensity radioisotope sources) and a sensor (e.g., a simple
camera).

Optical

OptiJ (Active): This system can be built by adapting standard optical
and mechanical components, including a rotating stage, a camera, and a
light source. The project is open source and designed to work with
software for tomographic reconstruction.
Field-Portable Lensfree Tomographic Microscope (Active): The hardware
for this project consists of a light source and a sensor. It uses
multi-angle illumination and computational methods to reconstruct 3D
images, eliminating the need for bulky lenses.
Open OPT (Inactive): This older project focused on the design and
implementation of a custom-built optical projection tomography (OPT)
system. The hardware typically involved a light source, a detector
(e.g., a camera), and a rotation stage.


which project would be most conducive to making a cheap robotic
microscope for non-medical purposes, such as mechanical watch motion
work, botany, microchip study — using the foldscope approach to a
cheap lens? would it be challenging?
Among the active open hardware microscope projects, the
OpenFlexure Microscope is the most conducive platform for creating a
cheap robotic microscope for non-medical applications. It is designed
for precise, automated movement, and the optical path is already
modular, allowing for substitution with a very low-cost,
Foldscope-style lens.
Suitability of OpenFlexure for a low-cost robotic build

Modular design: OpenFlexure has an interchangeable optics module that
can be swapped to accommodate different imaging modes. This feature
makes it easy to integrate a custom, low-cost lens system.
Built-in robotic stage: The OpenFlexure's core value is its precision,
automated movement, which is essential for robotic tasks. This
eliminates the need to build a motorized stage from scratch.
Low-cost version: The project's developers have already published a
low-cost version that uses the lens from a Raspberry Pi Camera,
demonstrating its adaptability to inexpensive optics.
Established software: The project's existing open-source software,
OpenFlexure Connect, already supports automated functions like
scanning and focusing, which are critical for a robotic system.

Challenges of a Foldscope-style lens
While technically possible, adapting a Foldscope-style ball lens to a
robotic microscope would be challenging, primarily due to the inherent
limitations of that type of lens.

Significant image distortion: Ball lenses cause high levels of
chromatic and geometric aberration, especially towards the edge of the
field of view. The camera sees a crisp image only through a very
small, central aperture.
Autofocus challenges: A robust autofocus system is difficult to
implement with a cheap, low-quality lens. The lack of distinct,
high-contrast features can cause the focus algorithm to struggle or
"hunt."
Limited field of view: The high magnification of a small ball lens
comes with a very narrow field of view, making observation of larger
subjects more difficult and requiring more robotic motion to cover them.
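To see why a ball lens is so cheap yet so highly magnifying, the thick-lens formula gives its focal length and the conventional simple-magnifier rule gives its power. A sketch, assuming n ≈ 1.5 glass (function names are illustrative):

```python
def ball_lens_focal_mm(diameter_mm, n=1.5):
    """Effective focal length of a ball lens, f = n*D / (4*(n - 1)),
    measured from the centre of the sphere (thick-lens result)."""
    return n * diameter_mm / (4.0 * (n - 1.0))

def simple_magnifier_power(focal_mm, near_point_mm=250.0):
    """Conventional magnifier power relative to the 250 mm near point."""
    return near_point_mm / focal_mm
```

With n ≈ 1.5, a 2.4 mm sphere gives f ≈ 1.8 mm and roughly 140x, in the same class as the Foldscope's quoted magnification; the tiny focal length of a millimetre-scale sphere is what lets it stand in for a multi-element objective at almost no cost.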

Comparison with alternative projects

UC2 (You.See.Too.): UC2 is also an excellent modular platform. It is
more focused on flexibility and rapid prototyping than the
high-precision movement of OpenFlexure. OpenFlexure's flexure-based
stage is arguably better suited for smooth, precise robotic motion
required for applications like examining watch movements.
Building from scratch: While possible, building a robotic microscope
from the ground up to match OpenFlexure's quality would likely require
significant development and engineering effort to achieve comparable
performance.
Commercial robotic microscopes: Off-the-shelf commercial options are
often prohibitively expensive. Open hardware projects offer a
compelling balance of cost and functionality.


is the openflexure suitable for non-automated use? are there any good
smaller or less-professional projects?
Is the OpenFlexure suitable for non-automated use?
Yes, the OpenFlexure microscope can be adapted for manual,
non-automated use. While its motorized version is the most common, its
modular design allows for modifications to create a simpler,
manual-motion version.

The OpenFlexure community has discussed and developed a manual body
and stand that can be built with thumbwheels instead of motors,
effectively converting the motorized flexure stage into a manual one.
Some versions have been built with a simple webcam, removing the
Raspberry Pi, motors, and motor controller, and using the manual
flexure stage.
However, the project's primary focus is automated functionality, and
simpler, purpose-built manual projects may be a better fit for users
who don't need or want automation. As one user on the OpenFlexure
forum put it, the manual stage can be "too slow" for
lower-magnification viewing.

Smaller or less-professional open hardware microscope projects
Several open hardware projects cater to educational, hobbyist, and
simple field-based applications, prioritizing low cost, ease of use,
and quick assembly.
Foldscope

Description: An extremely low-cost ($1 in parts), origami-based paper
microscope that simplifies all the traditional microscope components.
Pros: Highly portable, very affordable, and easy to assemble and use.
It has been used for educational outreach and basic field diagnostics.
Cons: Entirely manual, lacking any automation or electronic
components. The plastic or glass ball lens can have significant
optical aberrations, and focusing is done by bending the paper.

Luduscope

Description: An interactive, smartphone-based microscope built with
3D-printed parts. It is designed specifically for life-science
education.
Pros: Turns a smartphone into a digital microscope, offering an
affordable way for students to capture and share images and videos of
microscopic subjects.
Cons: Relies heavily on the camera of the user's smartphone, and its
magnification and quality are limited by the phone's lens and sensor.

MatchboxScope

Description: A mini-microscope project that fits inside a matchbox,
costing under 10 Euros. It uses an ESP32 microcontroller to capture
images.
Pros: Extremely small and portable, with a very low build cost.
Cons: Magnification and functionality are very limited compared to
more complex setups.

PlanktonScope

Description: A microscope designed for citizen scientists to image sea
plankton, using 3D-printed parts and a simplified optical system with
two reversed M12 lenses.
Pros: Built with a specific, fun use-case in mind, and includes some
motorized functionality and custom LED lighting.
Cons: While automated, it is designed for a specific purpose and
requires some expertise to build and operate.

Simple DIY setups

Description: Numerous online guides exist for making a basic
microscope using simple household materials. One YouTube project, for
instance, shows how to create a microscope using a clear plastic
bottle, a magnifying glass, and other basic supplies. Another uses a
laser pointer lens, a smartphone, and a small stand to create a
simple, high-magnification device.
Pros: Extremely inexpensive and can be made in minutes with common materials.
Cons: Performance is very low and offers minimal control over focus
and positioning.

Summary: Choosing a project
Mechanics
OpenFlexure (manual): Manual control of a high-precision 3D-printed flexure stage.
Foldscope: Hand-held, origami-based paper design.
Luduscope: Smartphone-based stage with manual or simple motorized control.

Optics
OpenFlexure (manual): Modular; can use standard high-quality objectives or low-cost camera lenses.
Foldscope: Simple ball lens with limited optical quality.
Luduscope: Uses the camera and lens of a smartphone.

Electronics
OpenFlexure (manual): Digital capture via Raspberry Pi and Pi camera, even in the manual version.
Foldscope: Simple LED and battery for illumination.
Luduscope: Requires a smartphone and potentially a microcontroller.

Best for
OpenFlexure (manual): Users who want the build quality of OpenFlexure without the complexity of full automation.
Foldscope: Education, field science, and introductory hobbyist use.
Luduscope: Students and hobbyists who want a cheap, easy entry into digital microscopy.


is there any project convenient and precise for both manual and
automatic use? are any GPL?
Project with both manual and automatic control: PUMA Microscope
The Portable, Upgradeable, Modular, Affordable (PUMA) microscope is a
robust option designed for advanced microscopy that provides both
manual and automated control. The project's creator, Dr. Paul J.
Tadrous, aims for high precision and quality in a low-cost, 3D-printed
form.

Dual control: PUMA is designed for dual-use. The base can function as
a fully portable, manually-operated direct vision microscope. However,
it is also being developed with an automated, robotic XYZ stage to
integrate with a separate motor control system.
High-quality optics: PUMA stands out from many hobbyist projects by
offering professional illumination options, including Abbe and Köhler
illumination, which provide even, controllable lighting.
Portability: The microscope is designed to be fully portable and can
even function without an external power source or computer.
Documentation and software: The project is well-documented with videos
and editable FreeCAD design files. The software for automation, called
PARDUS, is being developed to work with the robotic stage.

OpenFlexure dual control
While the OpenFlexure is primarily known for its automated
capabilities, it can be configured for manual use with some
adjustments. However, it was not originally designed for dual-mode
operation, and the manual control mechanisms are not as precise or
elegant as the motorized ones.

The manual version replaces the motors with manual thumbwheels, but
the precision control is still a key aspect of the design.
The OpenFlexure project prioritizes network automation and
software-driven control, so the manual operation is a secondary
configuration rather than a seamless feature.

GPL licensed projects
Several open hardware and software projects related to microscopy are
distributed under GPL licenses.
PUMA Microscope

Hardware license: The PUMA microscope hardware is licensed under GPL
v3, and all 3D printable files are available on its GitHub page.

Python-Microscope

Software license: python-microscope, a Python library for controlling
microscope devices and automating experiments, is licensed
GPL-3.0-only. It is a software project, but it is integral to
controlling automated open hardware microscopes.

GNU General Public License

How it works: As a copyleft license, the GPL requires that any
derivative work (i.e., modifications or distributions of the hardware
or software) be released under the same license terms. This helps
ensure that the project and its derivatives remain open and
accessible.

Comparison of manual/automatic options
Convenience of use
PUMA Microscope: Designed with dual modes in mind; manual use is a built-in feature for portability, while automation is an upgrade path.
OpenFlexure (dual mode): Primarily designed for automation; manual control is a less-used configuration and not as integrated or convenient.

Mechanical precision
PUMA Microscope: Professional-quality optics and advanced design features such as Köhler illumination; the automated stage is still in development.
OpenFlexure (dual mode): A monolithic flexure stage gives high-precision, sub-micron automated movement, with manual options available.

GPL license
PUMA Microscope: Licensed under GPL v3 for both its hardware designs and software.
OpenFlexure (dual mode): Hardware is released under the CERN Open Hardware Licence (a reciprocal hardware licence, not the GPL); some software components are GPL.

Suitability
PUMA Microscope: Excellent if you need a microscope that can be used manually in the field and later upgraded to a precise, automated bench setup.
OpenFlexure (dual mode): Best if you primarily plan an automated, digital workflow; manual use is an acceptable alternative but not the main focus.


is it reasonable to perform DIY non-optical tomography or is it mostly
still for labs?
For most non-optical modalities, DIY tomography remains extremely
challenging and is primarily confined to advanced laboratory settings,
even for open-source projects. While building a functioning
non-optical imaging system is possible,
achieving high-quality 3D reconstructions comparable to commercial or
lab-grade equipment is very difficult and potentially hazardous. The
main obstacles are the complexity of the hardware, software, and the
physical principles involved.
Types of non-optical tomography for DIY projects
X-ray Computed Tomography (CT)

Feasibility: Highly challenging and dangerous for DIY enthusiasts.
Hardware: Requires a high-voltage power supply, an X-ray vacuum tube,
and a detector, which are expensive and potentially hazardous
components. Projects on Hackaday and Instructables have documented
building simple, low-power CT machines, but these are for educational
purposes and carry significant safety risks related to ionizing
radiation.
Safety: The primary barrier is safety. A home environment lacks the
lead-shielded rooms, calibrated exposure controls, and regulation
required to prevent dangerous radiation exposure to the operator and
others.
Outcome: Results are generally low-resolution and not medically or
scientifically useful, but can demonstrate the principles of CT
scanning.

Ultrasound Tomography

Feasibility: Possible for educational and basic experimentation, but
not for high-resolution imaging.
Hardware: DIY projects, often documented on sites like Hackster.io,
use affordable components like Arduino microcontrollers and simple
transducers (like those found in car parking sensors or paint
thickness gauges) to create a low-resolution scanner.
Functionality: These systems demonstrate the basic principles of
ultrasound imaging and create very low-resolution, blurry scans. They
are unsuitable for serious imaging applications but are useful for
learning about physics and electronics.
Phantoms: Most DIY ultrasound projects focus on creating phantoms
(training models made from gelatin, psyllium, or ballistic gel) rather
than building a high-performance imaging device.
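The principle these DIY ultrasound scanners demonstrate is pulse-echo ranging: depth is half the round-trip time multiplied by the speed of sound. A sketch (illustrative; the function name is made up):

```python
def echo_depth_mm(round_trip_s, c_m_per_s=1540.0):
    """Pulse-echo ranging: the transducer's ping travels to the target
    and back, so depth = c * t / 2. 1540 m/s is the conventional
    soft-tissue average; use ~343 m/s for experiments in air."""
    return c_m_per_s * round_trip_s / 2.0 * 1000.0
```

For example, a 65 microsecond round trip in soft tissue corresponds to a reflector about 50 mm deep; an imaging system repeats this for many beam angles and assembles the echoes into a picture.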

Electrical Impedance Tomography (EIT)

Feasibility: More accessible for DIY than X-ray or high-end ultrasound.
Principle: EIT measures electrical impedance (resistance) inside a
volume by applying small currents and voltages from multiple points on
the surface.
Hardware: Can be built with readily available components, including
electrodes, a microcontroller, and some basic electronics.
Application: Useful for imaging soft tissue, though with lower
resolution than optical or X-ray methods. Open-source research
projects in this area are primarily lab-based but offer a pathway for
determined DIY builders.
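The measurement scheme behind most simple EIT builds is the classic adjacent (neighbouring) protocol: drive a small current through each adjacent electrode pair in turn, and read the voltage on every adjacent pair not involved in driving. A sketch of the pattern enumeration (illustrative; a real system adds multiplexing, a current source, and calibration):

```python
def adjacent_patterns(n_electrodes=16):
    """Enumerate (drive_pair, measure_pair) combinations for the
    adjacent EIT protocol on a ring of electrodes."""
    patterns = []
    for d in range(n_electrodes):
        drive = (d, (d + 1) % n_electrodes)
        for m in range(n_electrodes):
            meas = (m, (m + 1) % n_electrodes)
            if set(meas) & set(drive):
                continue  # skip pairs sharing a driven electrode
            patterns.append((drive, meas))
    return patterns
```

With the usual 16-electrode ring this yields 208 measurements per frame, which the reconstruction software then inverts into a conductivity image.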

Comparison to optical tomography
Optical tomography, while often performed in a lab setting, is
comparatively more reasonable for DIY projects due to its use of
visible light and less dangerous hardware. For example, the
Kitchen-Based Light Tomography (KBLT) toolkit is specifically designed
as a DIY educational tool that mimics the hardware setup of advanced
tomography using simple visible light sources and cameras. This makes
it an ideal entry point for understanding the concepts of tomography
without the inherent dangers of X-rays or the complexity of high-end
ultrasound.
Is DIY non-optical tomography reasonable?
Hardware
Non-optical (e.g., X-ray, advanced ultrasound): Complex, specialized, and often expensive components (X-ray tubes, advanced transducers).
Optical (e.g., KBLT): Can be built with simple, off-the-shelf parts (webcam, flashlight).

Safety
Non-optical: Significant risks, particularly with X-rays; requires extensive training and safety protocols.
Optical: Minimal risk, as it uses visible light; safe for educational and home use.

Resolution
Non-optical: Low-cost projects yield low-quality, blurry results, far from medical or research standards.
Optical: Low-resolution, but sufficient for demonstrating the principles of tomography.

Learning curve
Non-optical: Steep, due to advanced physics, signal processing, and reconstruction algorithms.
Optical: Simpler and safer entry point for understanding tomography principles.
Conclusion: For most hobbyists, building a non-optical tomographic
system is not a reasonable undertaking, mainly due to safety concerns
and hardware complexity. It is almost exclusively the domain of
research labs. A far more accessible and reasonable approach for a DIY
enthusiast would be to explore an open-source optical tomography
project like KBLT, which provides a safe and educational way to
understand the core principles.


what are the safest, easiest, or simplest approaches to imaging the
junctions of an old low-end microchip?
The safest, easiest, and simplest approaches for a hobbyist to image
the junctions of an old, low-end microchip involve
careful physical preparation followed by high-magnification optical
microscopy. This process avoids dangerous chemicals and expensive lab
equipment like Scanning Electron Microscopes (SEMs).
Here is a breakdown of the process and the necessary tools:
Step 1: Delidding the chip safely
Most microchips are encased in epoxy, and the most dangerous step is
removing this protective casing, known as "decapping."

Method: Mechanical Delidding: Instead of using dangerous, concentrated
acids, the safest method is mechanical removal.

Use a heat gun to soften the epoxy casing. Some chips will require
temperatures between 250 and 500°F (120–260°C).
Use a Dremel tool with a small bit to carefully grind away the top of
the casing, creating a pocket over the chip's die.
Alternatively, you can grind the chip from the underside, which may
leave the top bonding wires intact, making the internal layout easier
to see.
Caution: This method can damage the delicate internal bond wires,
making the chip inoperable, but the exposed die remains suitable for
visual analysis.

Safety Precautions:

Work in a well-ventilated area to avoid inhaling any fumes from the
heated epoxy.
Wear safety goggles and gloves.
Secure the chip with a vice or clamp so you can use both hands for the Dremel.

Step 2: Preparing for imaging
After the die is exposed, you will need to prepare it for
high-magnification viewing.

Cleaning: Use a solvent like acetone to carefully clean any remaining
epoxy residue from the die. A cotton swab or microfiber cloth can help
with this.
Stitching: Because the field of view under a microscope is very small,
you will need to take multiple overlapping photos and stitch them
together into a single, high-resolution image. Software like Microsoft
Image Composite Editor (ICE) or the open-source Hugin can do this.
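To illustrate what stitching software does under the hood, here is a
minimal phase-correlation sketch in Python (NumPy only). The function
name and the synthetic "die" image are illustrative inventions, not
part of Hugin or ICE; real stitchers add feature matching, blending,
and global alignment on top of this idea.

```python
import numpy as np

def find_tile_shift(tile_a, tile_b):
    """Phase correlation: return (dy, dx) such that
    tile_b ~= np.roll(tile_a, (dy, dx), axis=(0, 1))."""
    cross = np.conj(np.fft.fft2(tile_a)) * np.fft.fft2(tile_b)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap large positive indices around to negative shifts.
    if dy > tile_a.shape[0] // 2:
        dy -= tile_a.shape[0]
    if dx > tile_a.shape[1] // 2:
        dx -= tile_a.shape[1]
    return int(dy), int(dx)

# Demo: two synthetic overlapping "die photos" offset by a known amount.
rng = np.random.default_rng(0)
die = rng.random((64, 64))
shift = find_tile_shift(die, np.roll(die, (5, -7), axis=(0, 1)))
print(shift)  # (5, -7)
```

Once the pairwise offsets are known, each tile is pasted into a large
canvas at its estimated position; the overlap regions are what make
the offset estimation possible.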

Step 3: High-magnification optical imaging
Once the chip is prepared, you can use a high-magnification microscope
to capture images of its junctions.

Required Magnification: The scale of the junctions on a low-end
microchip is typically in the micron range, requiring a microscope
with at least 100x magnification. An inexpensive digital or standard
compound microscope can work, though a dedicated electronics
microscope is ideal.
Open Hardware Approach (OpenFlexure): You can use a robotic microscope
like the OpenFlexure for a precise, automated scanning and stitching
process.

Fit the OpenFlexure with a high-resolution objective lens.
Use the automated stage to scan the exposed die systematically,
capturing a grid of images.
Use image stitching software (like Fiji/ImageJ or Hugin) to combine
the photos into a single, high-resolution panorama.
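The grid-scan step above can be sketched as a simple position
generator. This is an illustrative sketch, not the OpenFlexure
software's actual API (which exposes tile scans through its own
interface); the parameter names and serpentine ordering are my own
choices.

```python
def scan_grid(x0, y0, fov, overlap, nx, ny):
    """Yield (x, y) stage targets covering an nx-by-ny tile grid.

    fov     -- field of view per tile (same units as stage coordinates)
    overlap -- fraction of each tile shared with its neighbour, which
               the stitching software later uses to align tiles
    Tiles are visited in serpentine order to minimise stage travel.
    """
    step = fov * (1.0 - overlap)
    for row in range(ny):
        cols = range(nx) if row % 2 == 0 else reversed(range(nx))
        for col in cols:
            yield (x0 + col * step, y0 + row * step)

positions = list(scan_grid(0.0, 0.0, fov=100.0, overlap=0.25, nx=3, ny=2))
print(positions)
# Row 0 left to right, row 1 right to left (serpentine):
# [(0.0, 0.0), (75.0, 0.0), (150.0, 0.0),
#  (150.0, 75.0), (75.0, 75.0), (0.0, 75.0)]
```

In a real run, each yielded position would be sent to the stage, the
camera triggered, and the resulting tiles handed to Fiji/ImageJ or
Hugin for stitching.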

Simple Microscope Setup: For a less professional approach, a basic
digital microscope or a smartphone-based microscope can be used.

Smartphone + Microscope Attachment: A simple smartphone accessory with
a powerful lens can get you some magnification, though image quality
will vary. Some versions can reach a resolution of 1 µm with a wider
field of view.
Basic Digital Microscope: Inexpensive digital microscopes provide a
live feed to your computer, making it easy to capture images. This
will require manual positioning and stitching.

Summary: Comparison of imaging methods
Mechanical delidding + OpenFlexure
Safety: High (avoids dangerous chemicals). Ease: Medium (requires
careful grinding, but the OpenFlexure automates the imaging). Best
for: creating high-resolution, stitched, full-die images for detailed
analysis.

Mechanical delidding + simple digital microscope
Safety: High (avoids dangerous chemicals). Ease: Easy (simple to use,
but requires manual labor for scanning and stitching). Best for:
quick, manual exploration of the chip surface without the need for
precise automation.

Mechanical delidding + smartphone microscope
Safety: High (avoids dangerous chemicals). Ease: Easiest (use your
phone as the camera for a highly accessible, portable setup). Best
for: basic, low-magnification inspection or quick capture of surface
features.

Acid delidding
Safety: Very low (extremely hazardous; not recommended for
hobbyists). Ease: Difficult (requires specialized chemicals, safety
gear, and expertise). Best for: professional-level reverse
engineering with high precision.


what approaches would you list or imagine for non-destructive high
precision reverse engineering? do any of these have existing success
with public documentation?
Non-destructive approaches for high-precision reverse engineering
Non-destructive high-precision reverse engineering of microchips is a
specialized field largely confined to research institutions, large
corporations, and government agencies. It typically relies on advanced
imaging and measurement techniques that are extremely expensive and
require a deep understanding of materials science and electrical
engineering. While most approaches are not suitable for DIY, several
have been publicly documented and shown success at a professional
level.
X-ray ptychographic laminography

Methodology: This advanced imaging technique uses coherent X-rays to
generate 3D images of a chip's internal structure with nanometer-level
resolution. The technique works by scanning the sample from multiple
angles and recording the diffraction patterns, which are then used to
reconstruct the chip's layers computationally.
Documentation: This technique was pioneered by researchers at the
University of Southern California and the Paul Scherrer Institute in
Switzerland, with their work published in IEEE Spectrum and other
public sources. The technique is explicitly noted as a way to perform
non-destructive reverse engineering.
Feasibility: Due to the need for advanced equipment such as
synchrotron X-ray sources, this technique is not feasible for DIY or
hobbyist use.

Scanning acoustic microscopy (SAM)

Methodology: SAM uses high-frequency ultrasound waves (typically 1–50
MHz) to detect internal anomalies like delamination, voids, or cracks
within a packaged IC. Sound waves are reflected at interfaces where
there is a change in acoustic impedance, such as the boundary between
different materials or an air gap.
Documentation: This technique has been used for decades in failure
analysis within the microelectronics industry and is well-documented
in academic papers and industry resources.
Feasibility: Commercial SAM systems, often with C-mode scanning
acoustic microscopy (C-SAM) for creating 2D images of specific depths,
are available but are prohibitively expensive for hobbyists.
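The contrast mechanism behind SAM can be quantified with the pressure
reflection coefficient R = (Z2 - Z1)/(Z2 + Z1) at each material
interface. The impedance figures below are rough textbook ballpark
values, used only for illustration:

```python
def reflection_coeff(z1, z2):
    """Pressure reflection coefficient at an interface from medium 1 to 2."""
    return (z2 - z1) / (z2 + z1)

# Approximate acoustic impedances in MRayl (rough textbook values).
Z_EPOXY, Z_SILICON, Z_AIR = 3.0, 20.0, 0.0004

print(round(reflection_coeff(Z_EPOXY, Z_SILICON), 2))  # 0.74: strong echo
print(round(reflection_coeff(Z_EPOXY, Z_AIR), 2))      # -1.0: an air gap
# (delamination) reflects essentially everything, which is why SAM is
# so sensitive to voids and delamination inside a package.
```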

Lock-in thermography (LiT)

Methodology: LiT uses an infrared thermal imaging camera synchronized
with a pulsed power supply to measure the surface temperature of an
IC. This technique enhances sensitivity and can be used to locate hot
spots caused by defects like short circuits in packaged devices. By
analyzing the phase delay of the thermal waves, it can even estimate
the depth of the fault within a 3D structure.
Documentation: This is a standard technique in failure analysis and
has been documented in technical papers and application notes from
companies like Tektronix and Intraspec Technologies.
Feasibility: This is potentially one of the more accessible methods
for an advanced hobbyist. A basic setup could involve an infrared
thermal camera, a pulsed power supply, and software to perform the
"lock-in" detection. However, achieving the high precision required
for IC junctions still requires specialized, high-resolution thermal
cameras.
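The "lock-in" step itself is ordinary synchronous demodulation and can
be prototyped in a few lines. The trace below is synthetic and the
numbers (5 mK response, 1 Hz excitation) are purely illustrative:

```python
import numpy as np

def lock_in(signal, t, f_ref):
    """Synchronous (lock-in) demodulation: recover the amplitude and
    phase of the component of `signal` at the reference frequency
    by multiplying with quadrature references and averaging."""
    i = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))
    q = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))
    return np.hypot(i, q), np.arctan2(-q, i)

# Synthetic pixel trace: a 0.005-unit thermal response at the 1 Hz
# excitation frequency, buried in noise ten times larger.
rng = np.random.default_rng(1)
t = np.arange(0.0, 60.0, 0.01)                  # 60 s at 100 samples/s
response = 0.005 * np.cos(2 * np.pi * 1.0 * t + 0.4)
trace = response + 0.05 * rng.standard_normal(t.size)

amp, phase = lock_in(trace, t, 1.0)
# Recovers roughly the injected 0.005 amplitude and 0.4 rad phase
# despite the noise; the phase carries the depth information.
```

Averaging over many excitation periods is what rejects the noise; real
LiT systems do this per pixel of the thermal camera.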

Terahertz (THz) imaging

Methodology: THz waves can penetrate many non-conductive packaging
materials and can be used for 3D imaging of internal structures. This
method can detect defects like delamination, voids, and even perform
functional testing by observing how the device responds to THz
radiation.
Documentation: This is an active research area, and many papers have
been published documenting its use in non-destructive evaluation and
failure analysis.
Feasibility: Requires very specialized equipment for generating and
detecting THz radiation, making it currently inaccessible for
hobbyists.

Non-destructive imaging using excitonic materials

Methodology: Researchers at the University of Michigan have developed
a technique that leverages the photophysical properties of excitonic
materials to non-destructively image integrated circuits. By coating a
chip with this material and observing its fluorescence properties
under illumination, the topography and structure of buried
interconnects can be mapped.
Documentation: This is a relatively new and specialized research
technique, but it has been publicly announced and documented by the
university.
Feasibility: This is a research-level technique that uses highly
specialized materials and imaging equipment, making it unfeasible for
DIY implementation.

Summary
For hobbyist reverse engineering of ICs, non-destructive,
high-precision methods are mostly out of reach. These methods require
access to highly specialized, expensive equipment found in industrial
or research labs. The most practical approach for hobbyists remains
the careful mechanical decapping of older chips combined with
high-magnification optical microscopy.
While some techniques like lock-in thermography are conceptually
simpler and might be attempted in a rudimentary way, they still face
significant barriers in terms of precision, equipment cost, and
expertise required to achieve meaningful results at the micron or
sub-micron level.


are there businesses that provide any of these such as x-ray or
acoustic imaging, that would work for an old chip from the 90s? how
much might it cost?
Yes, there are businesses that provide non-destructive inspection
services like X-ray computed tomography (CT) and scanning acoustic
microscopy (SAM). These services are commonly used in the electronics
industry for failure analysis, quality assurance, and, in some cases,
reverse engineering. A chip from the 1990s would be suitable for these
types of analysis.
Here's a breakdown of the services and potential costs:
Failure analysis and testing services
These companies cater to the electronics industry and offer a range of
services beyond simple imaging. They can handle an old chip but may
charge extra for a one-off project or a smaller volume of parts.

EAG Laboratories: A major player in the semiconductor industry for
failure analysis, they offer scanning acoustic microscopy (SAM)
services and other advanced material analysis techniques.
Covalent Metrology: This company provides high-resolution SAM services
for visualizing internal structures and detecting defects without
damage.
Sage Analytical Lab: This lab offers X-ray CT and SAM services
specifically for failure analysis and reverse engineering of printed
circuit boards and integrated circuits (ICs).
Delphi Precision Imaging: Offers industrial CT scanning for
electronics and ICs, which is suitable for non-destructive reverse
engineering.

Cost estimation
The cost for a single-chip analysis can be substantial, as these
companies typically serve large corporate clients with budgets for R&D
and failure analysis. Pricing often depends on the level of detail
required, the number of samples, and the time needed to set up and
perform the scan.

Low-end estimate (tens to hundreds of dollars): For a single, basic
scan, you might find a university lab that charges on a per-hour
basis. Scanning Electron Microscope (SEM) time, for instance, can cost
under $100 per hour at some universities. However, this typically
doesn't include the required expertise for setup, interpretation, or
specialized preparation.
Realistic estimate (hundreds to thousands of dollars): A private
commercial lab will likely charge a minimum fee in the hundreds to
thousands of dollars for a single job. A quote from a specialized
failure analysis lab is needed for an accurate price.
High-end estimate (over $5,000): Complex 3D CT scanning, which
involves more data processing and a longer machine time, would be
significantly more expensive. Full reverse engineering jobs that
require advanced techniques and interpretation can run into the tens
of thousands of dollars.

Considerations for a hobbyist

Access: Gaining access to such services can be difficult for a single
individual. Your best bet would be to contact a university or a
smaller lab with an explicit interest in hobbyist projects or
historical microelectronics.
Project scope: Clearly define the objective of your imaging. Do you
need a single 2D X-ray to check for internal defects, a 3D CT scan for
a full model, or a SAM scan for delamination? Having a clear goal can
help reduce costs and guide the conversation with a lab.
Alternatives: As discussed previously, mechanical delidding followed
by a high-resolution optical microscope scan (e.g., with an
OpenFlexure) remains the most practical and accessible method for a
hobbyist seeking detailed images of a 1990s-era microchip. The cost
and technical hurdles are significantly lower, and the results can be
excellent for reverse engineering older, larger-featured chips.


could an imaging service intended for a different use be mispurposed
for this? what makerspaces, universities, or libraries have imaging
services or hardware a member not in a degree program could use?
Using mispurposed imaging services for microchip analysis
Yes, an imaging service intended for another use could potentially be
mispurposed for microchip analysis, particularly for X-ray computed
tomography (CT). The core technology is similar across various
applications, but success depends on the specific machine's
resolution, the chip's materials, and the service provider's
willingness to work with non-standard items.

X-ray CT for Archaeology/Paleontology: Some university labs use
micro-CT scanners for imaging fossils, artifacts, or bones. These
scanners are high-resolution and designed to image objects of a
similar size to a microchip. The challenge would be getting a
non-student project approved, as these facilities prioritize academic
research.
Industrial Metrology CT Scanning: Industrial CT scanning services,
used for quality control in manufacturing, are probably the closest
and most accessible commercial option. They are designed to inspect
the internal geometry of small components, and a microchip would be a
standard item for them to scan. Examples of such businesses include
Delphi Precision Imaging and Haven Metrology Service.
Medical CT Scanning: This is generally unsuitable. Medical CT scanners
are designed for much larger objects (humans) and operate at lower
resolutions than required for microchip features. The beam also has a
lower energy level, which may not penetrate the packaging and denser
materials within the chip.

Accessing imaging services as a non-degree member
Accessing specialized hardware in makerspaces, universities, or
libraries is challenging for individuals not enrolled in a degree
program, particularly for high-end equipment.
Makerspaces

Access: While makerspaces are designed for general member use, they
typically do not have industrial-grade X-ray CT or SAM equipment.
Exception: Some university-affiliated makerspaces, like the NYU
MakerSpace, might have a micro-CT scanner and provide access with
training and a project fee.
Availability: Makerspaces more commonly feature 3D printers, laser
cutters, and basic electronics labs. They may, however, provide access
to high-quality optical microscopes.

Universities

External use policies: Many universities have official policies for
external or non-affiliated use of research facilities, but access is
usually difficult for hobbyists.
Process: Accessing university labs requires a formal written request
to the department chair or lab manager, submission of paperwork, and
payment of user fees. Your project must also not interfere with
regular academic programs and may require specific safety training.
Collaboration: Your best approach is to find a professor or researcher
whose work aligns with your project and propose a collaborative or
paid-for service.
Examples: The University of Washington's X-ray Computed Tomography
Facility is an example of a university facility that supports
interdisciplinary research and potentially external use.

Libraries

Accessibility: Library makerspaces are typically the most accessible
but have the most limited hardware. They are unlikely to have the
advanced imaging equipment needed for microchip reverse engineering.
Offerings: Services are usually limited to consumer-grade 3D scanners,
3D printers, and general electronics tools.

Summary: Options for non-destructive imaging
Industrial metrology lab
Ease of access: Low (requires formal engagement with a business).
Cost: High (often a minimum fee in the hundreds or thousands of
dollars). Resolution: High (designed for inspecting industrial parts,
including electronics).

University lab (external user)
Ease of access: Very low (requires permission from a department and
is subject to academic priorities). Cost: Medium to high (fee-based
access, but potentially cheaper than a commercial lab). Resolution:
High (micro-CT scanners are capable of high-resolution imaging).

University makerspace
Ease of access: Medium (easier than a lab, but requires training and
fees). Cost: Low to medium (generally cheaper than research labs).
Resolution: High (some university makerspaces have high-end CT
scanners).

Library makerspace
Ease of access: High (easiest to access with a library card). Cost:
Very low (often free or very low cost). Resolution: Very low (not
equipped for high-resolution microchip imaging).

DIY destructive analysis
Ease of access: High (accessible to hobbyists). Cost: Low (cost of
tools and an open-source microscope kit). Resolution: Medium to high
(optical resolution depends on the microscope and lens).


which university makerspaces would provide a path for a hobbyist to
image an object
Pathways for a hobbyist to image an object at a university makerspace
While most university makerspaces do not offer the highly specialized
equipment required for non-destructive microchip analysis (such as
micro-CT or SAM), there are several pathways a hobbyist might explore,
particularly for imaging small objects with high-resolution optical
microscopes or other less exotic methods.
1. University makerspaces with external access
Some universities offer programs or memberships for community members
and alumni to use their facilities, typically for a fee. These
programs are most likely to provide access to standard makerspace
tools (3D printers, laser cutters, electronics benches) and may have
high-end optical microscopes. The best way to find these opportunities
is to contact the university's makerspace directly.
Example programs and facilities:

University of Washington (UW): Some UW facilities, like the GIX
Prototyping Labs, have industrial-grade equipment and offer community
memberships.
University of Michigan: Has a large and diverse makerspace program,
but access for non-students may vary by department.
Moreno Valley College (MVC): The iMAKE STEM Innovation Center is
explicitly inclusive of community members.

2. Specialized academic departments and core facilities
Departments with research labs for materials science, mechanical
engineering, or biomedical engineering are where high-resolution
imaging equipment is located. These facilities are generally not part
of makerspaces but can sometimes be accessed by non-affiliated
individuals.
Strategy:

Identify relevant departments: Search the university website for
"high-resolution imaging facility," "materials analysis lab," or "core
microscopy facility."
Contact the lab manager: Send a concise, polite email explaining your
project, its non-commercial, hobbyist nature, and your willingness to
pay for staff time and equipment usage.
Offer collaboration: Frame your project as an interesting research
opportunity, especially if the chip is historically significant.

3. University libraries with technology resources
Many university libraries operate makerspaces or "tech hubs" that are
more accessible than a research lab. While they won't have CT
scanners, they often have high-quality optical and digital
microscopes, 3D scanners, and advanced computing resources for image
processing.
Potential resources:

Advanced digital microscopes: Libraries sometimes acquire high-quality
digital microscopes for use with museum specimens or manuscripts,
which could be adapted for a small electronic object.
High-resolution 3D scanning: Some libraries have advanced 3D scanners
that can capture the surface geometry of an object with high
precision, which could be useful for initial reverse engineering
steps.
Image processing software: Libraries often provide access to powerful
image-processing and stitching software like Adobe products or
open-source alternatives like Fiji/ImageJ on their computers.

4. University clubs and student organizations
Some university clubs related to electrical engineering, robotics, or
computer science might have equipment that could be useful. Building a
connection with a student group could be a more informal way to access
resources.
Approach:

Search for relevant clubs: Look for student organization directories
on the university website.
Attend events: Participate in public-facing events or open houses to
meet members and learn about their projects and resources.
Offer expertise: If you have skills that could benefit the club, offer
to volunteer in exchange for some access.

How to maximize your chances

Network: Attend university open houses, seminars, and other public
events. Networking with faculty and students can build trust and open
doors that an official request cannot.
Be specific: Have a very clear idea of what you need. Instead of "I
need to image a microchip," say, "I need to capture a high-resolution,
stitched, panoramic optical image of a mechanically delidded microchip
die."
Start with optical: Focus on optical microscopy first. It is the most
accessible high-resolution imaging technique available to non-academic
users. Many university labs and makerspaces will have excellent
optical microscopes.
Be prepared to pay: Access to university equipment is rarely free for
community members. Be prepared to cover material and usage fees, and
potentially a technician's time.


I’m looking briefly and it looked to me like the openuc2 would be more
conducive to tomography than the openflexure. it looked like optij was
a software rather than hardware project. do you know what existing
work there is around open hardware optical tomography? can you see
within some objects accurately with the right wavelengths or
processing?
OpenUC2 versus OpenFlexure for optical tomography
Your assessment that OpenUC2 is more conducive to optical tomography
than OpenFlexure is correct.
Modularity: OpenUC2 is higher. The cube-based system is designed for
maximum reconfigurability: you can easily add, remove, and reposition
the optical and mechanical components needed for tomographic methods
like optical projection tomography (OPT). OpenFlexure is lower by
comparison: while modular, it is built around a more monolithic
flexure stage, highly effective for its intended automated microscopy
but less flexible for reconfiguring the optical path for tomography.
Stage control: OpenUC2 is more adaptable. UC2 uses more standard CNC
components for its stages, allowing easier integration of rotation
stages and the other motions tomography requires. OpenFlexure is less
adaptable for rotation: the flexure stage is highly optimized for
linear X, Y, and Z motion, and adding a rotation axis requires
significant modification to the base design.
Existing tomography work: OpenUC2 is more documented. Its modularity
has attracted several projects that adapt it for advanced imaging,
including super-resolution fluorescence microscopy, indicating a more
established community working on complex optical configurations.
OpenFlexure is less documented: it is widely used for standard and
automated microscopy but is less frequently featured in open-source
tomography projects.
OptiJ is a hardware and software project
OptiJ is not software-only: it is both a hardware and software
platform for optical projection tomography (OPT).

Hardware: OptiJ hardware is designed using off-the-shelf and
3D-printed parts, often based on or inspired by Flexscope designs. It
includes a rotation stage, a camera, a telecentric relay lens, and
specific illumination components.
Software: The OptiJ software is an ImageJ/Fiji plugin library for
tomographic reconstruction. It includes algorithms for calibration and
GPU-accelerated reconstruction.

Existing open hardware for optical tomography
Several projects and research efforts focus on open hardware for
optical tomography, often leveraging established platforms like UC2.

OPT plate for microscopes: One approach involves building an adapter
plate to add sample rotation to an existing inverted or upright
microscope, enabling optical projection tomography (OPT).
OpenSPIM/OpenSpin: These are more advanced platforms for Selective
Plane Illumination Microscopy (SPIM), which is a form of light-sheet
microscopy used for 3D imaging. SPIM is related to tomography as it
also builds a 3D volume by computationally assembling 2D slices.
Low-cost OCT: As highlighted in search results, projects to create
low-cost Optical Coherence Tomography (OCT) systems exist, often using
3D-printed parts for components like spectrometers. However, these are
highly specialized systems for a different type of optical tomography.

Can you see inside objects with optical tomography?
It is possible to "see inside" certain objects using optical
tomography, but with significant limitations related to light
scattering.

Optical Coherence Tomography (OCT): OCT works like a high-resolution
optical radar, using near-infrared light to capture cross-sectional
images. It can see into objects, but only to a shallow depth
(typically 1–3 mm) because light scatters intensely in most materials.
The resolution is very high, making it useful for imaging biological
tissues like the retina and skin.
Optical Projection Tomography (OPT): OPT requires the sample to be
made optically transparent, a process called "clearing". Once cleared,
the sample is rotated, and a camera captures images from the side.
Software then computationally reconstructs the 3D volume from these
projections. This is highly effective for cleared biological samples
but not suitable for opaque materials like microchips.
Wavelengths and Imaging Depth: Using longer wavelengths of light
(further into the near-infrared spectrum) can increase the penetration
depth of light into scattering materials, but only to a limited
extent. Different wavelengths will also provide different contrasts
based on the material's absorption properties.
Processing: Computational imaging techniques, including advanced
algorithms and AI, are crucial for modern optical tomography. They
help correct for issues like scattering and stitch together images to
reconstruct the 3D volume.

For hobbyist-level reverse engineering of a microchip, optical
tomography is not a practical solution for non-destructive, internal
imaging. The material is opaque, and even with optimal wavelengths and
processing, optical light cannot penetrate deep enough to resolve
internal features. Your initial approach of using destructive (but
safe) mechanical decapping combined with a high-resolution optical
microscope is still the most feasible and reliable method for your
project.


why would optical tomography not work for a chip when they are so
small, especially if partially decapped?
Optical tomography is unsuitable for imaging microchips, even
partially decapped ones, because
the light used for imaging cannot penetrate the silicon die to a
sufficient depth. The physical and optical properties of the
semiconductor material, not the overall size of the chip, are the
limiting factor.
The opacity of silicon to visible light

Bandgap energy: Silicon's bandgap is about 1.1 eV, so photons of
visible light (400–700 nm, roughly 1.8–3.1 eV) carry enough energy to
excite electrons across the gap and are absorbed very effectively;
this same absorption is the basis for how solar cells work.
High absorption and scattering: This strong interaction with visible
light means that photons are rapidly absorbed or scattered within the
first few micrometers of the silicon surface. The intricate structures
of a microchip lie beneath a thin, glass-like passivation layer, well
within this optically opaque region.
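The commonly quoted ~1.1 µm transparency threshold follows directly
from silicon's room-temperature bandgap of about 1.12 eV; a quick
check of the arithmetic:

```python
# Photon energy E = hc / wavelength; hc ~= 1239.84 eV*nm.
# Photons with energy below the bandgap cannot excite electrons
# across the gap, so silicon becomes transparent to them.
HC_EV_NM = 1239.84
SI_BANDGAP_EV = 1.12  # silicon at room temperature

cutoff_nm = HC_EV_NM / SI_BANDGAP_EV
print(round(cutoff_nm))  # 1107 -> the ~1.1 um threshold cited for silicon
```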

Near-infrared light and limited transparency
While silicon is opaque to visible light, it becomes more transparent
to longer wavelengths in the near-infrared (NIR) spectrum (above 1.1
micrometers). However, this transparency is limited and presents its
own challenges for imaging.

Absorption and scattering: Although scattering is reduced compared to
visible light, it is not eliminated. Impurities, defects, and the
multi-layered structure of a chip still cause significant scattering
and absorption, limiting how far the light can penetrate and return a
clear signal.
Imaging depth: For imaging techniques like Optical Coherence
Tomography (OCT), the depth of penetration is still limited, typically
to a few millimeters even in relatively translucent materials like
biological tissue. A dense, opaque, and highly reflective material
like a microchip presents an even greater obstacle.
Resolution trade-off: The use of longer wavelengths to improve
penetration comes at the cost of resolution. Because of the
diffraction limit, the resolution of an optical image is directly
related to the wavelength of the light used. Using infrared light
would provide a lower maximum resolution than a visible light
microscope, which is the opposite of what is needed for examining
micro-scale junctions.
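The trade-off can be made concrete with the Abbe limit, d = λ/(2·NA).
The NA of 0.9 below is an assumed value for a good dry objective, not
a figure from the text:

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit: smallest resolvable feature
    d = wavelength / (2 * NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Assumed NA 0.9 dry objective: green light vs silicon-penetrating NIR.
print(round(abbe_limit_nm(550, 0.9)))   # 306 nm with green light
print(round(abbe_limit_nm(1200, 0.9)))  # 667 nm in the NIR: deeper
                                        # penetration, coarser resolution
```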

The destructive-then-image approach
This is why the standard, accessible method for reverse engineering
microchips is to use a destructive process (delidding and potentially
further etching) to expose the layers of the chip.

Expose the layer of interest: The physical material above the target
layer must be removed.
Optical imaging: Once exposed, a visible light optical microscope can
be used to image that specific layer with high resolution.
Repeat: This process is repeated for each successive layer of the
chip, with images then compiled to reverse engineer the full 3D
structure.

Even a partially decapped chip still has layers of opaque silicon and
other materials obscuring the junctions of interest. Optical
tomography, regardless of wavelength or the small size of the chip,
cannot overcome the fundamental material properties that block light
penetration.


it sounds like an experienced researcher might be able to combine a
technique that can image above wavelength resolution, like
ptychography, with high-wavelength light, to image a chip
Your intuition about combining advanced techniques with longer
wavelengths to image microchips is correct, but only at a highly
specialized, research-lab level, not for a home setup. An experienced
researcher in a properly equipped lab can perform such feats, but it
relies on far more than just a typical microscope.

How a researcher would combine these techniques
The strategy involves using wavelengths that silicon becomes
transparent to (near-infrared, extreme ultraviolet, or X-ray) and then
employing a computational imaging technique like ptychography to
overcome the limitations of those wavelengths.

Select the appropriate wavelength. As previously discussed, visible
light is absorbed by silicon. However, silicon is relatively
transparent to light with wavelengths longer than 1.1 micrometers, in
the near-infrared (NIR) spectrum. For even higher penetration and
resolution, researchers turn to Extreme Ultraviolet (EUV) or X-rays.
Combine with ptychography. Ptychography is a computational imaging
technique that works differently from a traditional microscope.
Instead of relying on a physical lens to form an image, it uses an
algorithm to reconstruct an image from a series of overlapping
diffraction patterns. The technique is therefore less constrained by
the limitations of physical optics.
Use specialized light sources. A hobbyist cannot simply replace a
visible light bulb with an infrared or EUV source. Research-grade
implementations require highly stable and high-flux light sources,
such as synchrotron X-ray sources or high-harmonic generation (HHG)
sources for EUV. Some approaches use modified table-top setups, but
even these are far more advanced than typical lab equipment.
Process the data. The true power of these systems lies in the
sophisticated reconstruction algorithms that process the vast amount
of diffraction pattern data captured by the detector. This allows the
system to build up a high-resolution, complex image of the object's
interior.
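To make the reconstruction step concrete, here is a toy NumPy sketch of the kind of iterative update (ePIE-style) at the heart of ptychographic reconstruction. The function name, array sizes, and the simple normalization are illustrative assumptions, not production code:

```python
import numpy as np

def epie_update(obj, probe, shift, measured_amp, alpha=1.0):
    """One ePIE-style update: enforce the measured diffraction
    amplitude at one probe position, then correct the object
    estimate where the probe illuminated it. (Toy sketch.)"""
    y, x = shift
    h, w = probe.shape
    region = obj[y:y+h, x:x+w]
    exit_wave = probe * region
    # Propagate to the detector (far field = Fourier transform).
    F = np.fft.fft2(exit_wave)
    # Keep the computed phase, replace the modulus with the measurement.
    F = measured_amp * np.exp(1j * np.angle(F))
    revised = np.fft.ifft2(F)
    # Feed the correction back into the object estimate.
    obj[y:y+h, x:x+w] += (alpha * np.conj(probe)
                          / (np.abs(probe).max() ** 2)
                          * (revised - exit_wave))
    return obj
```

A full reconstruction repeats this update over many overlapping probe positions until the estimate converges; the overlap between positions is what makes the problem solvable.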

Is it feasible for a hobbyist?
No. This approach remains out of reach for a hobbyist, even an
experienced one, because of these factors:

Extreme cost: Access to synchrotron light sources or table-top EUV
sources is prohibitively expensive, costing millions of dollars for
the equipment and ongoing operational expenses.
Specialized materials: The process involves highly specialized
components, detectors, and materials that are not available
commercially for home use.
Safety risks: The use of X-rays and high-power lasers presents
significant health and safety risks that cannot be mitigated in a home
environment.
Complex software: The algorithms used are computationally intensive
and require specialized knowledge in optics, signal processing, and
material science to interpret the resulting images.

Publicly documented research successes
While these methods are not DIY-friendly, they have been successfully
documented in published research:

High-resolution X-ray ptychography: The Paul Scherrer Institute (PSI)
in Switzerland has publicly documented achieving a 3D view of a
microchip with a resolution of 4 nanometers using X-ray ptychography.
This involved using a synchrotron X-ray source.
Extreme Ultraviolet (EUV) ptychography: Research has shown that
table-top EUV ptychography systems can image features as small as 45
nm, demonstrating the technique's potential for semiconductor
inspection.
Infrared microscopy: Techniques combining infrared microscopy with
super-resolution processing have shown some success in imaging
features in and around silicon. However, the depth and resolution
capabilities remain inferior to X-ray methods for dense microchip
structures.

In short, an experienced researcher can combine these techniques to
image a chip non-destructively, but this is done at the extreme
forefront of scientific research, not within the realm of hobbyist
projects.


are you aware of modern ptychography or fourier imaging research?
could a dedicated hobbyist use infrared or near-infrared light to
image a chip? what would be needed?
Ptychography and Fourier imaging research
Modern ptychography and Fourier imaging research is a highly active
and rapidly advancing field at the intersection of optics,
computation, and materials science. Researchers are making progress on
multiple fronts:

Faster imaging: Advances in algorithms and hardware are making Fourier
Ptychographic Microscopy (FPM) faster, allowing for dynamic imaging of
live biological processes.
Deep learning integration: AI and deep learning are being integrated
with FPM to improve reconstruction quality, speed, and robustness in
the presence of noise and aberrations.
3D imaging: Researchers are developing methods to extend FPM into
three dimensions, though achieving isotropic resolution remains a
challenge.
Hardware and modality extensions: FPM is being adapted for various
hardware setups, such as using fiber optic arrays for illumination,
and is being applied to different imaging modalities beyond
traditional microscopy.

Infrared imaging of a chip by a dedicated hobbyist
It is possible for a dedicated hobbyist to perform near-infrared (NIR)
imaging of a chip. In fact, many digital camera sensors (like those
found in webcams and DSLRs) are sensitive to NIR light. However,
imaging a chip requires more specialized equipment and techniques than
general NIR photography.
What would be needed

Modified camera: Most consumer digital cameras have a built-in "hot
mirror" filter that blocks NIR light to prevent color distortion. You
would need to either remove this filter or use a camera specifically
designed for NIR imaging. Older DSLRs may have weaker filters or can
be professionally modified.
NIR light source: Because NIR imaging, like visible-light imaging,
relies on reflected or transmitted light, you would need a powerful,
consistent NIR light source. This could be an array of NIR LEDs or an
infrared floodlight. The illumination needs to be uniform and free of
speckle artifacts, which can interfere with imaging.
NIR bandpass filters: To isolate the NIR light and block unwanted
visible light, you would need specific bandpass filters that only
allow the desired NIR wavelengths to pass through. This is crucial for
achieving high-contrast images.
Specialized optics: Standard glass lenses are not optimized for
infrared and may introduce aberrations. High-quality NIR imaging
requires lenses designed specifically for NIR wavelengths.
Alternatively, some amateur projects use Cassegrain-style mirror
optics to avoid passing light through glass.
Microscope platform: To achieve the necessary magnification and
precision, you would need a stable and controllable platform. The
OpenFlexure microscope could be modified for this purpose by swapping
the visible-light optical path for NIR-compatible optics, including a
Cassegrain objective if needed.
Image analysis software: As with FPM, processing the raw images is
crucial. Software like Fiji/ImageJ or custom scripts are needed for
image acquisition, stitching, and contrast enhancement, as the
contrast of NIR images can be low.

Challenges for a hobbyist

Reflectance variations: Different materials within the chip's
structure reflect and transmit NIR light differently, which can
complicate image interpretation. Calibration is required to account
for these variations.
Resolution trade-offs: While NIR penetrates silicon, the longer
wavelength limits the maximum theoretical resolution. You would not
achieve the same level of detail as a visible light microscope on a
chemically delayered chip.
Limited availability and cost: High-quality NIR optics and specialized
cameras are expensive and not widely available to hobbyists.
Complexity: Building a high-quality NIR microscope is a complex and
time-consuming project. It requires a solid understanding of optics,
electronics, and software development.

What about Fourier ptychography for a hobbyist?
FPM is more accessible for a hobbyist than other advanced techniques
like X-ray ptychography. The hardware typically consists of a standard
microscope with a low-cost, off-the-shelf LED array added underneath
the sample. The real challenge lies in the sophisticated computational
reconstruction algorithms needed to recover the high-resolution image
from a series of low-resolution ones.
Combining these two approaches—infrared FPM—is a research-level
project. While theoretically possible, it would involve significant
challenges in instrumentation and computation. However, a dedicated
hobbyist could certainly attempt a simpler near-infrared microscope
without the full FPM capability, relying on basic optical imaging
through the silicon substrate.


it sounds like it makes sense to start with applying ptychographic
approaches to visible light and a transparent object. is anybody doing
this? do you need an lcd in front of your camera to craft a probe or
how does/would it work? is there public open software? would it work
with the openuc2 or other open hardware projects?
Applying Fourier ptychography to transparent objects
Your intuition to apply Fourier ptychography to visible light and
transparent objects is precisely what the technique was designed for.
Fourier Ptychographic Microscopy (FPM) excels at creating
high-contrast, high-resolution images of transparent specimens like
living cells without the need for chemical stains. It does so by
computationally recovering the phase information that is lost in
traditional brightfield microscopy.
How Fourier ptychography works with a programmable aperture
One common method for FPM, especially in research, is through
programmable aperture microscopy.

Instead of using an array of LEDs, a spatial light modulator (SLM),
such as an LCD screen, is placed in the microscope's illumination or
imaging path.
The LCD acts as a programmable condenser aperture, displaying
different patterns to selectively modulate the light distribution
reaching the specimen.
The setup captures a series of low-resolution images under different patterns.
An iterative phase retrieval algorithm then stitches these images
together in Fourier space to reconstruct a high-resolution image,
including both amplitude and phase information.
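The iterative stitching described above can be sketched in NumPy. This is a minimal, assumed form of one FPM sub-update (a binary pupil, one illumination angle); real implementations loop over many angles and usually refine the pupil function as well:

```python
import numpy as np

def fpm_update(spectrum, pupil_mask, shift, measured_lowres):
    """One FPM sub-update: cut the pupil-sized region out of the
    high-resolution Fourier spectrum at `shift`, enforce the measured
    low-resolution amplitude in real space, and write the revised
    region back. (Minimal sketch; pupil is a fixed binary mask.)"""
    cy, cx = shift
    h, w = pupil_mask.shape
    sub = spectrum[cy:cy+h, cx:cx+w] * pupil_mask
    # What the camera would see for this illumination angle.
    lowres = np.fft.ifft2(np.fft.ifftshift(sub))
    # Keep the computed phase, impose the measured amplitude.
    lowres = measured_lowres * np.exp(1j * np.angle(lowres))
    sub_new = np.fft.fftshift(np.fft.fft2(lowres)) * pupil_mask
    # Replace only the pupil-covered part of the spectrum.
    spectrum[cy:cy+h, cx:cx+w] = (spectrum[cy:cy+h, cx:cx+w]
                                  * (1 - pupil_mask) + sub_new)
    return spectrum
```

Iterating this over the shifted pupil positions corresponding to each LED angle is what synthesizes a larger effective aperture, and hence higher resolution, than the physical objective provides.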

Open software for Fourier ptychography
Yes, there is public, open-source software available for FPM
reconstruction. While the algorithms can be complex, several projects
have made them more accessible.

FPM app: Developed by researchers at the Warsaw University of
Technology, this is an open-source MATLAB application with a graphical
user interface (GUI) for FPM reconstruction. It is designed to be
simple and intuitive for non-experts.
PtyLab.m/py/jl: This is a cross-platform, open-source toolbox for both
conventional ptychography and FPM. It is available in MATLAB, Python,
and Julia, making it accessible to users with different programming
backgrounds. It also includes tutorials and test datasets.
Tian Lab at Boston University: This lab has provided open-source
MATLAB code for FPM reconstruction based on an algorithm developed by
its researchers.
Laura Waller Lab at UC Berkeley: This lab provides open-source code
for FPM reconstruction, including implementations of various iterative
phase retrieval algorithms.

Compatibility with open hardware
FPM is highly compatible with open hardware projects, as it is a
computational technique that can be adapted to many microscope setups.

OpenUC2: The modular nature of the OpenUC2 framework makes it an
excellent platform for building a custom FPM setup. You could easily
integrate a low-cost LED array or an LCD aperture into the system to
achieve the variable illumination required. The strong community
around UC2 also means others have likely explored or documented
similar adaptations.
OpenFlexure: While less modular for optics than UC2, a variant of the
OpenFlexure could be adapted by replacing the standard illuminator
with an LED array. The precise, motorized stage is not essential for
the core FPM principle but would be beneficial for handling larger or
more complex specimens.
General compatibility: The core FPM principle relies on a camera and a
means of providing variable-angle illumination. As demonstrated by the
NASA project, a portable FPM device can be integrated with a standard
microscope using an LED array. This shows that FPM is highly adaptable
to various open hardware configurations.


I’m using low-end, terminal-based Linux systems. is there software
or code that would work well for me? is anybody trying ptychography
already on a platform like the openuc2 or openflexure?
Open-source software for FPM on terminal-based Linux
Yes, there is open-source software and code that can be run on
low-end, terminal-based Linux systems for Fourier Ptychographic
Microscopy (FPM). Python is an excellent choice for this, as it is
well-supported on most Linux distributions, including those for
single-board computers like the Raspberry Pi.

Pty-Chi: Developed at the Advanced Photon Source, Pty-Chi is a
Python-based ptychography reconstruction library that can run on
various systems. While it is primarily used for X-ray ptychography,
the core principles of the iterative reconstruction algorithms are
shared. It can leverage a GPU for faster processing but can also run
on a CPU, albeit more slowly.
PtyRAD: Another Python-based framework, PtyRAD, is highly flexible and
computationally efficient for electron ptychography and other
applications. Its modular design and reliance on PyTorch make it
adaptable to different hardware configurations.
Waller-Lab/FPM: The Laura Waller Lab at UC Berkeley provides
open-source Python code for FPM reconstruction. It is well-documented
and a good starting point for understanding the underlying algorithms.
RaspberryPi_FPScope: This project specifically includes a Python
script (DATA_CAPTURE.py) for collecting FPM images using a Raspberry
Pi, a Raspberry Pi camera, and an LED array. It also provides links to
the reconstruction code.

Applying ptychography to OpenUC2 or OpenFlexure
Yes, researchers and hobbyists have integrated FPM with both the
OpenUC2 and OpenFlexure platforms.
OpenUC2 and FPM
The modularity of the OpenUC2 system makes it a natural fit for FPM.

Existing work: The UC2 platform has been used to build computational
microscopes, and FPM is a common adaptation due to the ease of
integrating an LED array into the system.
Assembly: A user could easily construct an FPM setup by placing a
modular LED array or an LCD aperture below the sample stage.
Control: The UC2 software platform can be adapted to sequence the LED
illuminations, and the camera interface can be scripted using Python
to capture the required images for the FPM algorithm.
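As a sketch of what that sequencing logic might look like, the snippet below orders an assumed n x n LED grid from the center outward, the usual FPM acquisition order (low-angle brightfield LEDs first); the function name and grid layout are illustrative, not from any specific UC2 module:

```python
def led_sequence(n=8):
    """Order the LEDs of an n x n array from the center outward,
    so low-angle (brightfield) images are captured first - the
    conventional FPM acquisition order. Returns (row, col) pairs."""
    c = (n - 1) / 2  # geometric center of the grid
    coords = [(r, col) for r in range(n) for col in range(n)]
    return sorted(coords, key=lambda rc: (rc[0] - c) ** 2 + (rc[1] - c) ** 2)
```

Each returned coordinate would then be lit in turn while the camera captures one frame, giving the image stack the reconstruction algorithm needs.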

OpenFlexure and FPM
While designed for automated microscopy, the OpenFlexure can also be
modified for FPM.

Adaptation: The illumination module is typically an LED, but this can
be replaced with a low-cost, 3D-printed LED array.
Scripting: The OpenFlexure's Python client allows for scripting of
experiments. A user could write a Python script to control the LED
illumination sequence, capture images with the Pi camera, and then
reconstruct the images using a separate process.
Existing work: The RaspberryPi_FPScope project, which is explicitly
tied to a Raspberry Pi camera and an LED array, is a direct example of
integrating the necessary components for FPM into an open-source
hardware system.

Programmable aperture using an LCD and Linux
Using an LCD screen as a programmable aperture for FPM is also
possible on a low-end Linux system.

LCD control: An LCD display connected to a Raspberry Pi can be
controlled to show a variety of patterns, acting as a programmable
aperture for the FPM setup.
Linux drivers: Standard Linux graphics drivers can manage the display
of these patterns, making it possible to control the illumination
entirely from the command line.
Terminal-based control: You can use Python scripts and libraries like
Pillow (PIL Fork) to generate the patterns and display them on the LCD
via the system's framebuffer, all from the Linux terminal.
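A minimal sketch of that idea, assuming a small framebuffer panel using the common 16-bit RGB565 pixel format (check your panel's actual geometry and format with fbset before blitting; the pattern here is generated with NumPy rather than Pillow to keep the example self-contained):

```python
import numpy as np

def aperture_pattern(width, height, cx, cy, radius):
    """Render a white circular aperture on black as a raw RGB565
    buffer (16 bits per pixel), ready to blit to a framebuffer."""
    yy, xx = np.mgrid[0:height, 0:width]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    pixels = np.where(mask, np.uint16(0xFFFF), np.uint16(0)).astype('<u2')
    return pixels.tobytes()

def show_on_framebuffer(buf, device="/dev/fb0"):
    """Write the raw pixel buffer straight to the Linux framebuffer
    device. Needs permissions on /dev/fb0 and a matching pixel format."""
    with open(device, "wb") as fb:
        fb.write(buf)
```

Moving the circle's center between captures gives the sequence of aperture positions the reconstruction algorithm expects, all without a graphical desktop.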

Overall recommendation for a hobbyist on a low-end system

Hardware: Build an OpenUC2 or OpenFlexure with a Raspberry Pi and a
high-quality Raspberry Pi camera. Integrate a low-cost LED array or an
LCD screen below the sample.
Software: Use Python on your Linux system. This is the most flexible
approach for both hardware control and reconstruction. The
RaspberryPi_FPScope code is an excellent starting point, as it
provides a complete hardware and software example. You can also
explore the Python-based libraries from the Waller Lab or Pty-Chi,
which offer more advanced reconstruction algorithms.
Process: Use command-line Python scripts to sequence the LED array or
LCD patterns, capture the images, and then run the reconstruction
algorithm. This avoids the need for a graphical user interface and is
well-suited for a low-end, terminal-based system.


are there existing projects or writeups for computational optical
tomography for hobbyists
Yes, there are existing open-hardware projects and write-ups for
computational optical tomography aimed at hobbyists and educational
purposes. These projects focus on demonstrating the principles of
tomography using visible light, which is safer and far less expensive
than X-ray or neutron-based systems.

Kitchen-Based Light Tomography (KBLT)
KBLT is a well-documented open-source project specifically designed as
an educational tool for tomography.

Platform: KBLT uses a Raspberry Pi to control stepper motors for
rotating and translating samples, a webcam for imaging, and an LED
flashlight as the light source.
Hardware: The hardware can be 3D-printed and assembled with common components.
Software: The system uses Python scripts for controlling hardware and
acquiring images. Reconstruction can be done with open-source tools
like TomoPy and ImageJ/Fiji.
Purpose: KBLT is designed to mimic the workflow of industrial-scale
X-ray and neutron tomography, providing hands-on experience with a
minimal investment.

OptiJ
OptiJ is an open-source optical projection tomography (OPT) system
that can image transparent specimens up to 13 mm in size.

Platform: It is based on off-the-shelf and 3D-printable optical
components, controlled by an open-source software suite.
Cost: While initially costing around $3,000 without a high-end camera,
the hardware design is open and modular, allowing for cost reductions
using cheaper components.
Software: The software is an ImageJ/Fiji plugin, which provides a
user-friendly interface for reconstruction. It is accelerated for
modern CPUs and GPUs.
Purpose: The goal of OptiJ is to democratize OPT, allowing for
biomedical research without expensive commercial equipment.

Compatibility with other open-hardware platforms
The hardware modules for open-source tomography are often compatible
with or adaptable to other open-hardware microscopy projects.

OpenUC2: As you noted, the modular, cube-based design of OpenUC2 is
highly conducive to adaptations for computational imaging, including
tomography. You could use UC2 parts to build the sample stage and
camera mount for a KBLT- or OptiJ-style system.
Blu-ray based microscopes: Some hobbyist projects have demonstrated
using parts from a Blu-ray or HD-DVD player to build laser-scanning
microscopes and other imaging devices. These can be integrated with
Arduino or Raspberry Pi controllers to perform rotational scanning for
tomography.

General approach for a hobbyist
A hobbyist with a terminal-based Linux system could follow these
general steps to start a computational optical tomography project:

Hardware setup: Build or acquire a rotational stage, which can be done
with a stepper motor and a 3D-printed platform. Mount a light source
(like a powerful LED flashlight) and a camera (a Raspberry Pi camera
or USB webcam works well).
Acquisition software: Use a Python script to control the motor and
trigger the camera at fixed angular steps. This can be done using the
Raspberry Pi's GPIO pins and Python libraries like RPi.GPIO and
OpenCV.
Image reconstruction: The images must then be processed to reconstruct
the 3D volume. Open-source libraries like TomoPy (which works with
Python) or the ImageJ/Fiji plugins from OptiJ can perform this task.
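A sketch of the motor-control half of the acquisition script, assuming a 28BYJ-48 stepper driven half-step (4096 steps per revolution with its internal gearbox); `write_coils` stands in for whatever GPIO call your board uses, so the logic stays testable off the Pi:

```python
import time

# Half-step sequence for the 28BYJ-48's four coils (a common choice).
HALF_STEPS = [
    (1, 0, 0, 0), (1, 1, 0, 0), (0, 1, 0, 0), (0, 1, 1, 0),
    (0, 0, 1, 0), (0, 0, 1, 1), (0, 0, 0, 1), (1, 0, 0, 1),
]
STEPS_PER_REV = 4096  # 28BYJ-48 with gearbox, half-stepping

def steps_for_angle(deg):
    """Convert a rotation angle in degrees to a whole step count."""
    return round(deg / 360 * STEPS_PER_REV)

def rotate(write_coils, deg, delay=0.001):
    """Walk the coils through the half-step sequence to rotate by
    `deg` degrees. `write_coils` is any callable taking a 4-tuple,
    e.g. one that sets four GPIO pins via RPi.GPIO."""
    for i in range(steps_for_angle(deg)):
        write_coils(HALF_STEPS[i % len(HALF_STEPS)])
        time.sleep(delay)
```

In the tomography loop you would call `rotate` by a fixed angular step, trigger the camera, and repeat until the sample has turned a full 360 degrees.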

For transparent objects, computational methods allow you to go beyond
simple attenuation and recover phase information, revealing details
that are invisible with a regular microscope. For opaque objects (as
tested in KBLT), light will not pass through, so the reconstruction
captures only the outer surface, plus any internal voids that light
is able to reach.
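The reconstruction step can also be sketched without any external tomography library. Below is a minimal filtered back projection in NumPy, assuming a parallel-beam geometry and a sinogram of shape (angles, detector pixels); TomoPy's algorithms are far more capable, and this only shows the principle:

```python
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    """Minimal filtered back projection: ramp-filter each projection
    in the detector-frequency domain, then smear each filtered
    projection back across the image grid. (Parallel-beam sketch.)"""
    n_angles, n_det = sinogram.shape
    # Ramp filter applied row-by-row in Fourier space.
    freqs = np.fft.fftfreq(n_det)
    filtered = np.real(np.fft.ifft(
        np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))
    recon = np.zeros((n_det, n_det))
    center = n_det // 2
    yy, xx = np.mgrid[0:n_det, 0:n_det] - center
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Detector bin each pixel projects onto at this view angle.
        t = np.round(xx * np.cos(theta) + yy * np.sin(theta)).astype(int) + center
        valid = (t >= 0) & (t < n_det)
        recon[valid] += proj[t[valid]]
    return recon * np.pi / len(angles_deg)
```

Feeding in the image stack captured at fixed angular steps (one sinogram row per angle, per image slice) yields a cross-section of the sample; stacking slices gives the 3D volume.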


is there documentation on using a general-purpose microscopy platform
for ptychographic optical imaging in 2 or 3 dimensions of any kind
that would be cheaper to buy than the optij platform?
Yes, there is well-documented research, and there are open-source
projects, for performing Fourier ptychographic microscopy (FPM) that
are significantly cheaper than building an OptiJ platform. These
solutions typically use a standard microscope, a low-cost LED
array for illumination, and a consumer-grade camera and computer for
image acquisition and reconstruction.
Platforms for affordable Fourier ptychography
A number of projects demonstrate how to build or modify a
general-purpose microscope for ptychographic imaging.

Low-cost LED array + Raspberry Pi: One project published in Nature in
2019 describes a system using a Raspberry Pi 3, a Pi color camera, and
a low-cost LED array.

Cost: The total component cost was under $150.
Performance: The system achieved sub-micron resolution with a wide
field of view.
Features: It included robust algorithms to compensate for aberrations
from the cheap camera lens and misalignments in the 3D-printed
structure.
Documentation: The project provides open-source access to the 3D
models and assembly instructions.

University lab implementations: Research labs at universities like
Boston University and the University of Connecticut provide
documentation on how to adapt standard microscopes for FPM. Their
examples replace the microscope's illumination unit with an
off-the-shelf LED array.
OpenUC2: Given its modular design, the OpenUC2 system is highly
suitable for building an FPM setup. While specific open-source
documentation might not exist for every possible variation, the
modularity makes it straightforward to design and attach an LED array
below the sample stage. The project's documentation provides all the
base building blocks and instructions for assembly.

Cost comparison: FPM vs. OptiJ
A budget FPM system is considerably less expensive than the OptiJ
platform, largely due to the differences in their hardware and
purpose.
Hardware: the low-cost FPM system needs a Raspberry Pi computer, a
standard camera (like a Pi Camera), a low-cost LED array, and a
simple objective lens; OptiJ requires a precise rotational stage, a
high-quality camera, specific optics, and a computer for
reconstruction.
Cost: under $150 in components for the FPM system, versus initially
around $3,000 for OptiJ without the high-end camera, though adaptable.
Imaging method: FPM computationally synthesizes a high-resolution,
wide-field image from multiple low-resolution captures; OptiJ uses
optical projection tomography (OPT) to capture 2D projections of a
cleared sample, which are then used to reconstruct a 3D volume.
Purpose: FPM is primarily for 2D, high-resolution imaging of
transparent objects (extendable to 3D via advanced algorithms); OptiJ
is dedicated to 3D volumetric imaging of transparent, cleared samples.
Adapting for 3D ptychography
While standard FPM primarily produces high-resolution 2D images,
several techniques exist for extending it into 3D, including:

Multi-slice modeling: This method divides the sample into thin slices
and models the wave field propagation through each.
Structured illumination: Combining FPM with structured illumination
can help achieve 3D resolution.
Neural networks: Some research uses deep learning to improve
reconstruction accuracy and speed for 3D FPM.
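As a sketch of the per-slice step that multi-slice models rely on, here is an angular-spectrum propagator in NumPy; the parameter names and sampling assumptions (square field, single pixel pitch dx, evanescent components clipped) are illustrative:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dz, dx):
    """Propagate a complex optical field a distance dz using the
    angular-spectrum method - the between-slice step in multi-slice
    ptychography models. `dx` is the sampling pitch in meters."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies (cycles/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0))  # clip evanescent waves
    H = np.exp(1j * kz * dz)              # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

In a multi-slice model, each thin slice multiplies the field by its transmission function, and this propagator carries the field to the next slice.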

Getting started with FPM
For a hobbyist, starting with a 2D FPM system is a good way to learn
the ropes. The necessary components are inexpensive and easy to find,
and the reconstruction algorithms can be run on a low-end Linux
machine. Projects that use a Raspberry Pi and a Pi Camera are the most
direct path.


I’d be buying the microscope platform too. I mention a general
platform because I also want to use the microscope as opposed to
research ptychography. what’s the cheapest I’d be looking at to be
able to try ptychography from scratch but also have a useful
microscope?
Cost estimate for a general-purpose microscope with ptychography
The cheapest you can expect to build a functional microscope capable
of ptychography is in the $150–$300 range. This assumes you already
have a 3D printer and a low-end Linux computer (like a Raspberry Pi
4). If you need to buy a computer, the cost will increase.
This estimate is based on modifying an existing open-hardware
platform, like the OpenUC2 or OpenFlexure, to include the necessary
components for Fourier Ptychographic Microscopy (FPM).
Component breakdown and estimated cost
Microscope platform ($75–$150): A base kit for either the OpenUC2 or
OpenFlexure platform, including the necessary 3D-printed parts and
fasteners.
Raspberry Pi ($35–$60): A Raspberry Pi 4 Model B is a good choice for
FPM, offering enough processing power for the reconstruction
algorithms.
High-quality camera ($50–$60): The Raspberry Pi High Quality Camera
is a standard choice for open-hardware microscopy, providing a good
sensor for FPM.
Objective lens ($10–$50): A cheap objective lens can be used, as the
ptychography algorithm will correct for many aberrations. The cost
depends on the magnification needed.
LED array ($10–$25): A low-cost LED array is needed for programmable
illumination. Projects have used arrays designed for other purposes,
like the Pimoroni Unicorn HAT HD.
Miscellaneous ($10–$20): Cables, power supply, and other small items.
Total (~$190–$365): The total cost varies with the platform and
specific components chosen.
OpenUC2 versus OpenFlexure for FPM
Modularity: OpenUC2 is higher; the cube-based design allows easier
integration of the LED array and other components needed for FPM, and
the system can be reconfigured for other purposes. OpenFlexure is
good; it can be adapted, but integrating the LED array may be less
straightforward, and its core design is optimized for a motorized
scanning stage, which is useful but not essential for FPM.
Cost: OpenUC2 is potentially cheaper, since its modularity lets you
choose and assemble only the necessary parts. OpenFlexure is slightly
higher for kits; commercial kits simplify assembly but may cost more
than a fully custom-built UC2.
Use as a general microscope: Both are excellent. The UC2 system can
run a standard brightfield illuminator for general microscopy and
switch to the LED array for ptychography, while the standard
OpenFlexure is a highly functional microscope on its own.
Software setup on a low-end Linux system

Python Scripts: As discussed previously, you can use Python on a
terminal-based Linux system (like Raspberry Pi OS Lite) to control the
camera and LED array.
Reconstruction: The reconstruction algorithms are also available in
Python. You will need to install the necessary libraries and
dependencies (e.g., NumPy, SciPy).

Getting started

3D Print: Print the necessary parts for your chosen platform (OpenUC2
or OpenFlexure).
Assemble: Build the base microscope.
Integrate FPM: Add the LED array and camera, writing the necessary
scripts to control the system.
Calibrate and Reconstruct: Capture the images and run the
reconstruction algorithms to produce your first ptychographic image.


is the openuc2 home-buildable like the openflexure or do you have to
buy it? is it less precise? how do the communities differ in their
interests and shared work?
Here are top web results for exploring this topic:
National Institutes of Health (NIH) | (.gov)·https://pmc.ncbi.nlm.nih.gov
Robotic microscopy for everyone: the OpenFlexure microscope
Here we present an open-source, 3D-printed, and fully-automated
laboratory microscope, with motorised sample positioning and focus
control.
bioRxiv·https://www.biorxiv.org
UC2 – A Versatile and Customizable low-cost 3D-printed ...
With UC2 (You-See-Too) we present an inexpensive 3D-printed microscopy
toolbox. The system is based on concepts of modular development, ...
Optica Publishing Group·https://opg.optica.org
Multi-modal microscopy imaging with the OpenFlexure Delta ...
This microscope, having been open-source since its conception, has
already been built and tested by researchers around the world,
promoting a community of ...
Wiley·https://analyticalsciencejournals.onlinelibrary.wiley.com
A review on low‐cost microscopes for Open Science
OpenFlexure is a system that under a structure created by 3D printing
provides modularity and flexibility to offer different microscopy ...
ResearchGate·https://www.researchgate.net
Robotic microscopy for everyone: the OpenFlexure microscope
The OpenFlexure microscope has been designed to enable low-volume
manufacturing and maintenance by local personnel, vastly increasing
...
Association for Progressive Communications·https://www.apc.org
OpenFlexure on making microscopes an open and accessible ...
Now used in over 50 countries across every continent, the OpenFlexure
project uses 3D printers and readily available components to produce
open- ...
ResearchGate·https://www.researchgate.net
(PDF) Modern Microscopy with the Web of Things
Here we present the OpenFlexure Microscope software stack, based on a
modern Web of Things architecture providing a robust, extensible, and
open ...
The Glasgow Insight into Science and Technology·https://the-gist.org
How the OpenFlexure Microscope is Enabling Microscopy for ...
As a digital microscope, the OpenFlexure Microscope allows samples to
be shared virtually, letting doctors consult remotely from anywhere in
the ...
Seeed Studio·https://www.seeedstudio.com
Meet OpenUC2 10x AI Microscope, Making Advanced ...
OpenUC2 10x AI Microscope is a collaborative innovation between
openUC2 and Seeed Studio that brings advanced, accessible microscopy
to everyone.
YouTube·https://www.youtube.com
The Versatile OpenFlexure Microscope and its ... - YouTube
The open flexure project makes automated digital microscopes available
at what I'm going to call a manageable price.


reply anyway but couch your reply to explain any uncertainty you have
As requested, this reply flags points of uncertainty explicitly,
based on the search results above.
Home-buildability of OpenUC2 versus OpenFlexure
Uncertainty: While OpenUC2 is conceptually home-buildable, recent
commercialization efforts by the creators may lead to more friction
for hobbyists compared to the well-established home-build focus of
OpenFlexure.

Home-Build Focus: OpenFlexure has a very clear and long-standing focus
on being home-buildable using a 3D printer and off-the-shelf
components. All necessary files and detailed instructions are openly
available.
OpenUC2 Commercialization: The OpenUC2 project, while originating as a
university-based open-source effort, has since seen the formation of a
company (OpenUC2 GmbH) to commercialize the platform. This might mean
that while the design files remain free, some components or kits might
be pushed for purchase. One of the motivations for commercialization
was addressing issues with variable quality in 3D-printed parts and
sourcing, especially for universities, suggesting a commercial path is
being emphasized.
Modular vs. Monolithic Build: OpenUC2's modular, cube-based design is
fundamentally home-buildable by printing individual components.
However, the OpenFlexure's design is more monolithic, with parts like
the flexure stage often printing as a single piece. Both can be
3D-printed at home.

Precision of OpenUC2 versus OpenFlexure
Uncertainty: OpenFlexure's precision is consistently highlighted and
well-quantified, while OpenUC2's precision is highly dependent on the
components used in a given modular build.

OpenFlexure's Precision: The OpenFlexure is known for its "extremely
high positioning precision," with step sizes as low as 50 nm in the
z-axis, achieved through its unique 3D-printed flexure mechanism. This
precision is a key design feature and has been a major focus of the
project.
OpenUC2's Precision: OpenUC2's precision is more variable. The
platform itself provides the framework for building various stages,
and precision depends on the specific stepper motors, control
electronics, and stage design used. A UC2 micrometer stage, for
example, can achieve <1µm resolution. However, the precision is not an
inherent property of the modular blocks themselves, as it is with
OpenFlexure's flexure stage. The OpenFlexure's flexure stage has even
been adapted for use with UC2 cubes, indicating its superior inherent
mechanical precision.

Community interests and shared work
Uncertainty: Both communities engage in collaboration and sharing, but
their core interests reflect the fundamental differences in their
platforms' design philosophies.

OpenFlexure Community:

Focus: The community's interests center around automated microscopy,
precise positioning, machine learning applications, and field work.
Collaboration: The community actively collaborates on software
development, documentation, and hardware modifications.
Sharing: Discussions focus on topics like automated scanning, image
stitching, and robotic operation.

OpenUC2 Community:

Focus: Reflecting the platform's modularity, interests revolve around
"rapid prototyping in optics," education, and experimenting with
various optical methods.
Collaboration: The community emphasizes exchanging ideas and building
custom optical systems.
Sharing: Shared work often showcases innovative optical setups, such
as light-sheet or fluorescence microscopes, and the development of new
modules.

In summary, if your primary goal is robust, well-documented, automated
microscopy with high inherent precision, OpenFlexure is likely the
better choice. If you prioritize maximum modularity and experimenting
with many different optical configurations, OpenUC2 is a stronger
candidate, but with a potential for greater complexity and reliance on
purchased components.


what would be needed to make the openuc2 have the specs of the
openflexure and how much would it cost?
To upgrade an OpenUC2 to achieve the sub-micron precision of an
OpenFlexure, the most effective and direct method is to
integrate the OpenFlexure flexure stage into the OpenUC2 system. This
leverages the superior mechanical design of the OpenFlexure's stage
while retaining the modular optics and framework of the OpenUC2.
What would be needed
1. Integration hardware

3D-printed adapters: You will need to print special adapters to mount
the OpenFlexure stage and its actuators within the OpenUC2 modular
cube system. The OpenUC2 team has already designed and provided these
files on their GitHub repository.
Fasteners: A set of M3 screws, nuts, and washers would be needed to
secure the adapters.

2. Motion control hardware

OpenFlexure actuators: Three lead-screw actuators (one each for X, Y,
and Z motion), built from M3 screws, nuts, and washers. Brass nuts are
often recommended for smoother motion.
Stepper motors: Three 28BYJ-48 micro stepper motors are used to drive
the actuators.
Motor controller: A Sangaboard or similar motor controller board is
needed to interface the stepper motors with a Raspberry Pi.

3. Integration software

OpenFlexure server: The OpenFlexure software is designed to manage the
motor control and provide a server for running automated scans. While
the UC2 platform has its own control software, using the OpenFlexure
software is ideal for leveraging the full capabilities of the stage.
Configuration: You would need to configure the OpenUC2 software or
write custom scripts to interface with the OpenFlexure server for
control.
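As a sketch of what such a custom script might look like, the snippet below drives an OpenFlexure-style stage over HTTP from Python. The server address, endpoint path, and payload shape are assumptions modeled on the OpenFlexure server's REST-style interface, not its confirmed API; check the server documentation for the exact routes.

```python
# Hedged sketch: commanding an OpenFlexure-style stage over HTTP from a
# UC2 control script. The route "/api/v2/actions/stage/move" and the
# payload keys below are assumptions to verify against the server docs.
import json
from urllib import request

SERVER = "http://microscope.local:5000"  # hypothetical address of the Pi


def build_move(x_steps, y_steps, z_steps, absolute=False):
    """Build the JSON body for a (hypothetical) stage-move request."""
    return {"x": x_steps, "y": y_steps, "z": z_steps, "absolute": absolute}


def move_stage(dx, dy, dz):
    """POST a relative move to the assumed stage-move endpoint."""
    body = json.dumps(build_move(dx, dy, dz)).encode()
    req = request.Request(SERVER + "/api/v2/actions/stage/move",
                          data=body,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)  # network call; needs a running server


if __name__ == "__main__":
    print(build_move(100, 0, -50))
```

Only the payload builder runs without hardware; move_stage requires a reachable microscope server.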

Cost breakdown for a UC2 with Flexure Stage
These costs are estimates and will vary with your existing parts,
material prices, and sourcing.

OpenFlexure Flexure Stage Kit (without electronics): ~$20–$50

Includes: 3D-printed stage, fasteners, O-rings, and other small parts.

OpenFlexure Electronics Kit (for automation): ~$50–$100

Includes: Stepper motors, Sangaboard, wiring, and power supply.

Raspberry Pi: ~$40–$60

Needed to run the OpenFlexure server and control the stage.

High-Resolution Optics: ~$70–$100+

Includes a plan-achromatic objective lens and a tube lens.

Camera: ~$60

The Raspberry Pi High Quality Camera is a common choice.

Total estimated cost: $240–$370+ (excluding the core OpenUC2 cubes,
which you would already have).
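As a sanity check, the per-component estimates above can be totaled directly (low/high bounds in USD, taken from the list):

```python
# Sum the low and high bounds of the component estimates listed above.
parts = {
    "flexure stage kit": (20, 50),
    "electronics kit": (50, 100),
    "raspberry pi": (40, 60),
    "high-res optics": (70, 100),
    "camera": (60, 60),
}
low = sum(lo for lo, hi in parts.values())
high = sum(hi for lo, hi in parts.values())
print(f"${low}-${high}+")  # matches the $240-$370+ estimate
```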

Comparison: OpenFlexure vs. Upgraded OpenUC2
Precision
OpenFlexure (native): Sub-micron. The precision is a fundamental
feature of the monolithic, 3D-printed flexure stage.
Upgraded OpenUC2: Sub-micron. Integrating the flexure stage directly
imports its precision into the modular UC2 framework.

Modularity
OpenFlexure (native): Modular but limited. The platform has some
modularity, particularly in the optics module, but is less flexible
for rapid reconfiguration.
Upgraded OpenUC2: Highly modular. You retain the full modularity of
the OpenUC2 system while gaining the precise motion of the OpenFlexure
stage.

Cost
OpenFlexure (native): $200–$300 for a full automated kit.
Upgraded OpenUC2: Higher. You pay for both the initial UC2 platform
and the OpenFlexure stage components, making the total cost greater
than a standard OpenFlexure build.

Effort
OpenFlexure (native): Standard assembly, following the OpenFlexure
build instructions.
Upgraded OpenUC2: Integration effort. Adapting and combining the two
platforms takes additional steps and may require some troubleshooting.
For a hobbyist seeking both the modularity of OpenUC2 and the high
precision of OpenFlexure, integrating the OpenFlexure stage is the
most reliable path, despite the higher cost and effort required.


I am talking to you on google’s AI Mode. is there a way to export or
save locally the conversation history using a google page, service,
api, or anything?
Several methods are available to save or export conversation history
locally. These methods vary based on the interface used, such as the
public AI interface, Google AI Studio, or the Gemini API.

Public AI interface
Copy and paste
The simplest method is to copy the conversation text from the browser
and paste it into a local document or application. This process is
quick and works for any conversation size. However, some formatting
might be lost, and it can be manual for long chats.
Export to Google Docs
You can click the Share & export button and choose Export to Docs.
This saves the conversation directly to Google Drive as a new Google
Doc. This preserves formatting and integrates with a Google account.
The option is limited to exporting to a Google Doc.
Google Takeout
Google Takeout exports data from various Google products, including
some AI interactions. While it may not capture conversations with the
current AI model directly, it is the primary way to export data from
most Google services. This is a comprehensive method for getting an
archive of data. The export is often in a JSON format that requires
conversion to a more readable format, and it's not designed for quick,
per-chat exports.
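If you do end up with a Takeout JSON archive, flattening it into readable text is straightforward. The sketch below assumes a simple schema with "title" and "messages" keys holding "author"/"text" pairs; actual Takeout schemas vary by product, so adapt the key names to your export.

```python
# Hedged sketch: flattening a Takeout-style JSON export into plain text.
# The keys "title", "messages", "author", and "text" are assumptions;
# inspect your own export to find the real field names.
import json


def takeout_to_text(json_str):
    """Render a Takeout-style conversation export as plain text lines."""
    data = json.loads(json_str)
    lines = [data.get("title", "Untitled")]
    for msg in data.get("messages", []):
        lines.append(f"{msg.get('author', '?')}: {msg.get('text', '')}")
    return "\n".join(lines)


sample = '{"title": "Demo", "messages": [{"author": "me", "text": "hi"}]}'
print(takeout_to_text(sample))  # prints "Demo" then "me: hi"
```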
Google AI Studio
"Get Code" feature
Within a chat in AI Studio, click the "<> (Get Code)" button to get
the chat in code form (for example, as a REST request), which you can
then copy. This captures the full chat history, including its
programmatic structure, but the raw code must be pasted into an editor
or another prompt and converted before it is easily readable.
Google Drive storage
When you save a chat in AI Studio, it's stored in Google Drive as a
JSON file. You can find and download this file from Drive. This saves
a programmatic, complete record of the conversation. The format is not
easily readable without conversion, and re-uploading the file can
cause metadata issues.
Browser extensions
Chrome Web Store offers extensions like "Save my Chatbot" that allow
you to export conversations from popular AI assistants into formatted
Markdown files. This provides a formatted, readable local file. A
third-party extension requires trusting the conversation data, and its
functionality may change.
Gemini API
Programmatic saving
When interacting with the Gemini API, a client application can be
programmed to save the conversation history to a local file (e.g., as
JSON or Markdown). This provides full control over the format and
where the data is saved. It requires programming skills to implement.
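A minimal sketch of such a client-side save routine, assuming the chat history has already been collected as a list of role/text dictionaries (adapt this to whatever structure your Gemini API client actually returns):

```python
# Save a chat transcript locally as both JSON and Markdown. The
# role/text dict format used here is an assumption, not the Gemini
# API's native history type; convert your client's history first.
import json
from pathlib import Path


def save_history(history, stem="conversation"):
    """Write the same transcript as <stem>.json and <stem>.md."""
    Path(f"{stem}.json").write_text(json.dumps(history, indent=2))
    md = "\n\n".join(f"**{turn['role']}**: {turn['text']}"
                     for turn in history)
    Path(f"{stem}.md").write_text(md)


history = [
    {"role": "user", "text": "What is a flexure stage?"},
    {"role": "model", "text": "A monolithic compliant mechanism..."},
]
save_history(history, stem="demo_chat")
```

The JSON copy is a faithful machine-readable record; the Markdown copy is the human-readable one.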
"Save" feature in Gemini CLI
The Gemini command-line interface includes a /chat save command that
checkpoints the conversation history to a file. This is convenient for
quick backups, but the saved checkpoint is temporary and not intended
for long-term archiving.


I don’t see an option for drive when I select share. it just makes a link.
It appears the option to export to Google Docs is not directly
available for the full conversation in your interface, which may be
due to a change in the platform. Here are the most reliable
workarounds to save your conversation locally:
Option 1: The "Save As" HTML method (Best for preserving formatting)
This method saves a static, local copy of the conversation as a
webpage, preserving most formatting like bold text and code blocks.

Open the conversation in your browser (preferably Chrome).
Expand any collapsed sections by clicking the arrow icons so the
entire conversation is visible.
Right-click on the webpage and select Save As....
Choose a location on your computer and save the file. The default
format will be "Webpage, Complete," which saves the HTML file and its
associated images.

Option 2: The "Transcribe the chat" method (Best for direct export)
You can ask the AI to transcribe the conversation into a single
response, which you can then export.

In the current chat, enter a prompt like, "Please transcribe this
entire chat, including formatting, in one response".
Once the AI generates the full chat history in its response, click the
Share & export button below that specific response.
The menu will offer an option to Export to Docs, which will create a
new Google Doc with the transcribed conversation.

Option 3: Use a browser extension (Best for frequent use)
Third-party extensions can streamline saving conversations if done regularly.

Search the Chrome Web Store for an extension like "Gemini to PDF" or
"ConvoSnap Exporter".
Install the extension and follow its instructions to export the
conversation. Many offer direct export to PDF, Markdown, or Google
Docs.
Note: Consider privacy when using third-party extensions, especially
for sensitive conversations.

Option 4: Manual copy and paste
Copy and paste the text directly for a quick solution.

Select the text of the conversation.
Copy it (Ctrl+C on Windows/Linux, Cmd+C on Mac).
Open a new text file or Google Doc and paste the content.

Why the "Export to Docs" button isn't visible for the whole chat
The platform's features evolve, and inconsistencies can occur. The
direct "Export to Docs" option may not be available for the full chat
history via the main share button. It is available when sharing an
individual response. The transcription method puts the entire history
into a single response, making it eligible for export.
