Computer Vision

Image and video processing

Background subtraction

Background subtraction is a widely used technique for generating a foreground mask from a static camera. In controlled environments, moving objects can be separated from the static background using statistical modeling: instead of running a full object detector, we model the background over time and isolate the dynamic foreground regions.

TECHNIQUES

  • Frame differencing

  • Running average background

  • Gaussian Mixture Models (MOG2)

These methods are lightweight and work without deep learning.
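As a minimal sketch of the first two techniques, the snippet below maintains an exponential running-average background model and thresholds the per-pixel difference to get a foreground mask. The scene, blob values, and thresholds are illustrative assumptions; for production use, OpenCV's `cv2.createBackgroundSubtractorMOG2` implements the Gaussian-mixture variant.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Exponential running average: bg <- (1 - alpha) * bg + alpha * frame."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25):
    """Mark pixels whose absolute difference from the background model
    exceeds the threshold as foreground."""
    diff = np.abs(frame.astype(np.float64) - background)
    return diff > threshold

# Toy demo: a static scene, then a frame with a bright "object" in it.
scene = np.full((8, 8), 50.0)      # static background intensity
background = scene.copy()

moving = scene.copy()
moving[2:4, 2:4] = 200.0           # bright 2x2 foreground blob

for _ in range(10):                # let the model settle on the scene
    background = update_background(background, scene)

mask = foreground_mask(background, moving)
print(mask.sum())  # 4 foreground pixels detected
```

The learning rate `alpha` trades adaptation speed against ghosting: a high value absorbs slow-moving objects into the background, a low value reacts slowly to lighting changes.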

Interactive segmentation

The goal of interactive segmentation is to cut out the foreground object in a photo based on some input from the user. More precisely, we aim to assign a binary label to each pixel in the image, indicating whether that pixel belongs to the foreground or the background, based on the observed RGB data at each pixel. Meta's Segment Anything Model (SAM) has revolutionized automatic segmentation: trained on over a billion masks, it can segment from a single point click, multiple clicks, a drawn bounding box, or a text prompt.

AREAS OF INTEREST

  • Product isolation

  • Medical imaging

  • Industrial inspection

  • Dataset preparation

  • Manual bounding boxes

  • Iterative refinement
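A toy stand-in for click-based segmentation (far simpler than SAM, which I use only as motivation here) is region growing: flood-fill outward from the user's click, accepting neighbors whose intensity stays close to the seed pixel. The image, seed, and tolerance below are invented for illustration.

```python
import numpy as np
from collections import deque

def segment_from_click(image, seed, tolerance=10):
    """Grow a binary foreground mask from a user click by flood-filling
    4-connected pixels within `tolerance` of the seed pixel's intensity."""
    h, w = image.shape
    seed_value = image[seed]
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(int(image[ny, nx]) - int(seed_value)) <= tolerance:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask

# Toy image: dark background with a uniform bright object; one click inside.
img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 3:7] = 200
mask = segment_from_click(img, seed=(4, 4))
print(mask.sum())  # 16 pixels: the whole 4x4 object
```

Iterative refinement then amounts to adding positive or negative clicks and merging or subtracting the resulting regions.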

Stereo Vision

Stereo vision systems provide depth perception and spatial information, computing the distance to objects from the disparity between two camera views. In other words, stereo vision estimates depth and 3D structure.

AREAS OF INTEREST

  • Robotic grasping

  • Bin picking

  • Autonomous navigation

  • Pallet dimension measurement

  • Volume estimation

  • 3D reconstruction
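For a rectified stereo pair, the depth relation is the standard triangulation formula Z = f·B/d (focal length in pixels times baseline, divided by disparity). The focal length and baseline below are example values, not from any specific rig.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth for a rectified stereo pair:
    Z = (focal length in pixels * baseline in metres) / disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A point seen 40 px apart by two cameras 0.12 m apart with f = 800 px
# lies 2.4 m away; halving the disparity doubles the estimated depth.
print(depth_from_disparity(800, 0.12, 40))  # 2.4
print(depth_from_disparity(800, 0.12, 20))  # 4.8
```

The inverse relationship is why stereo depth precision degrades quadratically with distance: at long range, a one-pixel disparity error shifts the estimate by metres.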

Image rearrangement

Depending on the application, image rearrangement may involve changing image dimensions, relocating objects, removing objects, or filling missing regions. Unlike simple pixel manipulation, Markov Random Field approaches model spatial dependencies between neighboring pixels, ensuring that textures, edges, and structures remain coherent after the transformation.

AREAS OF INTEREST

  • Image stitching

  • Industrial part alignment

  • Object replacement

  • Augmented reality overlays

  • Document correction

  • Template matching
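The core idea behind the MRF coherence claim can be shown with the pairwise smoothness term alone: sum the squared intensity differences between adjacent pixels, which MRF-based fills try to keep low. This is a sketch of one energy term, not a full inference algorithm; the test images are invented.

```python
import numpy as np

def smoothness_energy(image):
    """Pairwise MRF smoothness term: sum of squared intensity differences
    between horizontally and vertically adjacent pixels. Lower energy
    means more coherent textures and edges."""
    img = image.astype(np.float64)
    horiz = np.sum((img[:, 1:] - img[:, :-1]) ** 2)
    vert = np.sum((img[1:, :] - img[:-1, :]) ** 2)
    return horiz + vert

# A smoothly filled region scores lower energy than a noisy one,
# which is why MRF-based fills prefer coherent completions.
flat = np.full((8, 8), 100.0)
noisy = flat + (np.arange(64).reshape(8, 8) % 2) * 50.0  # alternating noise
print(smoothness_energy(flat))                            # 0.0
print(smoothness_energy(noisy) > smoothness_energy(flat))  # True
```

Full MRF methods add a data term (fidelity to known pixels) and minimize the combined energy with graph cuts or belief propagation.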

Texture Synthesis

Texture synthesis focuses on learning a generative model from a small sample of texture so that newly generated samples appear as natural extensions of the original pattern. By preserving the local statistical relationships between neighboring pixels, the generated texture maintains visual consistency.

AREAS OF INTEREST

  • Surface and material simulation

  • Synthetic dataset generation

  • Texture-based defect modeling

  • Pattern expansion
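A deliberately naive sketch of pattern expansion: tile an output canvas with patches drawn at random from the exemplar. Real quilting methods (e.g. Efros-Freeman) additionally match patch borders so seams stay invisible; the striped sample here is an invented example.

```python
import numpy as np

def naive_texture_expand(sample, out_shape, patch=4, rng=None):
    """Expand a small texture sample by tiling randomly drawn patches.
    Naive: no border matching, so seams may be visible."""
    rng = np.random.default_rng(rng)
    sh, sw = sample.shape
    oh, ow = out_shape
    out = np.zeros(out_shape, dtype=sample.dtype)
    for y in range(0, oh, patch):
        for x in range(0, ow, patch):
            sy = rng.integers(0, sh - patch + 1)
            sx = rng.integers(0, sw - patch + 1)
            block = sample[sy:sy + patch, sx:sx + patch]
            out[y:y + patch, x:x + patch] = block[: oh - y, : ow - x]
    return out

# Expand an 8x8 striped sample to 16x16; the output contains only
# intensity values that already exist in the sample.
sample = np.tile(np.array([0, 255], dtype=np.uint8), (8, 4))
big = naive_texture_expand(sample, (16, 16), rng=0)
print(big.shape)                        # (16, 16)
print(set(np.unique(big)) <= {0, 255})  # True
```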

Laplacian of Gaussian

The Laplacian of Gaussian (LoG) is a classical edge and blob detection technique that highlights regions where image intensity changes rapidly. It works by first smoothing the image with a Gaussian filter (to reduce noise), then applying the Laplacian operator (second derivative) to detect zero-crossings that indicate edges or blob-like structures.

AREAS OF INTEREST

  • Zero-crossing detection

  • Multi-scale space detection

  • Noise sensitivity

  • Smoothing necessity

  • Edge smoothness
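The LoG kernel itself can be built directly from its closed form (using the common sign convention with a negative center); shifting it to zero mean makes the "smoothing necessity" point concrete, since a zero-sum kernel gives exactly zero response on flat regions. The size and sigma below are typical example values.

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian kernel from its closed form:
    LoG(x, y) = -1/(pi*sigma^4) * (1 - r^2/(2*sigma^2)) * exp(-r^2/(2*sigma^2)),
    shifted to zero mean so constant regions give zero response."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x ** 2 + y ** 2
    kernel = (-1.0 / (np.pi * sigma ** 4)
              * (1 - r2 / (2 * sigma ** 2))
              * np.exp(-r2 / (2 * sigma ** 2)))
    return kernel - kernel.mean()   # enforce zero total response

k = log_kernel()
print(k.shape)               # (9, 9)
print(abs(k.sum()) < 1e-12)  # True: zero response on constant regions
print(k[4, 4] < 0)           # True: negative centre, positive surround
```

Convolving an image with this kernel and locating sign changes in the response implements the zero-crossing detection listed above; varying `sigma` gives the multi-scale behaviour.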

Canny Edge & Harris Corner Detection

Classical machine vision methods that provided reliable feature extraction before deep learning. Canny edge detection identifies strong structural boundaries using gradient analysis, non-maximum suppression, and hysteresis thresholding, producing clean, connected edges. Harris corner detection identifies stable interest points where intensity changes in multiple directions, enabling tracking, motion estimation, and geometric reconstruction.

TECHNIQUES

  • Non-maximum suppression

  • Hysteresis thresholding

  • Gradient magnitude & direction

  • Structure tensor

  • Corner response function
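The structure tensor and corner response function can be sketched in a few lines of numpy. This is a simplified Harris variant (a 3x3 box window instead of the usual Gaussian window, and an example `k` of 0.05); the white-square test image is invented for illustration.

```python
import numpy as np

def harris_response(image, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2, where the
    structure tensor M sums products of the image gradients Ix, Iy
    over a local window (here a 3x3 box sum)."""
    img = image.astype(np.float64)
    ix = np.gradient(img, axis=1)
    iy = np.gradient(img, axis=0)
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box_sum(a):  # 3x3 window sum via padded shifts
        p = np.pad(a, 1)
        return sum(p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
                   for dy in range(3) for dx in range(3))

    sxx, syy, sxy = box_sum(ixx), box_sum(iyy), box_sum(ixy)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2

# A white square on black: corners respond strongly positive,
# straight edges respond negatively, flat regions stay near zero.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
print(R.max() > 0)  # True at the square's corners
print(R.min() < 0)  # True along its edges
```

The sign pattern is the whole point of the response function: both eigenvalues of M large gives R > 0 (corner), one large gives R < 0 (edge), both small gives R near 0 (flat).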

Video inspection with hardware

I build real-time inspection and tracking systems designed for unstable environments. Vision and LLM models analyze data and trigger actions via APIs, sensors, and industrial workflows.

From Vision to Orchestration

Cameras & Sensors

Vision Models

Decision Logic

Automations & Agents

Alerts & Dashboards

How Vision Projects Begin

Let's design a system that fits your environment, constraints, and goals.

1. Target Assessment

2. Data Capture

3. Feasibility Testing

4. Long-Term Stability

What best describes your role?

Select the option that best reflects your perspective — we’ll adapt the questions and recommendations accordingly.

  • The Operational Manager

    I run day-to-day operations — warehouses, production, stores, or fulfillment.
    My focus is throughput, quality, labor efficiency, and reliability.

  • The Innovation Leader

    I’m responsible for systems, architecture, or digital initiatives.
    I’ve seen pilots that don’t scale and tools that lock us into vendors.

  • The CFO

    I control the budget and need to justify AI spend with ROI.
    If we invest in AI or automation, I need to see the logic — cost, impact, and trade-offs — clearly and realistically.
