System integrations highlight: the AMD KRIA KR260 and Sony Camera Integration Tutorial


November 28, 2023


Combining the KR260 board's machine vision functionality with the Sony IMX547 Camera creates an impressive system for image processing tasks.

In this tutorial, we'll revisit the setup process as detailed in the original documentation, offering a step-by-step guide to streamline your installation.

Installation of Ubuntu on KR260

Start by downloading the official Ubuntu image for the KR260 from the Ubuntu download page for AMD Kria SOMs.

Next, download Balena Etcher, which is the recommended software for flashing the image and is compatible with Windows, Linux, and macOS. Follow the on-screen instructions in Balena Etcher, choosing the Ubuntu image you downloaded to flash it onto your microSD card.

After your microSD card has been successfully flashed with the Ubuntu image, you can move on to the subsequent step.

Setting Up The Environment On KR260

After booting up the board, you'll need to establish a connection to your KR260. This can be done either using a USB cable or via SSH (for SSH, ensure you know the board's IP address on your network).

Once connected, run the following commands to set up your environment:

# install the basic configuration for the Kria app store
sudo snap install xlnx-config --classic --channel=2.x
sudo xlnx-config.sysinit
sudo add-apt-repository ppa:xilinx-apps
sudo add-apt-repository ppa:ubuntu-xilinx/sdk
sudo apt update
sudo apt upgrade

# install docker and let the current user run it without sudo
sudo apt install docker.io
sudo groupadd -f docker
sudo usermod -a -G docker $USER
# log out and back in for the group change to take effect

# install the firmware for the MV camera app and pull its Docker image
sudo apt install xrt-dkms
sudo xmutil getpkgs
sudo apt install xlnx-firmware-kr260-mv-camera
sudo docker pull xilinx/mv-defect-detect:2022.1

Install Additional Packages

To fully utilize GStreamer, additional packages are required. Execute the following commands to install these necessary packages:

# install Gstreamer packages
sudo apt-get install "gstreamer1.0*"
sudo apt install ubuntu-restricted-extras
sudo apt install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev

# if you want to just download the compiled files of OpenCV for Python
mkdir -p opencv/opencv-python-master
cd opencv/opencv-python-master

# if you want to compile OpenCV on the KR260
export OPENCV_VER=<release-tag>   # a tag of the opencv-python repository
export TMPDIR=$(mktemp -d)
# build and install OpenCV from source
cd "${TMPDIR}"
git clone --branch ${OPENCV_VER} --depth 1 --recurse-submodules --shallow-submodules \
    https://github.com/opencv/opencv-python.git opencv-python-${OPENCV_VER}
cd opencv-python-${OPENCV_VER}
# we want GStreamer support enabled
export CMAKE_ARGS="-DWITH_GSTREAMER=ON"
# generate the wheel package
python3 -m pip wheel . --verbose
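Once the wheel is installed, you can verify that GStreamer support actually made it into the build. `cv2.getBuildInformation()` returns a plain-text report that lists each video backend on its own line, for example `GStreamer: YES (1.20.3)`; the small helper below is a sketch that assumes that report layout:

```python
def has_gstreamer(build_info: str) -> bool:
    """Return True if an OpenCV build-information report lists GStreamer as enabled."""
    for line in build_info.splitlines():
        # The Video I/O section lists each backend as "Name: YES (...)" or "Name: NO".
        if line.strip().startswith("GStreamer:"):
            return "YES" in line
    return False

# On the board, pass the real report:
#   import cv2
#   print(has_gstreamer(cv2.getBuildInformation()))
```

If this prints False, the wheel was built without GStreamer and the pipelines later in this tutorial will fail to open.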

Create The Setup Docker File

To streamline the process and avoid repeating a series of commands after each docker run execution, you can create a setup script for Docker. Create a file (for example, setup.sh) in the /home/ubuntu directory with the following content:

media-ctl -d /dev/media0 -V "\"imx547 7-001a\":0 [fmt:SRGGB10_1X10/1920x1080 field:none @1/60]"
modetest -D fd4a0000.display -s 43@41:1920x1080-60@BG24 -w 40:"alpha":0
modetest -D fd4a0000.display -s 43@41:1920x1080-60@BG24 -w 40:"g_alpha_en":0
cd opencv/opencv-python-master
python3 -m pip install opencv_python*.whl
Great! You're set to begin!

Launching The Docker Container On KR260

Set up the AMD Xilinx application:

sudo xmutil desktop_disable
sudo xmutil unloadapp
sudo xmutil loadapp kr260-mv-camera

Launch the Docker container with the following command:
sudo docker run \
--env="DISPLAY" \
--net=host \
--privileged \
--volume /tmp:/tmp \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
-v /dev:/dev \
-v /sys:/sys \
-v /etc/vart.conf:/etc/vart.conf \
-v /lib/firmware/xilinx:/lib/firmware/xilinx \
-v /run:/run \
-v /home/ubuntu:/home \
-h "xlnx-docker" \
-it xilinx/mv-defect-detect:2022.1 \
bash -c "cd /home; chmod 777 ./setup.sh; ./setup.sh; bash"
If the terminal appears to hang on a warning, just press "Enter" and it will continue.

Verifying Camera Functionality

We can test the output of the camera directly using the Display Port of the KR260:

gst-launch-1.0 v4l2src device=/dev/video0 io-mode=5 ! video/x-raw, width=1920, height=1080, format=GRAY8, framerate=60/1 ! perf ! kmssink bus-id=fd4a0000.display -v

Or, to verify that the camera is working, use the following command to capture an image:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, width=1920, height=1080, format=GRAY8, framerate=60/1 ! queue ! videoconvert ! jpegenc ! filesink location=output.jpeg

This GStreamer pipeline is designed to capture video from a V4L2 (Video for Linux 2) device located at /dev/video0, processing it through these stages:

v4l2src: Retrieves video frames from the defined V4L2 device (/dev/video0).
video/x-raw: Defines the video as a raw format, meaning no compression or alterations are applied.
width=1920, height=1080, format=GRAY8, framerate=60/1: Configures the video's width and height to 1920x1080 pixels, sets the format to GRAY8 (grayscale), and establishes a frame rate of 60 frames per second.
queue: Implements a frame buffer in a queue to facilitate smooth video processing.
videoconvert: Transforms the video format as necessary for compatibility with subsequent elements in the pipeline.
jpegenc: Translates the video frames into JPEG format.
filesink location=output.jpeg: Outputs and saves the processed JPEG frames to a file named "output.jpeg."
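As a quick sanity check that the capture really produced a JPEG, you can inspect the file's markers: a JPEG starts with the bytes FF D8 and ends with FF D9. A minimal sketch (the filename matches the filesink location above):

```python
def looks_like_jpeg(data: bytes) -> bool:
    """Check the JPEG start-of-image (FF D8) and end-of-image (FF D9) markers."""
    return len(data) >= 4 and data[:2] == b"\xff\xd8" and data[-2:] == b"\xff\xd9"

# On the board:
#   with open("output.jpeg", "rb") as f:
#       print(looks_like_jpeg(f.read()))
```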

Development with KR260 and Machine Vision Camera

Utilizing GStreamer Pipeline with VART
When developing with the KR260 and a machine vision camera, you can harness the flexibility and power of both GStreamer and OpenCV. To effectively use GStreamer in this setup, it's recommended to integrate the VART (Vitis AI RunTime) library.
To simplify testing, we can use OpenCV as a "frontend" and GStreamer as a "backend": OpenCV gives us an explicit, easy-to-follow control flow, while GStreamer handles the capture pipeline itself.

Using C++, we have:

cv::VideoCapture videoReceiver("v4l2src device=/dev/video0 ! video/x-raw, width=1920, height=1080, format=GRAY8, framerate=60/1 ! appsink", cv::CAP_GSTREAMER);

This line of code initializes a `cv::VideoCapture` object in C++ using the OpenCV library, specifically for GStreamer-based video capture. Let's break down the parameters:

"v4l2src device=/dev/video0 ! video/x-raw, width=1920, height=1080, format=GRAY8, framerate=60/1 ! appsink": This is a GStreamer pipeline specified as a string. It describes the series of actions to capture video from a V4L2 device (`/dev/video0`), set its properties (width, height, format, and framerate), process it through various GStreamer elements (queue, videoconvert), and finally, output it through an appsink.
cv::CAP_GSTREAMER: Specifies the backend to be used for capturing video, in this case, GStreamer.

Using Python, we have:

pipeline = "v4l2src device=/dev/video0 ! video/x-raw, width=1920, height=1080, format=GRAY8, framerate=60/1 ! appsink"
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
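Since the same caps string (device, resolution, format, framerate) recurs in every pipeline in this tutorial, it can be handy to assemble it programmatically. The helper below is a hypothetical convenience, not part of OpenCV or GStreamer:

```python
def camera_pipeline(device="/dev/video0", width=1920, height=1080,
                    fmt="GRAY8", fps=60, sink="appsink"):
    """Assemble a GStreamer capture pipeline string like the ones used above."""
    caps = (f"video/x-raw, width={width}, height={height}, "
            f"format={fmt}, framerate={fps}/1")
    return f"v4l2src device={device} ! {caps} ! {sink}"

print(camera_pipeline())
# v4l2src device=/dev/video0 ! video/x-raw, width=1920, height=1080, format=GRAY8, framerate=60/1 ! appsink
```

Changing resolution or format for an experiment then means editing one argument instead of every pipeline string.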

Python often achieves the same outcome as the C++ code with greater simplicity, and its straightforward syntax and extensive libraries make it an excellent choice for a simple frame-grabber.

So, let's implement a simple frame grabber in Python:

import cv2

input_pipeline = "v4l2src device=/dev/video0 ! video/x-raw, width=1920, height=1080, format=GRAY8, framerate=60/1 ! appsink"
cap = cv2.VideoCapture(input_pipeline, cv2.CAP_GSTREAMER)

output_pipeline = "appsrc ! kmssink bus-id=fd4a0000.display"
out = cv2.VideoWriter(output_pipeline, 0, 60, (1920, 1080), isColor=False)

i = 1000
while i > 0:
    ret, frame = cap.read()
    if ret:
        out.write(frame)
    i -= 1

cap.release()
out.release()


On your monitor connected to the KR260, you should now be able to view the video frames. However, if the playback appears slow, this could be due to the substantial data transfer load between the Programmable Logic (PL) and the Processing System (PS) of the KR260.
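To put a number on the slowdown, you can timestamp each successful cap.read() and compute the achieved frame rate; the measurement is just n−1 frame intervals divided by the elapsed time. A minimal sketch:

```python
def measured_fps(timestamps):
    """Achieved frame rate from a list of per-frame capture times (in seconds)."""
    if len(timestamps) < 2:
        return 0.0
    # n frames span n-1 intervals between the first and last timestamp
    return (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])

# In the grab loop, record time.monotonic() after every successful read:
#   stamps.append(time.monotonic())
# then print(measured_fps(stamps)) at the end and compare it against the 60 fps target.
```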

To achieve smoother video playback, consider modifying the code as follows:

import cv2
import time

input_pipeline = "v4l2src device=/dev/video0 ! video/x-raw, width=1920, height=1080, format=GRAY8, framerate=60/1 ! queue ! appsink"
cap = cv2.VideoCapture(input_pipeline, cv2.CAP_GSTREAMER)

output_pipeline = "appsrc ! queue ! kmssink bus-id=fd4a0000.display"
out = cv2.VideoWriter(output_pipeline, 0, 60, (1920, 1080), isColor=False)

i = 1000
frame_in = []
while i > 0:
    ret, frame = cap.read()
    if ret:
        frame_in.append(frame)
    i -= 1

for frame in frame_in:
    out.write(frame)

cap.release()
out.release()


In this way, we first grab every frame, and only then forward all the frames to the Display Port.
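Keep in mind that buffering 1000 raw GRAY8 full-HD frames is memory-hungry: each frame occupies width × height × 1 byte. A quick back-of-the-envelope estimate:

```python
def buffer_bytes(n_frames, width=1920, height=1080, bytes_per_pixel=1):
    """Memory needed to hold n_frames raw frames (GRAY8 = 1 byte per pixel)."""
    return n_frames * width * height * bytes_per_pixel

gib = buffer_bytes(1000) / 2**30
print(f"{gib:.2f} GiB")  # prints "1.93 GiB" for 1000 full-HD GRAY8 frames
```

Nearly 2 GiB for the frame list, so on a memory-constrained board you may want to lower the frame count or stream in smaller batches.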


This guide offers a detailed walkthrough for configuring the KR260 board with a machine vision camera, utilizing Docker containers and GStreamer. It lays the groundwork for advanced development, encompassing both GStreamer pipelines and OpenCV workflows. Experiment with this powerful combination in your machine vision projects to expand your image processing capabilities.

Enrico Giordano holds the dual roles of CEO and CTO at Makarenalabs, showcasing his diverse skill set as both a developer and designer. He boasts an academic background in Computer Science, having graduated and then further honed his expertise in Embedded Systems at the University of Verona in 2017. His academic journey was marked by multiple scholarships at the University of Verona, underscoring his excellence in the field. Enrico's professional career has been deeply intertwined with MakarenaLabs, where he has been contributing since 2016.

Contact us for more system integration solutions that help grow your business.