System integrations highlight: the AMD KRIA KR260 and Sony Camera Integration Tutorial


#techtutorials #techtips #techhacks #embeddedsystems #diyelectronics #edgeai #embedded #roboticsindustry #roboticsolutions
November 28, 2023

Introduction

Combining the KR260 board's machine vision capabilities with the Sony IMX547 camera creates a powerful platform for image processing tasks.

In this tutorial, we'll revisit the setup process detailed in the original documentation, offering a step-by-step guide to streamline your installation.

Installation of Ubuntu on KR260

Start by downloading the official Ubuntu image for the KR260, which is available here.

Next, download Balena Etcher, which is the recommended software for flashing the image and is compatible with Windows, Linux, and macOS. Follow the on-screen instructions in Balena Etcher, choosing the Ubuntu image you downloaded to flash it onto your microSD card.
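
If you prefer the command line over a GUI, writing the image with dd is a common alternative. This is only a sketch: the filename below is a placeholder for whatever you downloaded, and /dev/sdX must be replaced with your actual microSD card device.

# CAUTION: double-check the of= device; dd will overwrite it without asking.
# <kr260-ubuntu-image>.img.xz is a placeholder for the file you downloaded.
xzcat <kr260-ubuntu-image>.img.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync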

After your microSD card has been successfully flashed with the Ubuntu image, you can move on to the subsequent step.

Setting Up The Environment On KR260

After booting up the board, you'll need to establish a connection to your KR260. You can do this either through the serial console over the board's USB port or via SSH (for SSH, make sure you know the board's IP address on your network).
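
For example, a typical SSH login might look like the following; the IP address is a placeholder for your board's actual address, and the default username on the Kria Ubuntu image is typically ubuntu:

# replace 192.168.1.50 with your board's actual IP address
ssh ubuntu@192.168.1.50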

Once connected, run the following commands to set up your environment:


# install the basic configuration for KRIA App store
sudo snap install xlnx-config --classic --channel=2.x
sudo xlnx-config.sysinit
sudo add-apt-repository ppa:xilinx-apps
sudo add-apt-repository ppa:ubuntu-xilinx/sdk
sudo apt update
sudo apt upgrade

# install docker
sudo groupadd docker
sudo usermod -a -G docker $USER
sudo apt install docker

# install the XRT kernel driver, the machine vision camera firmware, and pull the demo Docker image
sudo apt install xrt-dkms
sudo xmutil getpkgs
sudo apt install xlnx-firmware-kr260-mv-camera
sudo docker pull xilinx/mv-defect-detect:2022.1
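
As a quick sanity check before continuing, you can confirm that Docker installed correctly and that the machine vision firmware package is now visible to xmutil:

# verify the Docker client is installed
docker --version
# list the accelerated application firmware available to xmutil;
# kr260-mv-camera should appear in the output after the steps above
sudo xmutil listapps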

Install Additional Packages

To fully utilize GStreamer, additional packages are required. Execute the following commands to install these necessary packages:

# install Gstreamer packages
sudo apt-get install gstreamer1.0*
sudo apt install ubuntu-restricted-extras
sudo apt install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev

# if you just want to download prebuilt OpenCV (and NumPy) wheels for Python
mkdir -p opencv/opencv-python-master
cd opencv/opencv-python-master
wget https://s3.eu-west-1.wasabisys.com/xilinx/kr260/numpy-1.26.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
wget https://s3.eu-west-1.wasabisys.com/xilinx/kr260/opencv_python_headless-4.6.0%2B4638ce5-cp310-cp310-linux_aarch64.whl

# if you want to compile OpenCV on the KR260 instead
OPENCV_VER="master"
TMPDIR=opencv
mkdir $TMPDIR
# Build and install OpenCV from source.
cd "${TMPDIR}"
git clone --branch ${OPENCV_VER} --depth 1 --recurse-submodules --shallow-submodules https://github.com/opencv/opencv-python.git opencv-python-${OPENCV_VER}
cd opencv-python-${OPENCV_VER}
export ENABLE_CONTRIB=0
export ENABLE_HEADLESS=1
# we want GStreamer support enabled.
export CMAKE_ARGS="-DWITH_GSTREAMER=ON"
# generate the wheel package
python3 -m pip wheel . --verbose
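
Either way, it's worth verifying that GStreamer itself is healthy and, once the OpenCV wheel has been installed via the setup script further below, that OpenCV was built with GStreamer support:

# confirm GStreamer is installed and its plugins are discoverable
gst-inspect-1.0 --version
gst-inspect-1.0 videotestsrc

# after the OpenCV wheel is installed, this should report "GStreamer: YES"
python3 -c "import cv2; print(cv2.getBuildInformation())" | grep -i gstreamer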

Create The Docker Setup Script

To avoid repeating the same series of commands after every docker run execution, you can create a setup script. Create a file named setup_docker.sh in the /home/ubuntu directory with the following contents:

# configure the IMX547 sensor output: 10-bit raw Bayer (SRGGB10), 1920x1080 at 60 fps
media-ctl -d /dev/media0 -V "\"imx547 7-001a\":0 [fmt:SRGGB10_1X10/1920x1080 field:none @1/60]"
# set the DisplayPort output mode to 1920x1080@60 and disable the global alpha on plane 40
modetest -D fd4a0000.display -s 43@41:1920x1080-60@BG24 -w 40:"alpha":0
modetest -D fd4a0000.display -s 43@41:1920x1080-60@BG24 -w 40:"g_alpha_en":0
# install the OpenCV wheel that was downloaded (or built) earlier
cd opencv/opencv-python-master
python3 -m pip install opencv_python*.whl
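
Any text editor on the board works for creating this file; for example (nano here is just one option, and making the file executable is optional since the Docker launch command below already does it):

# create the script and paste in the lines above
nano /home/ubuntu/setup_docker.sh
# optional: mark it as executable
chmod +x /home/ubuntu/setup_docker.sh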
Great! You're set to begin!

Launching The Docker Container On KR260

Set up the AMD Xilinx accelerated application firmware:

# disable the Ubuntu desktop so the DisplayPort is free for the pipeline, and unload any firmware currently using the PL
sudo xmutil desktop_disable
sudo xmutil unloadapp
# load the machine vision camera firmware
sudo xmutil loadapp kr260-mv-camera

Launch the Docker container with the following command:
 
sudo docker run \
--env="DISPLAY" \
--env="XDG_SESSION_TYPE" \
--net=host \
--privileged \
--volume /tmp:/tmp \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
-v /dev:/dev \
-v /sys:/sys \
-v /etc/vart.conf:/etc/vart.conf \
-v /lib/firmware/xilinx:/lib/firmware/xilinx \
-v /run:/run \
-v /home/ubuntu:/home \
-h "xlnx-docker" \
-it xilinx/mv-defect-detect:2022.1 \
bash -c "cd /home; chmod 777 ./setup_docker.sh; ./setup_docker.sh; bash"
If the terminal appears to "hang" on a warning at this point, just press "Enter" and it will continue.

Verifying Camera Functionality

We can test the output of the camera directly using the Display Port of the KR260:

gst-launch-1.0 v4l2src device=/dev/video0 io-mode=5 ! video/x-raw, width=1920, height=1080, format=GRAY8, framerate=60/1 ! perf ! kmssink bus-id=fd4a0000.display -v
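
If nothing appears on the monitor, it can help to first confirm that the sensor has enumerated as a V4L2 device. A quick check, assuming the v4l2-ctl utility is available (it ships in the v4l-utils package), looks like this:

# list the video devices the kernel has registered
v4l2-ctl --list-devices
# show the formats and resolutions exposed by the camera node
v4l2-ctl -d /dev/video0 --list-formats-ext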

Alternatively, to verify that the camera is working without a connected monitor, use the following command to capture frames into a JPEG file (stop it with Ctrl+C after a moment):

 
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, width=1920, height=1080, format=GRAY8, framerate=60/1 ! queue ! videoconvert ! jpegenc ! filesink location=output.jpeg


This GStreamer pipeline is designed to capture video from a V4L2 (Video for Linux 2) device located at /dev/video0, processing it through these stages:

v4l2src: Retrieves video frames from the defined V4L2 device (/dev/video0).
video/x-raw: Defines the video as a raw format, meaning no compression or alterations are applied.
width=1920, height=1080, format=GRAY8, framerate=60/1: Configures the video's width and height to 1920x1080 pixels, sets the format to GRAY8 (grayscale), and establishes a frame rate of 60 frames per second.
queue: Implements a frame buffer in a queue to facilitate smooth video processing.
videoconvert: Transforms the video format as necessary for compatibility with subsequent elements in the pipeline.
jpegenc: Translates the video frames into JPEG format.
filesink location=output.jpeg: Outputs and saves the processed JPEG frames to a file named "output.jpeg."

Development with KR260 and Machine Vision Camera

Utilizing a GStreamer Pipeline with VART

When developing with the KR260 and a machine vision camera, you can harness the flexibility and power of both GStreamer and OpenCV, and it's recommended to integrate the VART (Vitis AI Runtime) library when you want to run AI inference on the captured frames.

To keep the tests simple, we can use OpenCV as the "frontend" and GStreamer as the "backend"; this keeps the control flow explicit and easy to follow.

Using C++, we have:

cv::VideoCapture videoReceiver("v4l2src device=/dev/video0 ! video/x-raw, width=1920, height=1080, format=GRAY8, framerate=60/1 ! appsink", cv::CAP_GSTREAMER);


This line of code initializes a `cv::VideoCapture` object in C++ using the OpenCV library, specifically for GStreamer-based video capture. Let's break down the parameters:

"v4l2src device=/dev/video0 ! video/x-raw, width=1920, height=1080, format=GRAY8, framerate=60/1 ! appsink": This is a GStreamer pipeline specified as a string. It captures video from a V4L2 device (`/dev/video0`), fixes its properties (width, height, format, and framerate), and hands the frames to OpenCV through the appsink element.
cv::CAP_GSTREAMER: Specifies the backend to be used for capturing video, in this case, GStreamer.


Using Python, we have:

pipeline = "v4l2src device=/dev/video0 ! video/x-raw, width=1920, height=1080, format=GRAY8, framerate=60/1 ! appsink" cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

Python can achieve the same result as the C++ code with less boilerplate; its straightforward syntax and extensive libraries make it an excellent choice for writing a simple frame grabber.

So, let's implement a simple frame grabber in Python:

import cv2

# capture 1920x1080 GRAY8 frames from the camera through GStreamer
input_pipeline = "v4l2src device=/dev/video0 ! video/x-raw, width=1920, height=1080, format=GRAY8, framerate=60/1 ! appsink"
cap = cv2.VideoCapture(input_pipeline, cv2.CAP_GSTREAMER)

# forward the frames straight to the DisplayPort output
output_pipeline = "appsrc ! kmssink bus-id=fd4a0000.display"
out = cv2.VideoWriter(output_pipeline, 0, 60, (1920, 1080), isColor=False)

i = 1000
while i > 0:
    ret, frame = cap.read()
    if ret:
        out.write(frame)
    i -= 1

cap.release()
out.release()



On your monitor connected to the KR260, you should now be able to view the video frames. However, if the playback appears slow, this could be due to the substantial data transfer load between the Programmable Logic (PL) and the Processing System (PS) of the KR260.

To achieve smoother video playback, consider modifying the code as follows:


import cv2

# a queue element on both sides decouples capture from display
input_pipeline = "v4l2src device=/dev/video0 ! video/x-raw, width=1920, height=1080, format=GRAY8, framerate=60/1 ! queue ! appsink"
cap = cv2.VideoCapture(input_pipeline, cv2.CAP_GSTREAMER)

output_pipeline = "appsrc ! queue ! kmssink bus-id=fd4a0000.display"
out = cv2.VideoWriter(output_pipeline, 0, 60, (1920, 1080), isColor=False)

# first, grab all the frames into memory
# (note: 1000 grayscale 1080p frames occupy roughly 2 GB of RAM)
i = 1000
frame_in = []
while i > 0:
    ret, frame = cap.read()
    if ret:
        frame_in.append(frame)
    i -= 1

# then, forward the frames to the DisplayPort in capture order
while frame_in:
    out.write(frame_in.pop(0))

cap.release()
out.release()

In this way, we first grab all the frames into memory and only then forward them to the DisplayPort.

Conclusion

This guide offers a detailed walkthrough for configuring the KR260 board with a machine vision camera, using Docker containers and GStreamer. It lays the groundwork for more advanced development, encompassing both GStreamer pipelines and OpenCV workflows. Dive in, experiment, and incorporate this powerful combination into your machine vision projects to amplify your image processing capabilities.



THE AUTHOR
Enrico Giordano is CEO and CTO at MakarenaLabs, combining the roles of developer and designer. He graduated in Computer Science and went on to specialize in Embedded Systems at the University of Verona in 2017, earning multiple scholarships there along the way. He has been contributing to MakarenaLabs since 2016.

Contact us for more system integration solutions to help grow your business.