Mixing Two Image Streams with GStreamer + OpenCV: Pulling and Pushing RTSP

Overview

After working through this example, you should have a basic grasp of:
1. The basic steps for pulling and pushing streams with OpenCV + GStreamer
2. How to use some GStreamer image-compositing plugins
3. Video encoding/decoding and hardware acceleration on the Jetson TX2 NX
4. How to pin a thread to a CPU core on Linux

The goal: on a Jetson TX2 NX, pull frames from the camera, composite another image on top of them, and push the mixed stream to an RTSP server.

The overlay must preserve transparency, but OpenCV's VideoWriter cannot write four-channel data, and the hardware-accelerated compositing plugin cannot strip the alpha channel. A software plugin that keys out a specified color in system memory would work, but it is far too slow.

So the approach is to pull the camera frames with VideoCapture and blend the overlay image onto each frame manually.

The blended frames are then pushed with VideoWriter.
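
The manual blend boils down to standard per-pixel alpha compositing. Below is a minimal sketch of that math on raw interleaved buffers (a BGRA overlay blended onto a BGR frame); the function name and buffer layout are illustrative, not from the original code. In practice you would run the same loop over `cv::Mat::data` of a CV_8UC4 overlay (loaded with `cv::imread(..., cv::IMREAD_UNCHANGED)`) and the CV_8UC3 camera frame.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Blend a BGRA overlay onto a BGR frame in place (same width and height).
// For each channel: out = (a * overlay + (255 - a) * frame) / 255,
// where a is the overlay pixel's alpha in [0, 255].
void blend_bgra_over_bgr(const std::uint8_t* overlay_bgra,
                         std::uint8_t* frame_bgr,
                         std::size_t pixel_count) {
    for (std::size_t i = 0; i < pixel_count; ++i) {
        const std::uint8_t* src = overlay_bgra + i * 4;
        std::uint8_t*       dst = frame_bgr + i * 3;
        unsigned a = src[3];
        for (int c = 0; c < 3; ++c) {
            // Integer approximation with rounding: (a*src + (255-a)*dst) / 255.
            dst[c] = static_cast<std::uint8_t>(
                (a * src[c] + (255u - a) * dst[c] + 127u) / 255u);
        }
    }
}
```

A fully opaque overlay pixel (alpha 255) replaces the frame pixel; a fully transparent one (alpha 0) leaves it untouched.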

To improve efficiency, the capture thread is pinned to a fixed CPU core.
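
The capture thread in the code below pins itself with pthread_setaffinity_np. If you additionally want to raise scheduling priority, pthread_setschedparam with SCHED_FIFO is the usual route on Linux. This is a sketch (the helper names are mine, not from the original); note that SCHED_FIFO typically requires root or CAP_SYS_NICE and fails with EPERM otherwise. Link with -pthread.

```cpp
#include <cassert>
#include <cerrno>
#include <pthread.h>
#include <sched.h>

// Pin the calling thread to a single CPU core. Returns 0 on success.
int pin_current_thread(int cpu) {
    cpu_set_t cpuset;
    CPU_ZERO(&cpuset);
    CPU_SET(cpu, &cpuset);
    return pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &cpuset);
}

// Request real-time FIFO scheduling for the calling thread.
// Usually needs root or CAP_SYS_NICE; returns EPERM otherwise.
int raise_current_thread_priority(int priority) {
    sched_param param{};
    param.sched_priority = priority;
    return pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
}
```

Pinning alone already helps here: it keeps the capture loop from migrating between cores and losing cache locality.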

Procedure:
First, stand up an RTSP server; the rtsp-simple-server project makes it easy to deploy one on a LAN.

After installing GStreamer, install the RTSP plugin package as well, since the pipelines below use the rtspclientsink element:
sudo apt-get update
sudo apt-get upgrade
sudo apt install gstreamer1.0-rtsp
You can confirm the element is available afterwards with gst-inspect-1.0 rtspclientsink.

Below is the push-stream side, implemented with VideoWriter.

// C++ implementation
#include "rtsp_push_stream.h"

RtspPushStream::RtspPushStream() : active_(true) {}

RtspPushStream::~RtspPushStream() {}
void RtspPushStream::start() {
 
    // Alternative pipeline: encode to an MP4 file instead of pushing RTSP
    // (replace the filesink location with an output path).
    string appsrcpipeline =
        "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw,format=RGBA ! nvvidconv ! nvv4l2h264enc "
        "! h264parse ! qtmux ! filesink location=(unknown)  sync=false";

    // Using rtspclientsink requires the gstreamer1.0-rtsp package:
    // sudo apt-get update
    // sudo apt-get upgrade
    // sudo apt install gstreamer1.0-rtsp

 
    std::string pipeline_useglvideomixer =
        "appsrc "
        "! video/x-raw, format=BGR "
        "! videoconvert "
        "! video/x-raw,format=(string)RGBA, width=(int)1024, height=(int)600 "
        "! queue2 "
        "! alpha method=blue "
        "! glvideomixer name=compos sink_0::zorder=1 sink_0::alpha=0.85 sink_1::alpha=1 sink_1::zorder=0 "
        "sink_1::width=1024 sink_1::height=600 "
        "! nvvidconv "
        "! video/x-raw(memory:NVMM), format=(string)I420, width=(int)1024, height=(int)600 "
        "! nvv4l2h264enc "
        "! rtspclientsink location=rtsp://192.168.20.99:8554/my_pipeline "
        "nvarguscamerasrc "
        "! video/x-raw(memory:NVMM),format=(string)NV12, width=(int)1640, height=(int)1232, framerate=(fraction)25/1 "
        "! queue2 "
        "! nvvidconv left=0 right=1640 top=136 bottom=1096 "
        "! compos. ";

    // The two nvcompositor inputs must share the same image format and memory
    // type; fill the overlay's transparent regions with white (255,255,255).
    std::string pipeline_nvcompositor =
        "appsrc "
        "! video/x-raw, format=BGR "
        "! videoconvert "
        "! video/x-raw,format=(string)RGBA, width=(int)1024, height=(int)600 "
        "! nvvidconv "
        "! queue2 "
        "! nvcompositor name=compos sink_0::zorder=1 sink_0::alpha=0.5 "
        "sink_1::alpha=1 "
        "sink_1::zorder=0 sink_1::width=1024 sink_1::height=600 "
        "! nvvidconv "
        "! nvv4l2h264enc "
        "! rtspclientsink location=rtsp://192.168.20.99:8554/my_pipeline "
        "nvarguscamerasrc "
        "! video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1640, height=(int)1232,framerate=(fraction)25/1 "
        "! nvvidconv left=0 right=1640 top=136 bottom=1096 "
        "! video/x-raw,format=(string)RGBA, width=(int)1024, height=(int)600 "
        "! videobalance brightness=0.3 "
        "! nvvidconv "
        "! queue2 "
        "! compos. ";

    video_writer_.open(pipeline_nvcompositor, cv::CAP_GSTREAMER, 0, 25, cv::Size(1024, 600));
    mat_          = cv::imread("test.jpg");
    write_thread_ = make_shared<thread>(&RtspPushStream::run, this);
    write_thread_->join();
}

void RtspPushStream::run() {
    while (active_) {
        // Continuously push a black placeholder frame.
        space_mat_ = cv::Mat(600, 1024, CV_8UC3, cv::Scalar(0, 0, 0));
        video_writer_.write(space_mat_);
        std::this_thread::sleep_for(std::chrono::milliseconds(3));
    }
    }
}

void RtspPushStream::end() {
    active_ = false;
    //LOG::info("RtspPushStream::end()!");
}

void RtspPushStream::start_capture() {

    std::string pipeline_camera_capture =
        " nvarguscamerasrc "
        "! video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1640, "
        "height=(int)1232,framerate=(fraction)30/1 "
        "! nvvidconv  left=0 right=1640 top=136 bottom=1096 "
        "! video/x-raw,format=(string)I420, width=(int)1024, height=(int)600 "
        "! videoconvert "
        "! video/x-raw,format=(string)BGR "
        "! appsink";

    video_capture_.open(pipeline_camera_capture, cv::CAP_GSTREAMER);
    if (!video_capture_.isOpened()) {
        //LOG::error("Failed to open VideoCapture");
        return;
    }
    mat_ = cv::imread("test.jpg");
    std::string pipeline_video_writer =
        "appsrc "
        "! video/x-raw, format=BGR "
        "! videoconvert "
        "! nvvidconv "
        "! nvv4l2h264enc "
        "! rtspclientsink location=rtsp://192.168.20.99:8554/my_pipeline";

    video_writer_.open(pipeline_video_writer, cv::CAP_GSTREAMER, 0, 20, cv::Size(1024, 600));
    cap_thread_ = make_shared<thread>(&RtspPushStream::run_capture, this);
    cap_thread_->detach();
}

void RtspPushStream::run_capture() {
    bool use_affinity = true;
    int  cpu          = 1;
    if (use_affinity) {
        cpu_set_t cpuset;
        CPU_ZERO(&cpuset);
        CPU_SET(cpu, &cpuset);

        pthread_t current_thread = pthread_self();
        int       ret            = pthread_setaffinity_np(current_thread, sizeof(cpu_set_t), &cpuset);
        if (ret != 0) {
            //LOG::error("run_capture pthread_setaffinity_np failed, {}", ret);
        } else {
            //LOG::info("run_capture pthread_setaffinity_np success, bind recv thread to cpu 1");
        }
    }

    cv::Mat src;
    while (active_) {
        video_capture_ >> src;
        // Apply your custom blending here: composite the overlay (mat_) onto src.
        // For debugging, you can draw the current time onto src with OpenCV.
        video_writer_.write(src);
        src.release();
    }

    //LOG::info("RtspPushStream::run_capture() end!");
}

// Header file
#ifndef RTSP_PUSH_STREAM_H
#define RTSP_PUSH_STREAM_H
#include <chrono>
#include <iostream>
#include <list>
#include <memory>
#include <mutex>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/videoio/videoio.hpp>
#include <string>
#include <thread>
#include <vector>

 
using namespace std;
class RtspPushStream {
public:
    RtspPushStream();
    ~RtspPushStream();

    void start();
    void write_image(cv::Mat image);

    void run();
    void start_capture();
    void run_capture();
    void end();

private:
    mutex                mutex_;
    cv::VideoWriter      video_writer_;
    list<cv::Mat>        img_mats_;
    bool                 active_;
    cv::Mat              mat_;
    cv::Mat              space_mat_;
    shared_ptr<thread>   cap_thread_;
    shared_ptr<thread>   write_thread_;
    cv::VideoCapture     video_capture_;
    std::vector<cv::Mat> push_frames_;
    std::mutex           push_frames_lock_;
};
#endif
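
One refinement worth noting: run() above paces the writer with a fixed 3 ms sleep, which does not correspond to the 25 fps the VideoWriter was opened with, and sleep-per-iteration lets timing drift accumulate. A steadier alternative (a sketch of mine, not from the original) sleeps until an absolute deadline each frame:

```cpp
#include <cassert>
#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;

// Interval between frames for a given frame rate.
constexpr std::chrono::microseconds frame_interval(int fps) {
    return std::chrono::microseconds(1'000'000 / fps);
}

// One pacing step: advance the previous deadline by one frame interval,
// sleep until that absolute time point, and return the new deadline.
Clock::time_point pace_to_next_frame(Clock::time_point deadline, int fps) {
    deadline += frame_interval(fps);
    std::this_thread::sleep_until(deadline);
    return deadline;
}
```

In run(), the deadline would start at Clock::now(), and each loop iteration would do video_writer_.write(frame); deadline = pace_to_next_frame(deadline, 25);. Because the deadline advances by a fixed amount rather than restarting from "now", jitter in one iteration does not push all later frames back.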


Once the stream is being pushed, pull it with a video player such as PotPlayer (64-bit) or VLC to see the result.
