Hi, I am working on a DeepStream pipeline, but I am having trouble decoding an RTSP video stream. I am building a pipeline to run AI on multiple live cameras using DeepStream 6.0.1, a Tesla T4, and the DeepStream Python APIs.

My custom pipeline worked well with the majority of cameras. However, I found a camera that could not be decoded by my pipeline (see the file attached to that thread for further info): thanks to the help of the Nvidia forum, it was discovered that the camera sent timestamps in a format that could not be digested by nvv4l2decoder. The solution proposed there was to use uridecodebin to decode that specific camera. However, uridecodebin cannot decode other streams that were previously working fine with my pipeline.

This stream can be decoded with VLC, ffplay, and OpenCV. All the video streams used for testing can be decoded fine with OpenCV, or with a CPU-based pipeline (see the file attached here). Therefore, I assume there must be a way to decode them using NVDEC as well. My goal is to develop a pipeline that can decode any RTSP video using the NVDEC chip.

To reproduce the issue you can use:

gst-launch-1.0 uridecodebin uri=xxx ! filesink location=test-nvv4l2decoder.mp4

I have already shared the RTSP video stream privately.

Reply:

Hi, the following will work:

gst-launch-1.0 --gst-debug=v4l2videodec:5 rtspsrc location=$RTSP_STREAM protocols=tcp latency=1000 drop-on-latency=1 timeout=5000000 ! rtph264depay ! h264parse ! nvv4l2decoder cudadec-memtype=2 num-extra-surfaces=1 ! queue leaky=2 max-size-buffers=1 ! nvvideoconvert nvbuf-memory-type=3 output-buffers=1 ! capsfilter caps=video/x-raw,format=RGBA ! fakesink
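Since the same NVDEC pipeline description is reused for every camera, one way to keep it manageable is to factor it into a small shell helper that builds the pipeline string for a given RTSP URI. This is a hypothetical sketch (the function name and structure are not from the thread); running the resulting command of course requires GStreamer and DeepStream on the host:

```shell
#!/bin/sh
# Hypothetical helper: builds the gst-launch pipeline description for
# NVDEC decoding of one RTSP camera, as suggested in the reply above.
build_nvdec_pipeline() {
  uri="$1"
  printf '%s' "rtspsrc location=${uri} protocols=tcp latency=1000 drop-on-latency=1 timeout=5000000 ! rtph264depay ! h264parse ! nvv4l2decoder cudadec-memtype=2 num-extra-surfaces=1 ! queue leaky=2 max-size-buffers=1 ! nvvideoconvert nvbuf-memory-type=3 output-buffers=1 ! capsfilter caps=video/x-raw,format=RGBA ! fakesink"
}

# Usage (requires GStreamer + DeepStream installed):
#   gst-launch-1.0 --gst-debug=v4l2videodec:5 $(build_nvdec_pipeline "$RTSP_STREAM")
```

Keeping the pipeline in one place makes it easier to swap in uridecodebin for the cameras whose timestamps nvv4l2decoder cannot digest, while leaving the working cameras on the faster explicit pipeline.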