# DLC live performance (frames per second)?
## Category: Usage & Issues
---
### Original Post by [RaymondA](https://forum.image.sc/u/RaymondA) (August 16, 2021)
Hi everyone,
I am encountering a problem with DLC-live, where it appears that fewer frames are being processed than expected. According to the following article [https://elifesciences.org/articles/61909](https://elifesciences.org/articles/61909), I should have no issues processing 100+ frames per second at 320x240 resolution when using a 1080Ti and MobileNet.
However, I have only been getting ~65 frames per second using my own method of counting frames, which may not be accurate: I placed a counter in the `display_frame` function of the `dlclivegui.py` script, on the assumption that this function runs once for every frame and is the best place to grab the locations of the labels (the data I am really after).
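For concreteness, the counter is roughly along these lines (an illustrative sketch only, not the exact edit to `dlclivegui.py`; the helper name and one-second reporting window are my own):

```python
import time

# Rough sketch of a per-frame counter: call it once per displayed frame
# and it prints an FPS estimate about once per second.
_frames = 0
_t_last = time.perf_counter()

def count_frame():
    global _frames, _t_last
    _frames += 1
    now = time.perf_counter()
    if now - _t_last >= 1.0:
        print(f"{_frames / (now - _t_last):.1f} FPS")
        _frames = 0
        _t_last = now
```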
Is there an internal frame rate calculation I could be using instead?
And should I be accessing the data for the labelled points somewhere else?
The most frustrating part is that no matter what changes I make to the network, augmentation method, or number of labels, the frames per second do not change. The only factor that has changed my frame rate so far is the resolution of the video being analyzed.
I have been trying to find the problem for quite some time now and was hoping to get some new ideas.
I have two setups, both with DeepLabCut and DLC-live installed. Each computer has different hardware and software, but they both underperform: the newest setup, with a 3080, CUDA 11.2, and tensorflow-gpu 2.5, only analyzes ~70 frames per second, so something is definitely wrong. If anyone has any ideas about what the source could be or how to troubleshoot this problem, that would be greatly appreciated.
Here are the DLC-live logs from when I initialize either of my DeepLabCut models.


---
### Comment by [RaymondA](https://forum.image.sc/u/RaymondA) (August 22, 2021)
> should I be accessing the data for the labelled points somewhere else?
So I looked into it and the answer to this question was yes.
As far as I can tell, the `display_frame` method does not run for every frame processed by DLC-live as I had assumed, so I am now pulling the label data from the `_pose_loop` method inside the `pose_process` script, as shown in this post.
This has resolved my issue and I now get much closer to the expected frames per second.
---
### Comment by [MWMathis](https://forum.image.sc/u/MWMathis) (August 23, 2021)
Hi [@RaymondA](https://forum.image.sc/u/RaymondA) - indeed, the frame size is the only thing that affects the speed (aside from backbone choice), as we outline in the eLife paper :).
I would use the speed testing code we provide to get the FPS; you likely are slowing it down with your counter. Did you use our GUI to run DLC-live? [GitHub - DeepLabCut/DeepLabCut-live-GUI: GUI to run DeepLabCut on live video feed](https://github.com/DeepLabCut/DeepLabCut-live-GUI)
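Until you find the official benchmark, a rough way to sanity-check raw inference speed, independent of the GUI and of any display or packet code, is to time `get_pose` directly with the public `dlclive` API. A minimal sketch, not the official speed testing script; the model and video paths are placeholders:

```python
import time
import cv2
from dlclive import DLCLive

model_path = "path/to/exported_model"   # exported DLC model directory (placeholder)
video_path = "path/to/test_video.avi"   # any test video (placeholder)

dlc = DLCLive(model_path)

cap = cv2.VideoCapture(video_path)
ok, frame = cap.read()
dlc.init_inference(frame)               # first inference is slow (loads the graph)

n_frames = 0
t0 = time.perf_counter()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    pose = dlc.get_pose(frame)          # (n_keypoints, 3) array: x, y, likelihood
    n_frames += 1

elapsed = time.perf_counter() - t0
print(f"{n_frames} frames in {elapsed:.1f} s -> {n_frames / elapsed:.1f} FPS")
```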
---
### Comment by [RaymondA](https://forum.image.sc/u/RaymondA) (August 24, 2021)
Hello [@MWMathis](https://forum.image.sc/u/MWMathis),
You’re right, I shouldn’t have expected changes from altering those settings, but I was getting exactly the same performance from ResNet and various MobileNet models, which I should have made clearer, as that was the key giveaway that something else was the issue and DLC-live itself was operating fine.
I haven’t managed to locate the speed testing code or find it in the documentation, but I’ll keep looking, as this will be the best way to test moving forward. Thanks for letting me know about it.
All the modifications I have made to DLC-live are shown in the linked post. The FPS calculation is done in Unity, which I didn’t mention earlier to keep the post on topic; it is based on how often Unity receives a packet (label x and y values) from DLC-live. Since a packet should be sent every time a frame is processed, I thought it was unlikely to be the source of the problem. Now that this code is located in the `_pose_process` method, there shouldn’t be any more issues, unless you know of a better location to access the label data.
And yes, I have been using the GUI for both DLC and DLC-live to do everything; so far it has been great and a much better alternative to the previous software.
---
### Comment by [MWMathis](https://forum.image.sc/u/MWMathis) (August 26, 2021)
Here you go! [GitHub - DeepLabCut/DLC-inferencespeed-benchmark: A database of inference speed benchmark results](https://github.com/DeepLabCut/DLC-inferencespeed-benchmark)
---
### Comment by [nishata24](https://forum.image.sc/u/nishata24) (November 1, 2022)
Hi, I am an undergraduate student trying to use DeepLabCut-Live to conduct pose estimation on a video in real time. I have created a Python script to use DeepLabCut-Live; however, I am running into an error while running this script. Below is the error:
```
2022-11-01 14:26:26.812311: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
...
cv2.error: OpenCV(4.6.0) /io/opencv/modules/videoio/src/cap_ffmpeg.cpp:192: error: (-215:Assertion failed) image.depth() == CV_8U in function 'write'
```
This is the python script I ran: [DLC-Live_Script.txt (2.0 KB)](https://forum.image.sc/uploads/short-url/dtGoMyAsmQhLiI4Q01lERwyAvRl.txt).
I would appreciate any feedback that could help solve this issue. Thank you very much.
---
### Comment by [jeylau](https://forum.image.sc/u/jeylau) (November 8, 2022)
[@nishata24](https://forum.image.sc/u/nishata24), your issue would have gained more visibility if you had opened a new post on the forum.
Anyway, you can typically ignore these TensorFlow warnings. As for the OpenCV error, I looked at your script: it happens because you try to pipe poses (numpy arrays) through the `VideoWriter`, which is only meant to write images. To make it work, you could use `skimage.draw.disk` as we do in DeepLabCut [here](https://github.com/DeepLabCut/DeepLabCut/blob/81244bd851dc3a5c59b21cad4b0ab9c169bb07b8/deeplabcut/utils/make_labeled_video.py#L163-L180), draw the keypoints on the image, and then write that image to the video.
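A minimal sketch of that approach (the output path, frame size, frame rate, and colour are placeholders; it assumes `pose` is the usual `(n_keypoints, 3)` array of x, y, likelihood returned by DLCLive):

```python
import cv2
import numpy as np
from skimage.draw import disk

# Placeholder writer settings; match them to your camera frames.
writer = cv2.VideoWriter(
    "labeled.avi",
    cv2.VideoWriter_fourcc(*"MJPG"),
    30,            # assumed frame rate
    (640, 480),    # assumed (width, height)
)

def write_labeled_frame(frame, pose, radius=4):
    """Draw each keypoint as a filled circle, then write the 8-bit frame."""
    out = frame.copy()
    for x, y, likelihood in pose:                     # one row per keypoint
        rr, cc = disk((int(y), int(x)), radius, shape=out.shape[:2])
        out[rr, cc] = (0, 0, 255)                     # red in BGR
    writer.write(out.astype(np.uint8))                # VideoWriter needs CV_8U images
```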
---