Hello everyone!
It says that the inference must be run in real time, but I couldn't find any information about the hardware requirements. What kind of hardware will the model run on? GPU, CPU, memory, throughput (number of videos sent to the model in parallel), etc.? All this information would help in choosing the right model for the task.
To run inference in real time, you'll typically need a high-performance GPU. The specific CPU, memory, and throughput requirements (how many video streams can be processed in parallel) depend on the particular model you're using, so it's best to check the model's technical documentation for detailed hardware specifications.
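One practical way to answer the throughput question yourself is to benchmark single-frame latency on your target hardware and derive how many real-time streams one device can sustain. Here's a minimal sketch; the 30 FPS target and the `dummy_infer` placeholder are assumptions, so swap in your own model call and frame rate:

```python
import time

TARGET_FPS = 30  # assumed real-time target; adjust to match your video source


def dummy_infer(frame):
    # Placeholder for the actual model call; replace with your model's forward pass.
    time.sleep(0.005)  # simulate ~5 ms of inference per frame


def max_parallel_streams(infer, n_warmup=3, n_runs=20):
    """Estimate how many real-time streams one device can serve,
    based on measured single-frame latency."""
    frame = None  # a real benchmark would pass an actual frame tensor/array
    for _ in range(n_warmup):
        infer(frame)
    start = time.perf_counter()
    for _ in range(n_runs):
        infer(frame)
    latency = (time.perf_counter() - start) / n_runs
    # One device handles 1/latency frames/s; each stream needs TARGET_FPS of those.
    return int((1.0 / latency) // TARGET_FPS), latency


streams, latency = max_parallel_streams(dummy_infer)
print(f"~{latency * 1000:.1f} ms/frame -> about {streams} stream(s) at {TARGET_FPS} FPS")
```

Run this on the candidate GPU/CPU with the real model to get a concrete number for the parallel-video question, rather than relying on spec sheets alone.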