I played a bit with the object detection stuff on the Robomaster S1 and found some considerable issues that it would be great if DJI could address. They all boil down to performance problems, and I am pretty sure they should not exist given the CPU this robot uses (you can confirm that by simply watching its built-in person tracking).
1 - A simple while loop that does object detection (by calling vision_ctrl.get_car_detection_info()) runs at 10 FPS at most. This is simply way too slow, and I suspect the bottleneck is not the image detection itself but the fact that the Python interpreter DJI is using is way too slow. You can do better, DJI!
2 - Image detection itself seems to run at an even lower FPS (around 4 or 5). The end result is that, even with the robot moving, two consecutive calls to get_car_detection_info() can return virtually the same information (plus or minus some error). This is what ends up making the robot overshoot when trying to target something at higher speeds.

For 1 there is unfortunately nothing I can do. Just hope that DJI will improve the performance of its interpreter.
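If you want to check the number on your own robot, this is roughly what I ran. It is only a minimal sketch: it assumes time.time() and print() behave normally inside the lab interpreter, and that vision_ctrl and rm_define are the usual globals the lab provides; adjust the detection type if you are tracking something other than cars.

[code]
# Rough benchmark: count how many get_car_detection_info() calls complete per second.
vision_ctrl.enable_detection(rm_define.vision_detection_car)

calls = 0
start = time.time()
while time.time() - start < 10:           # sample for roughly 10 seconds
    vision_ctrl.get_car_detection_info()  # the call whose rate we care about
    calls += 1

elapsed = time.time() - start
# On my robot this stays around 10 calls/s or less.
print('detection loop rate: {:.1f} calls/s'.format(calls / elapsed))
[/code]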
For 2, I used some code that re-reads the detection info when it does not seem to have changed between two iterations. That helps, but it only makes the low rate of object detection even more obvious.
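The workaround looks roughly like this. Again just a sketch: get_fresh_detection() is my own helper name, not part of the API, the change check is a plain comparison of the returned lists, and the retry count and sleep were tuned by feel (it assumes time.sleep() is available in the lab environment).

[code]
def get_fresh_detection(last_info, max_retries=5):
    # Re-read until the result differs from the previous iteration,
    # or give up after a few tries and accept a possibly stale frame.
    info = vision_ctrl.get_car_detection_info()
    for _ in range(max_retries):
        if info != last_info:
            break
        time.sleep(0.02)  # give the detector a chance to produce a new frame
        info = vision_ctrl.get_car_detection_info()
    return info

last_info = None
while True:
    last_info = get_fresh_detection(last_info)
    # ... aim the gimbal / drive the chassis based on last_info here ...
[/code]

The cap on retries matters: without it the loop can block indefinitely if detection stops returning anything new.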
Anyway, if you are writing S1 programs, be warned.