My company recently purchased an M300 RTK system complete with the L1 and P1 payloads and the D-RTK 2 base station. I am a land surveyor; I have been performing photogrammetry with DJI systems (Phantoms and Inspires) for over 5 years with great results, and I have over 20 years of photography experience. I was very excited to get my hands on the P1 and to start utilizing LiDAR with the L1, but I have encountered some severe issues with the implementation of these two systems, as well as with the DJI Pilot flight controller app, and I am hoping that bringing these to the forum will get DJI to take note and make changes in future firmware updates. If you share any of these opinions, please post your thoughts here so they know I am not alone in my thinking! I'll list my concerns below by payload, with an explanation/description under each numbered point.
Zenmuse L1
1. IMU Calibration is not available during a Terrain Following mission.
I am based in western Washington State, USA, and the terrain here is never flat. Terrain Following is an absolute necessity for nearly every project we fly to ensure proper coverage and overlap. DJI states that an IMU calibration should be performed every 1000 m or 100 seconds of flight to ensure the best results, but calibration is not possible during Terrain Following missions. I can perform the calibration before and after the flight plan, but there is no option to pause a mission covering a large area and recalibrate partway through. There should be an option to enable IMU calibration during Terrain Following mode, even if it requires manually pausing the mission, calibrating, and then resuming.
2. Terrain following is not available during a Corridor Mapping mission.
Again, Terrain Following is a necessity, not a nice-to-have. Excluding it from corridor missions is insane and completely removes the Corridor Mapping feature from my use, and I'm sure from many other users' as well. Without the corridor mission type, I have to manually create separate flight paths for each segment of a corridor that runs at a different angle, which leaves me with multiple point clouds and trajectory files and no good way to combine them into one cohesive point cloud and trajectory for editing and processing as a whole. Even if I load all of the files into DJI Terra and process them together, Terra spits them out as separate files, which complicates processing in TerraScan since it has no option to import more than one file set at a time.
3. LiDAR data is collected at all times during a Terrain Following mission, even during turns.
During an IMU calibration flight, the L1 stops collecting data during the turns and only collects on the straight portions of the flight lines. This is not the case for Terrain Following missions: LiDAR data is collected during the entire flight, from the start point to the end point. This leads to extremely noisy and inaccurate point clouds and forces me to manually filter out all the data collected during turns and any navigational flight lines. There should be an option to collect data only on the planned flight lines rather than the entire route. This would greatly improve data accuracy and reduce noise in the raw point cloud.
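As a stopgap, the turn data can be culled after the fact from the trajectory's heading rate: flag any span where the heading is changing faster than some threshold, then drop points whose GPS time falls in those spans. A rough sketch in Python — the 3°/s threshold and the toy trajectory are my own assumptions for illustration, not anything from DJI's workflow:

```python
import numpy as np

def straight_line_mask(times, headings, max_rate_deg_s=3.0):
    """Return a boolean mask that is True where the platform is flying
    straight (heading changing slower than max_rate_deg_s), False in turns.
    times: GPS timestamps in seconds; headings: trajectory heading in degrees."""
    # Unwrap so a 359 -> 1 degree crossing doesn't look like a 358-degree jump
    unwrapped = np.degrees(np.unwrap(np.radians(headings)))
    rate = np.abs(np.gradient(unwrapped, times))
    return rate < max_rate_deg_s

# Toy trajectory: 10 s straight, a 5 s turn at 18 deg/s, 10 s straight
t = np.arange(0.0, 25.0, 0.5)
h = np.where(t < 10, 90.0, np.where(t < 15, 90.0 + 18.0 * (t - 10), 180.0))
mask = straight_line_mask(t, h)  # True on the straight legs, False in the turn
```

The resulting straight-leg time spans can then be used to filter the exported point cloud by GPS time in whatever LiDAR toolchain you have on hand.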
4. No option to set forward overlap, or to use DNG instead of JPEG format images, while using the LiDAR "RGB coloring" mode.
Not being able to set a desired forward overlap is a crippling miss. If there is ever a need to fall back on photogrammetry, we should be able to set this parameter to ensure we capture enough data for quality photogrammetric processing. As it stands, I cannot even find documentation of the default value. This should be added as a slider in the advanced settings. Furthermore, DNG files are a necessity for quality photogrammetry with a sensor like the L1's 1" sensor, given their increased dynamic range compared to the overly compressed and processed JPEGs. Not being able to select DNG as an option is very disappointing and should be addressed.
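For anyone trying to work around the missing setting, the shot interval needed for a given forward overlap follows directly from the along-track ground footprint. A sketch with illustrative numbers — the sensor dimension and focal length below are assumptions for the example, not the L1's published specs:

```python
def photo_interval_s(altitude_m, speed_m_s, overlap_frac,
                     sensor_along_mm, focal_mm):
    """Seconds between exposures needed to achieve a given forward overlap.
    Along-track ground footprint = altitude * sensor_dimension / focal_length;
    photo spacing = footprint * (1 - overlap)."""
    footprint = altitude_m * sensor_along_mm / focal_mm
    spacing = footprint * (1.0 - overlap_frac)
    return spacing / speed_m_s

# Hypothetical example: 100 m AGL, 10 m/s, 80% forward overlap,
# 8.8 mm along-track sensor dimension, 8.8 mm focal length
interval = photo_interval_s(100.0, 10.0, 0.80, 8.8, 8.8)  # -> 2.0 s
```

Knowing (or reverse-engineering) the interval the firmware actually uses would at least let you back out the effective default overlap for a given altitude and speed.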
I realize that these issues are all primarily flight-controller based, but they only apply to the L1 payload.
5. The sensor is scanning off center by about 5%, implying it is not at a true 90-degree/nadir (straight-down) orientation.
After processing the point cloud and isolating the individual flight lines, I have noticed that the right side (when the drone is traveling forward) scans farther than the left. For example, at an altitude of 250' AGL, the swath right of the flight line extends to about 245 feet from the flight line, while the swath to the left extends only to about 209 feet. This implies that the sensor is not oriented at true nadir and is instead pointing slightly to the right. This needs to be addressed, as it affects overlap. It should be noted that during a mission set to 70% overlap, I AM seeing 70% overlap on the short sides and over 80% overlap on the long sides of the swath. This would imply that the off-center orientation is being accounted for in the overlap settings, but flight time would be used more efficiently if the swath were centered and overlap were identical on both sides of the flight lines. I am curious whether any other L1 users have noticed a similar situation, or if this is specific to my sensor/payload.
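For what it's worth, those swath measurements pin down the apparent tilt. Assuming the scanner's field of view is symmetric about its own axis (my assumption), the numbers work out to roughly a 2.3-degree rightward lean:

```python
import math

# Measurements from above: at 250 ft AGL the swath extends 245 ft to the
# right of the flight line but only 209 ft to the left.
agl = 250.0
right_ft, left_ft = 245.0, 209.0

half_right = math.degrees(math.atan(right_ft / agl))  # angle to right swath edge
half_left = math.degrees(math.atan(left_ft / agl))    # angle to left swath edge

# If the FOV is symmetric about the sensor axis, the axis sits right of
# nadir by half the difference between the two edge angles.
tilt_deg = (half_right - half_left) / 2.0  # roughly 2.3 degrees
```

A repeatable offset of that size across missions would point at a mount/boresight calibration issue rather than random noise.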
Zenmuse P1
1. EV (exposure) meter does not correspond to the DNG files. There is no replacement for manual camera settings when it comes to photogrammetry; shooting manual instead of automatic is often the difference between quality, repeatable results and questionable, unreliable, inaccurate 3D reconstructions and outputs. Being able to accurately monitor your exposure as it applies to your raw DNG files is beyond critical. For some reason the EV meter corresponds to the JPEG files but not the DNG files. When I exposed based on the EV meter and the on-screen preview, my settings left me underexposed by about -0.7 EV. However, when I downloaded my DNG files, they were overexposed by nearly 1.5 EV, meaning there is roughly a 2-stop difference between the DNG and JPEG files. This makes no sense and is not something I have encountered in any of the many Phantoms, Inspires, and even Mavics I have flown for years, nor in any camera I have used in the past 20 years. Typically the exposures of DNG and JPEG files are nearly identical, with the main differences being the DNG's increased dynamic range, lack of compression, and flatter color profile. I have never seen such a drastic exposure difference, which leads me to believe some sort of darkening algorithm is being applied to the JPEG files during internal processing. Why? This should not be the case, and it leaves me with no way to correctly set my exposure for DNG files.
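To put a number on the spread between the two renderings, using my measurements above:

```python
# From my measurements: exposing to the meter leaves the JPEG about
# -0.7 EV under, while the DNG from the same exposure comes out about
# +1.5 EV over.
jpeg_ev = -0.7
dng_ev = +1.5
stop_gap = dng_ev - jpeg_ev  # total spread between JPEG and DNG renderings
```

That 2.2-stop gap is far beyond normal JPEG-vs-raw rendering differences, which is why an in-camera JPEG tone adjustment seems like the likeliest explanation.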
2. All smart camera features have been stripped from the system. There are no options to use any of the photo modes found on DJI's consumer offerings. My typical workflow includes shooting a 360 panorama above every project site to give its surroundings context, and this mode has not been included on the P1 for some reason. Why would DJI strip features found in consumer-level drones out of an Enterprise system that we pay three times (or more) the price for? These photo modes should be added; if anything, we should get more options than the consumer-level offerings.
3. Camera aperture has been limited to only a few options. When setting the f-stop of the lens, it jumps from f2.8 to f4 to f5.6. What happened to all the stops in between? Even the L1 offers f3, f3.2, f3.5, and so on. Why would this not be the case on such a vastly superior camera system?
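For reference, the values being skipped are the standard third-stop sequence: f-numbers advance by a factor of √2 per full stop, so third stops fall at 2^(n/6). A quick check:

```python
# Third-stop f-numbers from f2.8 (n = 9) to f4 (n = 12): 2**(n/6)
third_stops = [2 ** (n / 6) for n in range(9, 13)]
# ≈ [2.83, 3.17, 3.56, 4.00], conventionally marked f2.8, f3.2, f3.5, f4
```

So restoring third-stop control is just exposing two extra values per full stop, not a hardware limitation of the lens.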
4. Video settings have been stripped down to only two options. Setting up for a flyover inspection of a site, I was baffled to find I could only select 4K/30fps or 1080p/30fps. Why would DJI remove the ability to use 720p and 2.7K resolutions? Why remove the industry-standard 24fps and 25fps frame rates, and why not offer 60fps at the lower resolutions? It doesn't make sense to have such an awesome camera sensor and strip away all the features that would make it useful.
I'm sure I will encounter more issues, but this is what I have for now. Please let me know your thoughts, or if I have missed a way to accomplish any of these things. I recognize that I am new to these two systems, so if there is a way to accomplish any of this, I would love to hear your ideas. Thank you!