The following FAQs relate to the Dragonfly engine, and the answers apply to the Dragonfly Demo Apps (Android, iOS and Web) and to the Dragonfly Java Application.

Requirements
Features
Integration
Positioning (mapping and localization)
Accuracy
Drones

Requirements

Does Accuware sell monocular or stereoscopic cameras?

No, we do not sell cameras. Dragonfly works with any monocular or stereoscopic camera on the market that meets the specifications described in this page.

Is an internet connection required to use Dragonfly?

No, an internet connection is not strictly required after the camera calibration process. The Dragonfly engine can run on any embedded device/machine/PC that meets the specifications described in this page.

Does the Dragonfly engine make use of the information coming from other sensors (IMU or INS)?

No, currently we provide a location based only on the camera input, and we have no plans to rely on other external sensors given the highly accurate results already provided by the camera input.

Features

Does the Dragonfly engine provide way-finding or routing functionalities (e.g. how to get from point A to point B)?

No, the Dragonfly engine provides the location of the camera, but it does NOT provide routing or way-finding information. We provide the WGS84 coordinates (latitude and longitude) or metric coordinates (distance in meters from a point of origin). On top of that, it is possible to develop navigation and way-finding by making use of the libraries/SDKs offered by one of the many providers available on the market.

Does the Dragonfly engine provide the orientation (yaw, pitch and roll)?

The Dragonfly Java App provides the pitch-yaw-roll of the device according to the aeronautic convention, thus providing the attitude of the camera. We implemented:

  • a view of a 3D animated drone to visualize the attitude (when using the Direct View mode inside the Map tab of the Dragonfly Java App).
  • a view of a 2D compass-like icon to visualize the heading (when using the floor plan view inside the Map tab of the Dragonfly Java App).

Please find here the definition and convention of the attitude angles provided by the Dragonfly engine:

NOTE: when the YAW value gets close to +/-90 degrees, the PITCH and ROLL values diverge a bit. This is due to the conversion from the rotation matrix to Euler angles, which includes an atan2(a,b) operation at some points; this is equivalent to an atan(a/b), which becomes unstable when b gets close to 0 (i.e. when YAW is close to +/-90 degrees). If you are interested in the exact values of YAW, PITCH and ROLL, you can find them inside the Dragonfly CSV files or through the Dragonfly API. At this link you can find a tool to visualize how the rotation matrix changes as the Euler angles change.
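To make this concrete, here is a minimal sketch (assuming the common ZYX yaw-pitch-roll convention, which may not be exactly Dragonfly's convention) of how Euler angles are typically extracted from a rotation matrix with atan2, and why the conversion gets unstable near the singular configuration:

```python
# Minimal sketch: extracting Euler angles from a 3x3 rotation matrix with atan2.
# Assumes the common ZYX (yaw-pitch-roll) convention; Dragonfly's own convention
# may differ, so this only illustrates where the instability comes from.
import numpy as np

def euler_zyx_to_matrix(yaw, pitch, roll):
    """Build R = Rz(yaw) @ Ry(pitch) @ Rx(roll), angles in degrees."""
    y, p, r = np.radians([yaw, pitch, roll])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    return Rz @ Ry @ Rx

def matrix_to_euler_zyx(R):
    """Return (yaw, pitch, roll) in degrees. Near the singular configuration the
    atan2 denominators approach zero, which is the divergence mentioned above."""
    pitch = -np.arcsin(np.clip(R[2, 0], -1.0, 1.0))
    yaw = np.arctan2(R[1, 0], R[0, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return np.degrees([yaw, pitch, roll])

R = euler_zyx_to_matrix(45, 10, 5)
print(matrix_to_euler_zyx(R))  # ~[45, 10, 5]
```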

Are there other floor plan formats that can be used that don’t depend on building schematics?

Yes, you can use, for example, the floor plans provided by Micello. If you are planning to integrate the maps built with Micello inside your final application, then you have to:

  1. create a floor plan with Micello.
  2. ask your Micello account manager to “enable the PNG Image Files and GeoJSON Files for your Micello account”. This is absolutely needed in order to import the Micello maps into the Accuware dashboard. Otherwise an error will be returned during the import process! The status of the Micello products active for your account can be checked from this page.
  3. import into the Accuware dashboard the floor plan image built with Micello and available in your Micello account, following the steps described in this support page.

How fast can a device that is making use of the Dragonfly engine go?

We have run several tests and we have been able to get excellent results with devices moving at up to 10 km/h.

Integration

How can I convert coordinates (latitude and longitude) into pixels (X,Y) of a floor plan image?

Please look at this article.
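As a rough illustration of the idea (not necessarily the exact method from the linked article), the sketch below converts WGS84 coordinates into pixel coordinates by linear interpolation, assuming a north-aligned floor plan image whose top-left and bottom-right corner coordinates are known; all the numbers are hypothetical:

```python
# Minimal sketch: WGS84 -> pixel conversion for a north-aligned floor plan image,
# given the (hypothetical) lat/lon of its top-left and bottom-right corners.
def latlon_to_pixel(lat, lon, top_left, bottom_right, width_px, height_px):
    """Linear interpolation between the known corner coordinates."""
    lat_tl, lon_tl = top_left
    lat_br, lon_br = bottom_right
    x = (lon - lon_tl) / (lon_br - lon_tl) * width_px
    y = (lat_tl - lat) / (lat_tl - lat_br) * height_px  # image Y grows downwards
    return x, y

# Hypothetical 2000 x 1000 px floor plan covering a small venue.
x, y = latlon_to_pixel(45.46421, 9.18980,
                       top_left=(45.46450, 9.18950),
                       bottom_right=(45.46400, 9.19050),
                       width_px=2000, height_px=1000)
print(round(x), round(y))  # pixel position on the floor plan image
```

If the floor plan image is rotated with respect to north, a full affine transformation computed from at least three reference points is needed instead of this simple interpolation.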

Do you provide any ROS integration for the Dragonfly engine?

Yes, we can develop a ROS integration for the Dragonfly engine upon request. Contact us for more information at this link.

How does the Dragonfly engine fit in a typical UAV navigation architecture?

Depending on the computing unit available on-board, we recommend either:

  • local video processing (better latency).
  • or remote processing if a low-latency and low-loss network is available to transmit the video from the UAV to the remote server.

Is it possible to install Dragonfly in a Docker container?

Yes. You can find the instructions inside this page.

Is it possible to install Dragonfly on a Windows machine?

Yes, but with some limitations. You can find the instructions inside this page.

Positioning process (mapping and localization)

How big can the mapped area be?

We have customers who have mapped up to 40,000 sqm. Despite this, we recommend limiting the area covered by a single map to 15,000 sqm. In any case, the real constraints are the RAM and CPU available on the computing unit running the Dragonfly Java App (we are working on reducing these constraints). To provide some numbers, the mapping at slow speed of a warehouse of 15,000 sqm (a rough scaling sketch follows the list):

  • generates 160K map-points inside a map file that will have a final size of about 600 MB.
  • takes about 2 hours to be completed.
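If it helps with planning, the figures above can be extrapolated roughly (a linear back-of-the-envelope estimate, not an official sizing rule):

```python
# Rough linear extrapolation from the figures quoted above
# (15,000 sqm -> ~160K map points, ~600 MB, ~2 hours at slow speed).
# Real numbers depend heavily on the venue, so treat this as an estimate only.
REF_SQM, REF_POINTS, REF_MB, REF_HOURS = 15_000, 160_000, 600, 2.0

def estimate_mapping(area_sqm):
    scale = area_sqm / REF_SQM
    return {
        "map_points": int(REF_POINTS * scale),
        "map_file_mb": round(REF_MB * scale),
        "mapping_hours": round(REF_HOURS * scale, 1),
    }

print(estimate_mapping(5_000))   # a smaller venue
print(estimate_mapping(40_000))  # close to the largest area mapped by customers
```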

Why is the location LOST during the navigation session?

The “Lost” status can happen for different reasons:

  1. The environment is “too plain” and it is impossible to detect enough features (reference points). Think about an area with many white walls all equal to each other.
  2. The monocular camera performs a pure YAW rotation (like a drone rotating on itself) or a pure PITCH rotation (described in one of the previous FAQs). This is a mathematical limit and it can be overcome by:
    • using a stereo camera.
    • or, with a monocular camera, by doing rotations in conjunction with translations (like a turning car).
  3. The field of view is limited (like on smartphones/tablets using the Dragonfly Demo App for iOS and Android). Unfortunately smartphones have a limited field of view, and this limits the ability to map an environment fluently. This is why we suggest using a wide-angle camera in production with the Dragonfly Java App (with a FOV of 160-170° on monocular cameras and a FOV of up to 120° on stereo cameras).

When the location gets lost, you should go back to a previously known location to recover it.

How should I properly perform the mapping of a big environment?

What we would suggest doing is to:

  1. Make sure you close at least one loop surrounding the considered perimeter.
  2. Then, map the internal area while regularly coming back to known places to close additional loops.

If this is done, the positioning is going to be accurate and the drift will be corrected by the loop-closing done automatically by the Dragonfly engine on a regular basis.

Here is an example of the calibration and mapping process:

How should I properly perform the mapping of an area made of physically separated sub-areas?

If you don’t need to map the area in between the sub-areas, then our recommendation is to create multiple maps, one for each sub-area, and to automatically load the map corresponding to the current sub-area using this Dragonfly API call. You can use, for example, the GPS info to get the macro location needed to load the correct map, as sketched below. This approach has the advantage of dealing with multiple small maps instead of one big map, which is better for performance.
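Here is a minimal sketch of this idea; the map names, coordinates and API endpoint below are hypothetical placeholders (check the Dragonfly API documentation for the actual call that loads a map):

```python
# Minimal sketch (names and endpoint are hypothetical, not the actual Dragonfly API):
# pick the sub-area map whose reference point is closest to the current GPS fix,
# then ask the engine to load it.
import math
import requests

MAPS = {  # hypothetical map IDs with the approximate lat/lon of each sub-area
    "warehouse_north.map": (45.4650, 9.1900),
    "warehouse_south.map": (45.4610, 9.1915),
}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def load_map_for_position(lat, lon):
    map_id = min(MAPS, key=lambda m: haversine_m(lat, lon, *MAPS[m]))
    # Hypothetical local endpoint standing in for the real Dragonfly API call
    # that loads a map; replace it with the documented URL.
    requests.post("http://localhost:8080/api/v1/loadmap", json={"map": map_id})
    return map_id

print(load_map_for_position(45.4648, 9.1902))  # -> "warehouse_north.map"
```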

Can the Dragonfly engine detect the altitude of the camera from the ground?

Absolutely! If the visual markers are well placed and the calibration of the visual markers (or virtual markers) is done properly, the altitude will be accurately provided, so you can know if the device is for example 15 cm from the ground. The closer you are to the object, the better you know the relative camera distance to this object.

How should I perform the mapping of an aisle in which there will be a drone flying at different altitudes?

If you know in advance the trajectory the drone is supposed to take during its regular usage, then you should simply perform the Positioning process by following this exact trajectory, with the drone flying slowly and looking exactly in the direction(s) it is going to look in during its regular usage. So, for example, if you know in advance that the drone will fly at 2 different altitudes (e.g. 2 meters and 5 meters), you will have to perform the positioning process twice:

  • one with the drone flying along the trajectory at 2 meters.
  • one with the drone flying along the trajectory at 5 meters.

How far away can objects be detected and become part of the map?

There is no strict distance limit as long as the objects can be seen in the image. However, the further away the objects are, the less accurate the triangulation is (because the pixels move less from one frame to the next). We would say, safely, that for objects further than 30 meters the mapping could be an issue, but honestly it is pretty rare that, in indoor cases, there is not a single object (and thus feature) visible at less than 30 meters.
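As a back-of-the-envelope illustration (a generic triangulation error model, not Dragonfly internals), depth uncertainty grows roughly with the square of the distance, which is why far objects produce less accurate map points; all parameter values below are hypothetical:

```python
# Generic triangulation error model (not Dragonfly internals):
# sigma_Z ~= Z^2 * sigma_disparity / (focal_length_px * baseline_m)
focal_px = 700.0      # hypothetical focal length in pixels
baseline_m = 0.12     # hypothetical stereo baseline or translation between frames
sigma_disp_px = 0.5   # hypothetical feature-matching error in pixels

for z in (2, 5, 10, 30):
    sigma_z = z**2 * sigma_disp_px / (focal_px * baseline_m)
    print(f"object at {z:>2} m -> depth uncertainty ~ {sigma_z*100:.1f} cm")
```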

Is there a proper time to switch from Positioning (mapping and navigation) to Navigation only?

Normally, in a small venue there is no need to enable the Navigation mode. But if you’d like to do so, it is good to switch when you have the perception that Dragonfly has already mapped the whole venue and that you are able to navigate from nearly any position without getting lost.

Accuracy

Can the Dragonfly engine provide an average radius of accuracy better than ±10 cm?

The accuracy of a computer vision system depends not only on the system itself, but also on the surroundings of the camera. With a proper camera calibration and accurate visual (or virtual) markers in the venue, the accuracy is about 10 centimeters in a standard environment (objects at about 10 cm from the camera). To achieve better accuracy, the system would have to run at a higher resolution, but the additional processing power required would be so huge that, at present, we are not willing to consider this option.

What is the accuracy provided by the Dragonfly engine in an un-mapped area during the Positioning process?

In an un-mapped area (while the Dragonfly Web UI shows NAVIGATION) there is a drift which will accumulate over time. It is difficult to provide an accurate estimate of the accuracy in this situation because it really depends on the venue features, on the motion of the camera and on the quality of the camera calibration. We can say that the drift is high enough in monocular mode that we do NOT recommend relying on the location provided by the Dragonfly engine in an un-mapped area after a minute of navigation. More info can be found inside this page.

Why is there an angle between the absolute horizontal plane of the real-world and the horizontal plane computed by Dragonfly?

Without world references, if your camera is not held perfectly horizontal during the MAP INIT stage, there will be an angle between the absolute horizontal plane of the real world and the computed horizontal plane of your device shown inside the plot, because the Dragonfly engine has no way to know exactly where the real-world horizon is. So the Dragonfly engine assumes that the floor is on the same horizontal axis as the horizontal axis of the camera during the MAP INIT phase.
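If you do measure the tilt of the camera at MAP INIT time (e.g. with a level or an external IMU), a generic post-processing correction like the sketch below can rotate the reported metric coordinates back into a truly horizontal frame; this is not a built-in Dragonfly feature, and the axis convention is an assumption:

```python
# Minimal sketch (generic post-processing, not a Dragonfly feature): compensate a
# known MAP INIT tilt by rotating the reported metric coordinates back to level.
import numpy as np

def level_coordinates(points_xyz, tilt_deg_about_x):
    """Rotate Nx3 metric coordinates by -tilt about the (assumed) X axis."""
    t = np.radians(-tilt_deg_about_x)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(t), -np.sin(t)],
                   [0, np.sin(t),  np.cos(t)]])
    return points_xyz @ Rx.T

pts = np.array([[2.0, 0.0, 0.0], [0.0, 3.0, 0.0]])   # positions reported by the engine
print(level_coordinates(pts, tilt_deg_about_x=5.0))  # positions in the leveled frame
```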

Why is there a drift between the real location and the one estimated by the Dragonfly engine?

The drift you are encountering could be due to various factors:

  • A bad camera calibration.
  • A challenging environment where the scale of the map is hard to keep consistent (e.g. a building with a lot of white walls). This is described in one of the other FAQs on this page.
  • A long monocular navigation path along which drift has accumulated. The drift can be corrected by performing a loop closure, which we strongly recommend in monocular mode. So basically, you should navigate inside the building, close a couple of loops, and save the map. This map will then be used as a basis for navigating your device and other devices.

More info can be found inside this page.

How robust is Dragonfly to changes in the environment previously mapped?

The Dragonfly engine is capable of improving the accuracy of the locations computed when used continuously in the same environment. This happens as long as the features of the environment in front of the camera do not change by more than 30%. If what is presented in front of the camera changes by more than 30% from what has been seen previously, there can be 2 situations:

  1. if the camera reaches a previously known place, which has now changed by more than 30%, coming from another place which was properly identified – in this case there won’t be problems. The map will be properly updated.
  2. if the camera suddenly sees this previously known place, which has changed by more than 30%, without having a previous history to recover its path (how it got to this place) – in this case, the Dragonfly engine won’t be able to recover its position until the camera sees a place that it can clearly identify.

How do the lighting conditions affect the accuracy of the Dragonfly engine?

The lighting conditions affect the system performance. If the shapes are clearly visible to the camera, and if the contrast is good, the Dragonfly algorithm can work properly. If there is a strong backlight making the rest of the scene look dark, then the position won’t be available. The algorithm is particularly sensitive to backlights.

What are the known environmental conditions where the localization algorithm’s performance is challenged?

Un-textured environments (uni-color walls), environments with backlights, and environments where the texture is mostly the same wherever you are (subway tunnels for instance).

How does the Dragonfly algorithm behave when used inside a corridor or aisle?

The fact that the area is narrow will make the system pretty accurate. We would say that it is possible to reach an average radius of accuracy of ~10 cm.

Does the Dragonfly engine provide a score of the reliability or quality of the locations estimated?

We do not provide such a “score” yet, but this is indeed something we should consider doing.

Is there a drift (over time) of the locations estimated if the camera is fixed and looking at the same position?

You can expect a noisy position (about 5 to 10 cm, depending on where you look) but there won’t be a drift! The average position is perfectly stable in this situation.

What happens in the eventuality of a complete camera occlusion?

No position will be provided until the camera re-identifies a known place. Usually, for a 1-second occlusion, the system will be able to recover immediately afterwards.

How would the system differentiate between two aisles with no inventory in them?

If there is absolutely no difference between the two aisles, then the system will indeed have trouble re-localizing itself. It basically has the same limitations as a human being.

How accurate is the algorithm’s localization on the Z axis?

The Z axis has the same accuracy as the other axes: usually about 10 cm.

Drones

What are all the options to send the video stream coming from a drone to the Dragonfly Java App?

It depends on the specific model of drone:

  • DJI drones – These are the drones we recommend because their video stream can be easily accessed by Dragonfly with almost no latency, using a special Android library developed by us for this purpose: the Accuware DJI streamer library. You can find the full updated list of DJI drones compatible with the Accuware DJI streamer library at this link.
    • Note: the Accuware DJI streamer library is a separate product from Dragonfly, and requires a separate license. It also requires a bit of Android and DJI SDK knowledge (which is needed in any case to control a DJI drone).
  • Parrot drones – At present we have no experience with Parrot drones. It seems that it is possible to get the RTSP stream directly from the Parrot drone or through the SkyController3. More info inside this page. Anyway, we have never tested it, so we can’t guarantee anything about the latency of the stream (which is one of the most important things when dealing with fast-moving flying objects).
  • Any drone – An option is to equip a drone with a Raspberry Pi Zero W, which is cheap, super light (nearly no extra payload for the drone), and can be equipped with a small CSI camera. At this link you can find the instructions for this setup. A quick way to check that such a stream is arriving is sketched below.
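For reference, here is a minimal sketch (the URL is hypothetical, and this is not how the Accuware DJI streamer library works) showing how to open such a video stream with OpenCV and verify that frames are arriving before pointing Dragonfly at it:

```python
# Minimal sketch: read an RTSP video stream with OpenCV to verify that frames
# arrive. The URL is hypothetical (e.g. a Raspberry Pi or drone stream).
import cv2

STREAM_URL = "rtsp://192.168.1.10:8554/live"  # hypothetical stream address

cap = cv2.VideoCapture(STREAM_URL)
if not cap.isOpened():
    raise RuntimeError("Could not open the video stream")

ok, frame = cap.read()
if ok:
    h, w = frame.shape[:2]
    print(f"Receiving {w}x{h} frames")
cap.release()
```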