The Dragonfly Preview App helps you assess the extent to which Accuware Dragonfly Technology meets your needs for very accurate positioning in locations where GPS is unavailable, unreliable or insufficient.

WARNING: this is a demo App, so before proceeding please be aware that the FOV (field of view) of standard smartphone cameras is typically narrow, which means that the Dragonfly engine cannot detect a large number of features in the scene. This can cause the Dragonfly engine to get lost if the device moves too fast or if the device makes pure YAW rotations.


The following are the prerequisites that MUST be met by the iOS device to run the Dragonfly Preview App:

  • at least iOS 10.0
  • valid Internet connection (Wi-Fi or cellular data), used to upload the video stream in real time to the Accuware server. If you are using a Wi-Fi connection, these requirements MUST be met:
    • accessible domain: – this is the address to which the Accuware Dragonfly Preview App sends the video stream.
    • ports (outbound): 443/tcp (HTTPS).
    • minimum bandwidth of 0.5 Mbps. It is preferable to be connected to a 5GHz Wi-Fi network or to use a good LTE connection when available. 2.4GHz Wi-Fi is known for providing a variable connection speed due to network saturation, which can lead to frame drops and thus a degraded positioning experience.
  • 80 MB of storage space on your device to install the Dragonfly Preview App.
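The outbound-port requirement above can be checked from any machine on the same Wi-Fi network. The following is a minimal sketch, not part of the App: it simply opens a TCP connection to port 443, and `example.com` is a placeholder for the actual Accuware media-server domain (not shown here), which must be reachable from your network.

```python
import socket
import time

def check_https_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Try to open an outbound TCP connection, as the App does for HTTPS."""
    try:
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=timeout):
            print(f"Connected to {host}:{port} in "
                  f"{(time.monotonic() - start) * 1000:.0f} ms")
            return True
    except OSError as err:
        print(f"Cannot reach {host}:{port}: {err}")
        return False

# "example.com" is a placeholder: substitute the Accuware media-server
# domain that must be whitelisted on your network.
check_https_reachable("example.com")
```

If the connection fails, check the firewall rules of your Wi-Fi network before troubleshooting the App itself.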

The Dragonfly Preview App

The Dragonfly Preview App can be downloaded directly from your iOS device by clicking this link, or by searching for Accuware Dragonfly Demo Preview on the App Store.

First start of the App

On first start, the App will ask for your permission to use the camera. You must allow camera access, otherwise the App won’t work.

CAMERA PRIVACY: we DO NOT store the video streams sent to the Dragonfly server. The video streams are simply processed in real-time in order to compute the location of the camera.

Main screen


The Dragonfly Preview App consists of a main entry screen in landscape mode (there is no portrait mode), which shows three buttons, from left to right:

  • Help button
  • Start button with the Dragonfly (to start calibration or positioning operation)
  • Settings button



The Settings screen lets you configure the App. The demo app supports two mutually exclusive modes of operation, which are configured here:

  • Camera Calibration
  • Positioning

To get the maximum positioning accuracy, it is strongly recommended to perform a Camera Calibration before testing the Positioning, because each camera has its own optical distortion parameters that must be determined.

NOTE: if you don’t perform the camera calibration OR if your camera calibration parameters are NOT better than the default camera calibration parameters you will be reminded to perform a camera calibration whenever you enter the Positioning mode.

Calibration of the camera

The camera calibration is a one-time process. To perform the calibration you need a calibration pattern that you can download here:

  • Open it on the screen of your PC or print it out on an A4 or Letter paper and place it on a plain table/desk.
  • Measure the length in millimeters (millimeters only: NO inches, NO centimeters!) of the HDIST line visible inside the calibration pattern. The value should be greater than 100 mm.
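Since the App expects HDIST strictly in millimeters, a quick conversion helps if your ruler is marked in centimeters or inches. A minimal sketch (the function name and the sample reading are illustrative):

```python
def to_millimeters(value: float, unit: str) -> float:
    """Convert a length to millimeters; HDIST must be entered in mm."""
    factors = {"mm": 1.0, "cm": 10.0, "in": 25.4}
    if unit not in factors:
        raise ValueError(f"Unsupported unit: {unit!r}")
    return value * factors[unit]

# A ruler reading of 14.8 cm corresponds to 148 mm,
# which satisfies the "greater than 100 mm" requirement.
hdist_mm = to_millimeters(14.8, "cm")
print(hdist_mm)  # → 148.0
```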

  • Configure your distance in the field “HDIST” available inside the Settings of the App.
  • Change the “Operational Mode” to Camera Calibration.
  • Exit from the Settings.
  • Start the Calibration by touching the pulsating Start button with the Dragonfly.
  • The App will immediately establish a connection to the media server. Please make sure that you get a connection that shows between 20 and 30 FPS. If this is not achieved, try connecting again. Lower frame rates are not necessarily unusable, but they make calibration more difficult.


  • The server presents you with a pulsating colored pattern overlay, surrounded by a black frame with a positioning indicator in the top-right corner.
  • If you hold the camera of your device in front of the calibration pattern you should see the black dots being connected by colored lines.
  • Your challenge is now to match the pattern displayed inside your device with the calibration pattern laid on your table/desk. Whenever you come close to the expected pose, the green gauge at the bottom becomes smaller, until it disappears entirely. Then a snapshot is taken automatically. If audio is enabled, you will hear a typical camera shutter sound and a toast informs you of the total number of snapshots taken.


  • The server will present at least 21 poses, which all need to be matched.
  • Once you have matched them all, the server automatically starts the computation of the calibration parameters of your camera and you will be notified by sound and toast once this process is finished. The calibration file of your camera is then stored.
  • You will be asked whether you want to automatically enter the Positioning Mode (described in the next paragraph).

Mirroring the image: if it helps you to coordinate your movements, you can display the image mirrored by a long touch on the display. This is especially helpful if you are using the front camera.

Usage of the FRONT camera: by default the App uses the REAR camera. This makes sense, since you might want to get positioning results while you are walking and looking “through the device”. You can override the default selection by enabling “Use front camera” in the Settings (which is OFF by default). But then you will probably only be able to orient yourself using the ceiling (which is anyway a good way to navigate in populated locations with SLAM technology). The App stores the calibration results separately for the front and rear cameras.

Test the positioning

The Positioning Mode consists of two sub-processes, Mapping and Localisation (sometimes also called Navigation), i.e. exactly what the term SLAM means: simultaneous localisation and mapping:

  • With mapping, a mathematical model of the spatial conditions detected by the camera is first created in real-time and continuously adapted.
  • The parallel location/navigation process uses the mapped results and transforms the position of the viewer (your camera) into un-dimensioned raw XYZ coordinates.

IMPORTANT NOTE: when using the Dragonfly Java App, the WGS-84 coordinates (latitude, longitude and height in meters) can be displayed after geo-referencing the 3D model thanks to the usage of 3 or more visual (or virtual) markers.
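Geo-referencing of this kind boils down to estimating a similarity transform (scale, rotation, translation) from at least three marker correspondences. The sketch below is not the Dragonfly implementation; it only illustrates the underlying math, using the standard Umeyama least-squares method, a hypothetical `fit_similarity` helper, and invented marker coordinates:

```python
import numpy as np

def fit_similarity(raw_pts, geo_pts):
    """Least-squares similarity transform (Umeyama's method):
    geo ≈ s * R @ raw + t, estimated from >= 3 marker pairs."""
    raw = np.asarray(raw_pts, dtype=float)
    geo = np.asarray(geo_pts, dtype=float)
    n = len(raw)
    mu_raw, mu_geo = raw.mean(axis=0), geo.mean(axis=0)
    X, Y = raw - mu_raw, geo - mu_geo          # centered point clouds
    cov = Y.T @ X / n                          # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0                         # guard against reflections
    R = U @ D @ Vt                             # rotation
    s = (S * np.diag(D)).sum() / ((X ** 2).sum() / n)   # scale
    t = mu_geo - s * R @ mu_raw                # translation
    return s, R, t

# Hypothetical data: raw SLAM coordinates of four markers and their
# surveyed positions in a local metric frame (invented for illustration).
raw = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
geo = [[10, 5, 0], [10, 7, 0], [8, 5, 0], [10, 5, 2]]
s, R, t = fit_similarity(raw, geo)
print(round(s, 3))  # → 2.0 (the recovered metric scale)
```

With the transform in hand, any raw XYZ position can be mapped into the surveyed frame (and from there to WGS-84 if the markers were surveyed in geographic coordinates).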

  • The Positioning Mode starts with the help of the calibration parameters, which need to be available.
  • At start, the system state is MAP INIT. A slow translation motion (to the left or to the right) of the camera is required in order to initialize the system (please avoid pure rotation motions!). We recommend performing the initialization in a scene containing as many small objects as possible.
  • Once done, the system state switches to NAVIGATION. At this point blue marks attached to recognized features are shown inside the camera preview (if enabled in the Settings with the option Feature display).
  • At the same time, a plot is displayed and the real-time location of your camera is shown as a drone in the center of the plot!


  • Use the usual pinch zoom gestures to zoom and rotate the plot if needed or to change the perspective on the scene.
  • The drone may already have a RED thread behind it. This RED thread is your track. While you are roaming, your position is tracked and the path is drawn in RED. Once again, please note that the displayed positions have no real spatial reference in the sense of our usual coordinate systems. However, after some back and forth you should already be able to draw a clear and repeatable red track.
  • The flying drone is also reflecting the attitude of your device (pitch, roll and yaw).
  • The color of the drone’s light changes depending on the status of the positioning mode:
    • GREEN – the Dragonfly engine can compute your location.
    • BLUE – the Dragonfly engine can NOT compute your location. In this situation, your device will also express a corresponding audible disappointment (“LOST”) and, at the same time, all previously displayed blue feature marks will disappear. You can only correct this situation by slowly moving back to the position where you were last in the NAVIGATION state. Sometimes only a map reset helps to get back on track.

The display of your position on the plot is done with the help of WebGL. Not every device offers the corresponding support; in case of a problem you will be informed. The small blue window in the lower left corner shows you how powerful your device is with respect to WebGL. A display rate of more than 30 fps is perfectly sufficient for a smooth position display.

In the lower part of the screen you can find several buttons, from left to right:

  • X – to exit from the Positioning mode.
  • Back – to reset the map or to clear the path.
  • Pause – to pause the Navigation mode.
  • ►- to start the Navigation mode.


PLEASE READ: The following list of FAQs are related specifically to the Dragonfly Preview App for iOS. If you are looking for the FAQs related to the Dragonfly engine please read this page.

Why is there a big variation in the scale of my trajectory between two different Positioning sessions?

As this demo uses neither geo-referencing (floorplans, visual markers) nor a dual-camera system, it is mathematically impossible to estimate the actual scale of your venue. For this reason, between runs you may encounter big variations in the scale of your trajectory, and sometimes feel that the amplitude of your path is too big or too small. We cannot address this issue, as it is a purely mathematical limitation. Usually, however, we observe that if you initialize the map by looking at objects far away, the amplitude of the trajectory will be small, and if you initialize the map by looking at close objects, the amplitude will be large.
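If you nevertheless need metric results from the demo, you can rescale a recorded trajectory yourself once a single real-world distance is known. The sketch below is hypothetical and not part of the App; `rescale_trajectory` and all values are invented for illustration:

```python
def rescale_trajectory(points, raw_dist, true_dist_m):
    """Rescale an un-dimensioned SLAM trajectory to meters, given one
    segment whose real length (true_dist_m) was measured on site."""
    if raw_dist <= 0:
        raise ValueError("raw_dist must be positive")
    k = true_dist_m / raw_dist          # the unknown monocular scale factor
    return [[k * c for c in p] for p in points]

# Hypothetical run: two track points that are 4.0 m apart in reality
# came out 0.8 raw units apart, so every coordinate is scaled by 5.
track = [[0.0, 0.0, 0.0], [0.8, 0.0, 0.0], [0.8, 0.6, 0.0]]
metric = rescale_trajectory(track, raw_dist=0.8, true_dist_m=4.0)
print(metric[1])  # → [4.0, 0.0, 0.0]
```

Note that the scale factor is constant only within one session; a new session (or a map reset) yields a new, unrelated scale.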

The App switches to LOST right after the beginning of the Positioning session and it is impossible to recover.

On rare occasions, the algorithm starts to generate an erroneous model when you start navigating. If so, you will notice that the system switches to the LOST status right after map initialization, and it is usually extremely hard to recover a position when this happens. The workaround is to start a new map by using the Back button and resetting the map. Note that clearing the path won’t have any effect, as it does not delete the erroneous model but just clears the display.