An Introduction to visual SLAM, Simultaneous Localisation and Mapping

Accuware Dragonfly, a visual SLAM technology based on computer vision, provides accurate location data to robots, drones, machines and vehicles. But what is SLAM, and how does it work?

This article gives a brief introduction to what SLAM is, how it works, what it is (and is not) for, and why it is important for the new industrial revolution.

The importance of Simultaneous Localization and Mapping (SLAM) is constantly increasing, not only among the computer vision community, but across multiple industries. It is receiving specific interest from augmented and virtual reality industries, and from the robotics and automation sector.

SLAM is in fact now able to address localization problems that many industries have faced over the years.

There is, however, a large variety of SLAM systems available, from both academia and industry. In this context, and to clear up the typical confusion around this technology, it is worth exploring what SLAM means and how it works.

What is SLAM?

‘SLAM’ does not refer to a particular algorithm or specific piece of software: it refers to the problem of simultaneously localising a device (knowing its position and orientation) with respect to its surroundings while creating a map of that environment.

SLAM can be done in a number of different ways: it is not strictly a computer vision topic, and it can also work with other technologies, such as laser scanners and LiDAR. However, in this article we will focus on visual SLAM, the most innovative approach. In fact, at Accuware, we have decided to focus on the development of Dragonfly, our visual SLAM system, to offer a valid alternative to SLAM technologies that rely on specific hardware, such as LiDAR, and to create a brand new positioning algorithm.

Computing both the position of the device and the map, through the on-board camera, when neither is known, is what distinguishes the SLAM problem from other technologies.

For example, 3D mapping/reconstruction with a fixed camera rig is not SLAM, because while the map is being recovered, the positions of the cameras are already known and fixed. SLAM, instead, recovers both the device’s pose and the map structure, initially knowing neither.

It is important to note that this is one of the key features of SLAM: computing the pose while creating the map in real time is what makes SLAM different from other systems. This also means that processing typically happens “on the fly”, so that the camera’s location is continuously known and updated.

Dragonfly, however, can also post-process existing videos: this is extremely useful for improving accuracy inside challenging environments, and for running preliminary tests to estimate the final accuracy of the system without being on site.

A Brief History of SLAM

The first research on SLAM began in the robotics community: 1986 papers by Smith and Cheeseman are usually cited as the first technical documents on SLAM, originally applied to wheeled robots on flat ground. The first SLAM systems combined different sensor readings (laser scanner or sonar, for example) with data from the control input (steering angle) and mechanical measurements (such as wheel rotation counts).

In recent years, visual sensors have become a crucial aspect of SLAM research: improvements in computer vision techniques and the high computational power of modern processors are opening a new era for SLAM.

Many studies on visual SLAM focused on the use of stereo cameras, or cameras in combination with other sensors (“sensor fusion”).

At Accuware we have decided instead to explore pure computer vision SLAM, using only monocular cameras, without external sensors. While stereo cameras can be used with Accuware Dragonfly, they are not required.

Our goal has been to make SLAM a widely useful technology that does not require additional hardware or sensors. We wanted to deliver a precise location system based on visual information derived from an existing on-board camera, removing the need for sensor fusion and for other hardware to be mounted on board robots and drones.

How SLAM Works

Dragonfly analyzes the video stream coming from the device’s camera: it keeps track of a set of points (“features”) across multiple camera frames and uses them to triangulate the 3D location of the device and build a virtual map of the environment. At the same time, Dragonfly can use the estimated point locations to calculate the camera’s pose.
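To make the triangulation step concrete, here is a minimal sketch of classic two-view linear triangulation (purely illustrative, not Dragonfly’s actual implementation): given a feature’s normalized image coordinates in two frames with known camera projection matrices, the 3D point is recovered as the null vector of a small linear system.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature seen in two frames.

    P1, P2 are 3x4 camera projection matrices; x1, x2 are the feature's
    normalized image coordinates in each frame. Returns the 3D point.
    """
    # Each observation contributes two linear constraints on the 3D point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The point is the null vector of A, found via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize

# Two identity-intrinsics cameras: one at the origin, one shifted 1 unit on x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])               # a 3D point in front of both
x1 = X_true[:2] / X_true[2]                      # its projection in frame 1
x2 = (X_true - [1.0, 0.0, 0.0])[:2] / X_true[2]  # and in frame 2

print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # → True
```

With many such points tracked across many frames, the same geometry simultaneously constrains the camera poses, which is the core of visual SLAM.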

Using a single monocular camera and carefully merging the features detected over multiple frames, Dragonfly can compute the pose of the device (6-DOF) and map the structure of the surrounding environment with an accuracy of up to 5 cm.

Dragonfly also improves the map quality over time to increase accuracy, and leverages loop closure: an automated procedure that reduces the gradual accumulation of errors. The current location computed by Dragonfly can be associated with a previously known location inside the map (visual or virtual markers), optimizing the map structure and reducing the accumulated error.
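The effect of a loop closure can be illustrated with a toy example (illustrative only, not Dragonfly’s optimizer): when the device recognizes a previously visited place, the accumulated drift becomes measurable and can be distributed back along the trajectory.

```python
import numpy as np

# A robot drives a closed loop; odometry drift makes the estimated end point
# miss the known start. A loop-closure constraint (end == start) lets us
# spread the accumulated error linearly back along the trajectory.
est = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.2], [1.0, 1.2], [0.4, 0.8]])
start = est[0]

drift = est[-1] - start                 # error revealed by the loop closure
n = len(est) - 1
weights = np.arange(len(est)) / n       # later poses absorb more correction
corrected = est - weights[:, None] * drift

print(corrected[-1])                    # → [0. 0.]  (loop closed)
```

Real systems solve this as a pose-graph optimization over many constraints, but the intuition is the same: a recognized place anchors the trajectory and pulls the accumulated error out of it.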

The map is then used to perform relocalisation: if the device experiences poor tracking, which can lead to the system getting “lost”, Dragonfly can recognize a previously detected feature and use it as a marker to compute its relative location inside the map.

Relocalisation is also useful for starting the positioning of the device from any place inside an existing map: the starting point is automatically recognized by Dragonfly by analyzing the surrounding features.

SLAM in Real Applications

Now that we know how SLAM works, how can this system be useful in real life? How can Dragonfly be applied to actual projects?

Visual SLAM is nowadays needed in many different applications.

Dragonfly is used to remotely track the location of moving vehicles, such as forklifts, inside large environments. Dragonfly’s ability to dynamically update the map is extremely important in such venues, which are subject to constant change. Some of our customers leverage Dragonfly to monitor the usage of machines and ground robots (think of industrial cleaning machines, lawn mowers…), and others have been installing Dragonfly on board flying drones to improve operations inside GPS-denied environments, such as inventory management.

We work with customers that have been developing self-driving robots and vehicles, and who are exploring autonomous navigation for drones as well.

In the era of automation, with the roll out of unmanned vehicles and with the beginning of commercial drones’ applications, Dragonfly is becoming an essential technology to provide positioning where GPS is not an option and where centimeter accuracy is necessary.



Postcards from Ecsite 2018

Ecsite, the international non-profit whose name is an acronym for European Collaborative for Science, Industry and Technology Exhibitions, has been active since its founding in the early 1990s. Bringing together a European network of science centers and museums, its vision is to foster creativity and critical thinking in European society, encouraging citizens to engage with science. Ecsite’s history provides a fascinating glimpse into the evolution of an institution whose mission is to inspire organizations that engage people with science, not just in Europe but worldwide.

Our partners at Wezit attended Ecsite 2018, which was held at the Natural History Museum of Geneva, Switzerland, June 7 through 9.  Wezit’s Ségolène Valençot shared with us some vignettes of various gatherings she attended.  These are her postcards from this event.

The setting

We are still in #Ecsite cloud mode! This year’s conference took place in the beautiful city of Geneva, during the first days of June. The weather was perfect, and the air was filled with the energy and great ambiance of 1,182 professionals from 58 countries, all gathered for three days to reinvent how we communicate, teach, learn, and think! After all, no wonder this year’s theme was Creative Collisions!

Great audience asking key questions

The art of recontextualizing collections

Wezit attended the Recontextualizing Collections panel, where the speakers presented real-world examples of how museums and their collections can increase contact with their audiences.

First, we listened to Maria João Fonseca, Interim Executive Coordinator of the Natural History and Science Museum of the University of Porto, who shared how the museum made engaging its visitors a priority: creating stories within stories, and museums within museums, using the documentation available in the collections. For example, displays interpreting the dreamy Portuguese poets of the 1930s are used to showcase the whale skeleton in the Museum of Porto’s gallery.

The idea of museums within museums took particular shape in a scenography incorporating Cabinets of Curiosities dating from the 19th and early 20th centuries. Touching visitors’ emotions, stirring their curiosity, and attracting them with a mix of stories, history and literature: the idea was to give life to objects and ultimately use them as a learning tool!

Lastly, Fonseca explained how important it is to evaluate how visitors experience and understand these museography proposals and scenographies.

Starting from more or less the same “museum within museums” principle, Beat Hächler, Director of the Swiss Alpine Museum in Bern, Switzerland, gave us his insights on “museum intimacy”. He talked about dropping the permanent-exhibition principle in favor of mini exhibiting units, such as pop-up schemes or event venues. Moreover, he emphasized the need to create unique spaces that help visitors connect with the collections, with museum objects surprising them with things they never knew about the collection, as if the exhibit and the visitor were in a conversation, sharing their stories.

Swiss Alpine Museum, Bern

Bringing visitors closer to the museum, and making it more accessible to them by revealing the “behind the scenes” aspects of the collection and its motivations, is also a priority for Grégoire Mayer, co-director of the Musée d’Ethnographie Neuchâtel.

And if you enjoyed Shawn Levy and Chris Columbus’ film “Night at the Museum”… then you will agree with Hervé Groscaree, from the Natural History Museum of Geneva: invite visitors to sleep over and take part in overnight museum activities, move with them into nature, and offer authentic experiences connected with the Museum’s collections!

Museum d'Histoire Naturelle

Balancing Renovation

Even though a home or office renovation does not compare in size and magnitude to a museum’s, if you have ever been through one while still living in your place, you will relate to this session and be open to learning from what the speakers and participants had to share!

Every once in a while, every museum and science center, no matter its size, will eventually go through a renovation: not only construction-wise, but also in updating its presentation techniques.

Some institutions opt to keep their doors open while going through these improvements; others temporarily close; still others make key pieces of their collection available to visitors, as in the case of the Musée Lorrain. Wezit completed tactile digital tables showcasing key pieces of its collection during the museum’s temporary closure, and these remain available online for visitors to admire until the museum reopens its doors.

As part of a constant search to understand the challenges museums and institutions go through, we attended the Balancing Construction Works and Visitor Satisfaction session at the #Ecsite2018 conference. In this cooperative gathering, speakers shared their experiences, strategies and results, adaptable to smaller as well as significantly bigger organizations.

Speakers from centers located in Amsterdam, Belgium and Germany presented a to-do list of general tips, with ideas ranging from financial planning to visitor communication and content development.

The real-world cases presented included the following museums, whose representatives took part in this session:

The Jaermuseet in Norway: it has mastered gathering feedback from its visitors with customizable surveys and open-ended questionnaires.

The Jaermuseet obtained an average response rate of 80% among families and teachers. The results showed a satisfactory rating, which motivated the staff and uncovered ways to improve. Moreover, surveys on open and recreational spaces echoed the museum’s presentation and how its visitors perceive it. Overall, the Jaermuseet treats its visitors as the museum’s best ambassadors!

Balancing renovation

Another interesting real-world example discussed was the Eureka Science Center, which has been conducting visitor analysis since the 1980s, giving it a good knowledge of what it offers in its cultural and scientific presentation.

Visitors identify the Center as a place for family activities. The Center also tracks its visitors’ behavior from the moment they buy their ticket online through the duration of the tour, and has embraced the rise of its website and social media as a medium for learning what motivates and interests its audience.

Maarten Okkersen talked to us about the Internet component in the museum sphere and the power of blogging moms who follow the museum’s latest happenings, and how they can pair well with these institutions, which leads to the importance of knowing the basics of SEO and of interpreting Google Analytics.

Furthermore, this can attract both visitors and non-visitors, so that the museum is perceived not only as a cultural place but also as a place of fun and entertainment.

Finally, the Copernicus Center in Warsaw shared its views on balancing data and intuition. Now you should be en route on your institution’s customer-satisfaction journey!

About Wezit and Accuware

At Accuware, we love Wezit.  Their innovation and creativity shine through in every project they touch. Since our first collaboration, supporting indoor navigation on their Ma Visite app with Accuware location technology at the Nantes Museum of Art, there has been great synergy between us.

Having Wezit share what they learn at industry-specific gatherings like Ecsite often helps us visualize potential new applications of our technologies and implement solutions that address real challenges.

Wezit’s people are true innovators in many creative ways, always focused on educational institutions. At Accuware, the location technology providers, we look forward to collaborating for many years to come.

Wezit-Mazedia


Oxford Street project: big data and virtual reality in urban design

As we know, urban planning concerns itself with the design and regulation of the uses of space in the urban environment, and on the location of human activities within it.  Regarding existing urban settings, a key concern is understanding how the space is currently used, and how it would be affected by change, both from redesign, or simply from increased use over time.

This story, shared by Senior Transport Consultant Francesco Angelelli, highlights how Atkins, a leading international consulting company, used an innovative approach leveraging the latest technologies to study human behavior and use patterns at an iconic place: Oxford Street in the City of Westminster, one of the most famous destinations in London.

Oxford Circus

The Urban Setting and its Challenges

Originally part of the Via Trinobantina, a Roman road between Essex and Hampshire, Oxford Street has existed for over 2,000 years.  It became Oxford Street in the 18th century, when it began to change from residential to commercial.  Today, it is Europe’s busiest shopping street, with half a million daily visitors.

Because of its popularity among tourists and shoppers, the street’s capacity is under increasing pressure. Businesses compete to secure a high-profile presence in the area.  As a result, there are growing concerns about traffic pollution, general accessibility and pedestrian safety.  In addition, the opening of the Elizabeth Line, the railway route built by Crossrail, under construction since 2009, due to open in December 2018, is expected to increase footfall and exacerbate existing problems.  All of this has prompted a much needed discussion about the need to reshape the area’s public use.

The Crown Estate appointed a multi-disciplinary team to develop proposals for Oxford Circus and its surrounding streets. Atkins’ pedestrian modeling team was involved in assessing the proposals to accommodate the increased demand, improving accessibility while providing visitors with a world-class experience.

The Solution

Key project challenges were the size of the area to be assessed and the large number of pedestrians involved. Developing a model capable of simulating existing conditions in such an environment demanded innovative data collection methods supported by traditional survey data. After evaluating several survey technologies, the Atkins team opted for a WiFi survey: monitoring the movement of active WiFi-enabled devices throughout the study area. A WiFi survey provides relatively clean samples with high sample rates. In addition, it enables relatively straightforward long-term comparison between different monitored areas, which would be difficult, if not impossible, using traditional sampling methods.

Following their standard approach, the Atkins team developed a base-year model that simulates existing conditions to a reasonable level of validation. The methodology proved successful in calibrating the base model (R-squared above 0.96 and GEH < 5 for most measured flows). They used this as a basis for developing future-year scenarios to assess design alternatives. The 2018 demand level was also increased to develop 2019 models.

Oxford Street in VR
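The GEH statistic cited above is a standard goodness-of-fit measure in traffic and pedestrian modeling, comparing a modeled flow against an observed count. It can be computed as follows (a minimal sketch; the flow values are hypothetical):

```python
import math

def geh(model, count):
    """GEH statistic comparing a modeled flow with an observed count
    (both in vehicles or pedestrians per hour). GEH < 5 is the usual
    acceptance threshold for a calibrated flow."""
    return math.sqrt(2 * (model - count) ** 2 / (model + count))

# A modeled flow of 1,050 ped/h against an observed count of 1,000 ped/h:
print(round(geh(1050, 1000), 2))  # → 1.56
```

Unlike a plain percentage difference, GEH scales with the magnitude of the flow, which is why it works across both lightly and heavily used links.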

All models were rendered in a 3D Virtual Reality environment, so the team could literally experience walking through the base model. This proved to be an incredibly useful aid in refining and visually validating modeling assumptions, thus solving a common problem in the pedestrian modeling sector, as standard software only allows modelers to visualize simulated crowds from a distance. The 3D Virtual Reality environment also enabled exploring spatial improvements with the design team in a seamless and highly effective manner.

The setup

The Atkins team deployed 21 WiFi devices (“nodes”) on building cornices. The nodes were attached to the buildings’ floodlight mains and used 3G/4G connections to stream data in real time. Behind the scenes, Accuware’s WiFi Location Monitor system determined the approximate location of all WiFi devices by triangulating their positions based on their signal strength.

Nodes. Installations. Tracing dots.

Weekday profiles support peak-period assumptions

Data collected through WiFi Location Monitor proved essential for monitoring conditions on the ground.

Volume of daily movement

One day, August 11th at 9 AM, the number of detected devices peaked and then suddenly dropped. As this trend was very unusual for that time of day, the team worried that the system was experiencing technical problems. But just then they learned that Oxford Circus station had been evacuated because of a train fire. With the station temporarily out of service, the overall population in the area was lower than usual. This incident revealed that the captured data could also be very useful for planning emergency operations.

O/D matrices for weekdays and weekend

As the datasets grew week by week, the team began analyzing the data and obtaining useful insights both for modeling and for stakeholder information. For example, the combination of Origin-Destination (O/D) matrices and daily profiles for weekdays and weekends enabled them to classify areas as commuter-driven, leisure-driven, or neutral, where “neutral” is the configuration that maximizes public-space usage throughout the week, an important indicator of the quality of public spaces.

Origin-Destination (O/D) matrices
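To illustrate what building an O/D matrix from passive detections can look like (a toy sketch, not Atkins’ actual pipeline; the zones and device IDs are hypothetical), each device’s first and last observed zone can serve as its origin and destination:

```python
import numpy as np

# Toy sketch: build an O/D matrix from device sightings, taking each
# device's first and last observed zone as its origin and destination.
zones = ["Oxford Circus", "Bond St", "Marble Arch"]
sightings = [            # (device_id, zone_index), in time order
    ("aa:01", 0), ("aa:01", 1),
    ("bb:02", 2), ("bb:02", 1),
    ("cc:03", 0), ("cc:03", 1),
]

first, last = {}, {}
for dev, z in sightings:
    first.setdefault(dev, z)   # first sighting = origin
    last[dev] = z              # latest sighting = destination

od = np.zeros((len(zones), len(zones)), dtype=int)
for dev in first:
    od[first[dev], last[dev]] += 1

print(od)
# → [[0 2 0]
#    [0 0 0]
#    [0 1 0]]
```

Aggregating such matrices by time of day and day of week is what makes it possible to distinguish commuter-driven, leisure-driven, and neutral areas.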

Pedestrian speeds on Regent St and Argyll St

The analysis covered sample speeds on Argyll Street, which is a pedestrian lane, and Regent Street, which is not. As the speed profiles are similar, it is clear that the data was not influenced by vehicular traffic. However, for some devices the detected distance differs from the actual one due to the low reporting frequency, and the team expected the resulting average speed to be distorted as a result. To eliminate this effect, they discarded the higher (and implausible) values. With the tails excluded, the profiles show a normal distribution with an average speed of 1.2 meters/sec.

Pedestrian speeds
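The tail-trimming step can be sketched as follows (the sample speeds and the 3 m/s cutoff are made up for illustration; the actual study used its own thresholds):

```python
import numpy as np

# Toy sketch: compute an average walking speed after discarding
# implausibly high values (here, anything above 3 m/s).
rng = np.random.default_rng(0)
speeds = np.concatenate([
    rng.normal(1.2, 0.2, 1000),   # plausible pedestrian speeds (m/s)
    rng.uniform(5, 20, 30),       # spurious high-speed artifacts
])

plausible = speeds[speeds <= 3.0]     # drop the implausible tail
print(round(plausible.mean(), 1))     # → 1.2
```

Trimming only the implausible tail, rather than a fixed percentile from both ends, preserves the genuine shape of the walking-speed distribution.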

Rendering pedestrian simulation with Virtual Reality

The simulation also includes vehicular movements obtained from dynamic traffic modeling of the area.

VR Oxford Street visualization

Behind the scenes

To implement their solution, the Atkins team deployed Accuware’s WiFi Location Monitor, a system designed to passively detect and locate active WiFi devices, such as cell phones and tablets.

Using WiFi Location Monitor requires deploying WiFi “nodes”, which are WiFi routers set to listening mode. Nodes detect the presence and signal strength of nearby active WiFi devices, uploading their data to a cloud-based server. The server estimates the location of each WiFi device from the signals collected by multiple nodes. Each device is identified by its WiFi MAC address. Note that no personally identifiable information is ever collected; all collected data is truly anonymous. The location of each device is estimated within a 3-meter radius of its actual position, which is ideal for urban use surveys.
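The location estimation step can be sketched as follows (a simplified illustration, not Accuware’s actual algorithm; the path-loss parameters and node positions are assumptions): signal strength is converted to an approximate distance with a log-distance path-loss model, and the device position is then solved by least squares from several node-to-device distances.

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-40.0, n=2.5):
    """Convert a received signal strength (dBm) to an approximate distance
    (m) using the log-distance path-loss model. tx_power (RSSI at 1 m)
    and the path-loss exponent n are assumed values."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(nodes, dists):
    """Least-squares position estimate from >= 3 node positions and
    estimated distances (linearized by subtracting the first equation)."""
    A = 2 * (nodes[1:] - nodes[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(nodes[1:] ** 2, axis=1) - np.sum(nodes[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three listening nodes at known positions and a device at (3, 4):
nodes = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
device = np.array([3.0, 4.0])
dists = np.linalg.norm(nodes - device, axis=1)   # ideal, noise-free ranges

print(trilaterate(nodes, dists))  # → approximately [3. 4.]
```

In practice RSSI-derived distances are noisy, so more than three nodes and filtering over time are what bring the estimate down to the few-meter accuracy described above.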

For more information about the Oxford project, contact Francesco Angelelli at Atkins. Please note that Mr. Angelelli will be speaking about this project at Modelling World 2018.


Deep Dive into Data Visualization

In an article from last May, The Economist argued that data is “the world’s most valuable resource.” We could not agree more. But gathering data is not enough: finding appropriate ways to visualise and communicate it is just as crucial.

Our partners at Wezit recently attended the Museum Computer Network Conference in Pittsburgh…

MCN 2017 in Pittsburgh

… and garnered a few observations from the “Beyond the Graphing Calculator” talk.   Here is what Wezit’s Alexia Casanova shared with us.

We all take information in differently

In her “#musedata is cool” presentation, Angie Judge from DEXIBIT reminded us all that we all like to take in information in different ways. Some people respond better to raw numbers, others prefer visual representations, creative propositions, stories or text. It is essential to adapt to whoever you are trying to communicate this information to. Good data visualization should help your team make informed strategic decisions. When paired with geolocation technology, it gives you a better understanding of how visitors are progressing through space and how you can improve the visiting experience based on their habits.

Data visualization is not dumbing down information

Elizabeth Bollwerk is an archaeological analyst at the Digital Archaeological Archive of Comparative Slavery (DAACS), an initiative of the Thomas Jefferson Foundation that shares all of the data about its archaeological collections on its website.

DAACS logo

In her presentation, she stressed the need to “[portray] the data meaning accurately and ethically.” She explained that presenting data in a creative way should not compromise the accuracy of the information being shared. Good data visualization should help you understand information quickly while also allowing you to make informed strategic decisions.

The future of data visualization is bright

Jeff Steward from the Harvard Art Museums presented a series of extremely creative data visualisations, including a representation of the whole museum’s collection in the form of a solar system.

Harvard Art Museums

“Can data be immersive?” he asked. “What if I could walk through data with virtual reality?” This would certainly be a wonderful way to weave data into a tangible and playful experience.

Jeff Steward's presentation

Jeff Steward: Applying astronomical metaphors to render the collection as a series of solar systems.

Angie Judge described what the future of data visualization looks like for DEXIBIT: machine learning models predicting what tomorrow’s data will be and friendly chat bots answering decision-makers’ questions based on the data collected. Exciting, isn’t it?

About Wezit

Our partner, Wezit, is a software development company that integrates Accuware’s technology to develop complete solutions, such as tour apps for cultural institutions. Our latest collaboration is a geolocated mobile app for the newly reopened Musée d’Arts de Nantes.

Wezit logo


Practical SLAM for IoT is here

This post is about our latest product: a computer vision-based indoor positioning system (IPS) capable of delivering a mobile device’s location with high accuracy, with no need for infrastructure and minimal site preparation.

VPS heatmap

We asked our CEO to walk us through a demonstration of the new product.  He happily agreed.  This is what happened.

Dragonfly’s Capabilities

Accuware Dragonfly (formerly Visual Positioning System) is a SLAM-based system. In brief: Dragonfly uses a camera to perform Simultaneous Localization And Mapping (SLAM), a technique described as “…the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent’s location within it”. The agent in question is an autonomous mobile device, most likely a robot that moves through physical space carrying a camera hooked up to a computer.

A minimal site setup is required: at least 3 markers must be placed throughout the site. They encode specific location coordinates and are used by Dragonfly to initialize its position. Markers are easily generated on a regular printer.

Dragonfly Marker

Dragonfly was designed to be thrifty in its use of available resources. As expected, the device’s camera is the main drain on battery power, while the software, available as an SDK, has a computational footprint optimized to run efficiently on a regular smartphone, and is available on other platforms as well.

Dragonfly uses the device’s camera to analyze its surroundings, map the environment around it, and determine its location in reference to its physical context. This data can be shared with an external system (ex. a server) to track the device’s location over time.
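As an illustration of how such location data might be shared with an external system (the payload fields and function below are hypothetical, not Dragonfly’s actual API), a pose update can be serialized as JSON before being uploaded:

```python
import json
import time

# Toy sketch of sharing a device's estimated pose with an external server.
# The field names and units are assumptions for illustration only.
def make_location_update(device_id, x, y, z, yaw):
    return json.dumps({
        "device": device_id,
        "position": {"x": x, "y": y, "z": z},  # meters, map frame
        "yaw": yaw,                            # heading in degrees
        "timestamp": int(time.time()),
    })

payload = make_location_update("robot-01", 12.4, 3.1, 0.0, 90.0)
print(json.loads(payload)["position"]["x"])  # → 12.4
```

A real integration would POST such payloads to a tracking endpoint over WiFi or a cellular link, as described for the demo setup below.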

Putting the demo together

Our first challenge was finding a suitable autonomous mobile robot, fitted with a computer and camera, to run the demo.

VPS robot setup

After some deliberation, we realized that our faithful office iRobot is an autonomous mobile platform, while a regular smartphone could provide the requisite computational power plus video camera. A smartphone car bracket mounted on top of the iRobot completed the setup for an Android Nexus 5X.

VPS installing the smartphone

We installed the test mobile app on the Nexus 5X. It integrates with the Dragonfly 2.0 SDK and uses available WiFi and/or 3G or LTE to upload data to our cloud-based server, whose dashboard displays the phone’s real-time location on the floor plan we uploaded.

The advantage of this setup is that it can be easily replicated by anyone interested in testing this product to learn how it behaves, with the convenience of using everyday items. Note also that household robots like the iRobot are designed to discover and avoid obstacles in the physical environment where they move. This matches the practical use cases we envision.

VPS demo robot

Here is the demonstration

The video gives the highlights of this experience. We mount the smartphone on the robot and start it. As the robot moves about, Dragonfly maps the environment while reporting the sequence of locations reached. The server displays these on the dashboard as a moving dot.

Note the spaghetti diagram depicting the robot’s entire trajectory:

VPS spaghetti diagram

And here is the heatmap, which highlights the physical areas covered by the robot. Both diagrams were built using location data collected during the test.

Dragonfly Heatmap

Applications Envisioned

What applications do we envision for this product?  Here’s a short list:

  • Guiding pick-and-place autonomous mobile robots in warehouses
  • Mobile industrial robots in “Smart Manufacturing” environments
  • Roaming security robots patrolling buildings, parking lots and garages

In other words, physical location, as provided by Dragonfly 2.0 can be a key ingredient for IoT applications in industrial settings.  It is the “location sensor” in environments variously referred to as: “Industry 4.0”, “Industrial Internet”, “Connected Enterprise”, “Smart Manufacturing”, “Smart Factory”, “Manufacturing 4.0”, “Internet of Everything” or “IoT for Manufacturing”.

Rounding it up

Accuware is a technology-driven company.  It is in our DNA.  We value technology with practical uses in mind.  We develop products that leverage different technologies, all focused on providing physical location in the real world.

Dragonfly is today’s accomplishment. The R&D effort took many months, and the product launch follows the model of previous releases: give interested parties a chance to evaluate the new product in their environment. Our Tech Support group will assist committed users to ensure their success, our Partners network will come up to speed on the new release, and case studies will follow.

We are thrilled by the possibilities of this product.

Do you want to try this system in your environment?

VPS CEO introduction

Incidentally, the heatmap helped us visualize the iRobot’s coverage, so we redirected it until the office had been thoroughly swept clean and looked pristine.  We have the data to prove it.   Now, that’s quality control. 
