When Touch Analytics Meets Anonymous Video Analytics
IntuiLab (the world leader in touch-first experience creation without coding) and Quividi (the world leader in automated audience measurement for digital signage) have recently announced the integration of their products.
The IntuiFace platform enables businesses to create engaging, highly interactive experiences without coding that can be deployed within minutes on any touch-enabled display. These touch-first experiences can incorporate a wide variety of media formats; an open connection to cloud-based data and APIs (including the Internet of Things); a broad range of expressive capability; a powerful trigger/action (aka "if this, then that") mechanism; and all of the core capabilities necessary for successful signage deployments, including analytics. For more information, see www.intuilab.com.
Quividi provides a real-time description of the people appearing in the field of view of a camera placed on top of a screen. Information such as each viewer's position and distance to the screen, whether they are currently looking, and their gender and age bracket is provided 10 times per second for every person whose face is detected. Note that people are not recognized, only detected and qualified – this is why the technology at stake here is called Anonymous Video Analytics. Simultaneously, a summary of each viewing session is uploaded to the cloud every 30 minutes for statistical analysis.
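To make the shape of that real-time feed concrete, here is a minimal sketch of what one ~10 Hz tick of per-face data might look like. All field names (`person_id`, `distance_m`, `is_looking`, etc.) are illustrative assumptions, not the actual Quividi API.

```python
# Hypothetical sketch of a Quividi-style per-face sample; field names are
# assumptions for illustration, not the real product's data model.
from dataclasses import dataclass

@dataclass
class FaceSample:
    person_id: int      # stable per detected face during a session; not an identity
    x: float            # horizontal position relative to the screen, in meters
    distance_m: float   # distance to the screen, in meters
    is_looking: bool    # whether the gaze is currently directed at the screen
    gender: str         # "male" / "female"
    age_bracket: str    # e.g. "child", "young_adult", "adult", "senior"

def viewers_looking(samples):
    """Return only the samples whose face is currently looking at the screen."""
    return [s for s in samples if s.is_looking]

# One tick of the ~10 Hz feed might carry several simultaneous faces:
tick = [
    FaceSample(1, -0.4, 1.2, True, "female", "adult"),
    FaceSample(2, 0.8, 2.5, False, "male", "child"),
]
```

A trigger/action rule would typically poll such a tick and react to the subset returned by `viewers_looking`.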
The integration of these two solutions enables the creation of smart, adaptive, data-rich digital signs. Quividi's video analytics produces a host of real-time presence and demographic information, all of which can be used to dynamically adapt the signage thanks to IntuiFace's trigger/action mechanism. And by correlating video analytics with the usage statistics generated by visitors' on-screen activity, retailers gain actionable, data-based insight into the effectiveness of their content.
This combination opens new avenues for interactive apps and more engaging experiences for users. Let's review some of the scenarios now made possible, following the three stages of a user experience with an interactive application.
Using video analytics before someone starts touching the screen
- Pre-select a catalog based on the demographic profile
When a person approaches the screen, you can use their gender and age to filter a catalog and start presenting a selection tailored for them. Personalized experiences have been proven to significantly increase audience engagement.
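Such pre-selection boils down to a lookup keyed on the detected profile. The sketch below assumes a hypothetical catalog tagged with target audiences; nothing here is an IntuiFace API, just the shape of the rule.

```python
# Minimal sketch of demographic pre-filtering. The catalog, its tags and the
# (gender, age_bracket) vocabulary are assumptions for illustration.
CATALOG = [
    {"item": "gaming laptop", "audiences": {("male", "young_adult")}},
    {"item": "tablet",        "audiences": {("female", "adult"), ("male", "adult")}},
    {"item": "kids tablet",   "audiences": {("female", "child"), ("male", "child")}},
]

def preselect(gender, age_bracket):
    """Return the catalog items tagged for the detected viewer profile."""
    return [e["item"] for e in CATALOG if (gender, age_bracket) in e["audiences"]]
```

In a trigger/action setup, the detection event would supply `gender` and `age_bracket`, and the filtered list would seed the opening screen.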
- Shy users? Get them to approach!
If you detect that a person has been looking at the screen without moving much for, say, 10 seconds, you might want to display a reinforcement message to get them to come up and touch the screen.
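Expressed as a rule, this is a simple threshold on dwell time. The function below is a sketch under assumed inputs (a look-start timestamp and a touch flag); the real trigger would be wired up in IntuiFace's trigger/action mechanism rather than written by hand.

```python
# Sketch of a "shy user" trigger: fire a call-to-action once a viewer has been
# looking for a threshold duration without touching. Inputs are hypothetical.
LOOK_THRESHOLD_S = 10.0

def should_invite(look_start_ts, now_ts, has_touched):
    """True when the viewer has looked for >= 10 s and has not yet touched."""
    return (not has_touched) and (now_ts - look_start_ts) >= LOOK_THRESHOLD_S
```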
- Change the tone when a kid is around
If a child is amongst the viewers at any moment, you might want to change the wording, or even the graphics, to appeal to that young audience. If you also detect an adult female and/or male nearby, you might assume it's a family.
- Trigger content when someone comes from a certain position
If a person is coming from the right, you might infer that they have already been in a certain part of the venue, so you may want to display a different message than if they had come from the left.
Using video analytics during an interactive session
- Improve the photobooth experience
When taking pictures of a face to be enhanced with add-on attributes (hat, moustache…) or pasted into postcards, you generally need the user to stand in a specific position, materialized by a face silhouette on the screen or stickers on the ground. With the Quividi solution running on a second camera, you get the coordinates (X, Y and Z) of every single face, so you can start your animation at any moment without this cumbersome positioning pre-stage.
- Get the users to do something special
To immerse an audience in your app, or just for fun, you may want to entice viewers to do something special: move their head a certain way, step back, bring 3 people side by side…
Hint: Quividi will soon support emotions (e.g. detecting a smile or a frown) – so there will be even more behaviors to follow!
- Reward those who look long
Each person's presence and attention time is provided. Knowing that someone has been around (and interacting) for more than 30 seconds could, for instance, be used to reward that person with some information or a coupon.
Using video analytics after a session
- Calculate the engagement funnel
With both the face statistics and the interaction statistics available through the IntuiFace Analytics platform, you'll have a rich data set for counting conversions and analyzing the real impact of your application. You will know:
– the number of passers-by (= total universe)
– the number of people who glanced at the screen (= impressions)
– the number of those who looked for at least X seconds (= signs of interest)
– the number of those who touched the screen (= engagements)
– the number of those who completed the navigation up to a certain point (= completions)
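Once those five counts are in hand, the funnel itself is just a chain of stage-to-stage conversion rates. The sketch below assumes the counts have already been pulled from the two analytics sources; the stage names and numbers are illustrative.

```python
# Sketch of an engagement funnel from aggregated counts. In practice the counts
# would come from Quividi (audience) and IntuiFace (touch) analytics; the
# figures below are made up for illustration.
def funnel_rates(counts):
    """counts: ordered list of (stage_name, count).
    Returns each stage's conversion rate relative to the previous stage."""
    rates = []
    for (_, prev), (name, n) in zip(counts, counts[1:]):
        rates.append((name, n / prev if prev else 0.0))
    return rates

stages = [
    ("passers_by", 1000),   # total universe
    ("impressions", 400),   # glanced at the screen
    ("interest", 150),      # looked at least X seconds
    ("engagements", 60),    # touched the screen
    ("completions", 20),    # navigated up to a target point
]
```

For example, `funnel_rates(stages)` would report a 40% glance rate and a 33% completion rate among those who touched.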
- Calculate the total engagement time, broken down by stage
Having the presence time from the first glance lets you put durations on the pre-touch stage (i.e. how much time it takes before someone touches the screen) and on each stage thereafter (how much time on the opening screen, on the second one, etc.).
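Splitting the total time this way amounts to differencing a list of stage timestamps, starting from the first glance. The event format below is an assumption for illustration; neither product is documented here as emitting exactly these events.

```python
# Sketch: break presence time into a pre-touch stage plus per-screen stages,
# given hypothetical (timestamp, stage_name) events ending with a sentinel.
def stage_durations(first_glance_ts, events):
    """events: ordered (timestamp, stage_name) pairs; the last event marks the
    end of the session. Returns (stage_name, seconds) tuples, pre-touch first."""
    durations = []
    prev_ts, prev_name = first_glance_ts, "pre_touch"
    for ts, name in events:
        durations.append((prev_name, ts - prev_ts))
        prev_ts, prev_name = ts, name
    return durations

# First glance at t=0 s, first touch (opening screen) at t=8 s, etc.
events = [(8.0, "opening_screen"), (20.0, "second_screen"), (35.0, "session_end")]
```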
- Analyze the dominant profile for typical navigations or spaces within the app
If you wonder which demographic profile most often visits a certain page, or spends the most time on it, you can first identify the moments when that space was on screen, then look up the profile of the person closest to the screen at each of those moments, and finally build a pie chart of the dominant demographics.
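The last step of that analysis is a simple frequency count over joined records. The sketch assumes the time-based correlation of Quividi sessions with IntuiFace page views has already produced (page, profile) pairs; that joined format is hypothetical.

```python
# Sketch: find the most common demographic profile seen on a given page,
# from hypothetical pre-joined (page_name, (gender, age_bracket)) records.
from collections import Counter

def dominant_profile(page_views, page):
    """Return the most frequent viewer profile for the requested page,
    or None if the page was never observed."""
    counts = Counter(profile for name, profile in page_views if name == page)
    return counts.most_common(1)[0][0] if counts else None

views = [
    ("promo", ("male", "young_adult")),
    ("promo", ("male", "young_adult")),
    ("promo", ("female", "adult")),
    ("home",  ("female", "adult")),
]
```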
- Analyze the preferred spaces by demographics
Conversely, if you want to study the path through your application, you might look for all viewing sessions longer than 10 seconds by young adult males and see which touch activities these people performed, then do the same for adult females, etc.
Can you think of other applications? We’d love to hear from you!