
Google announces Style Match discoverability feature for Google Lens

Google Lens Style Match

Google has announced Style Match, a feature that lets users point their smartphone’s camera at an outfit and receive suggestions for similar items to buy online.

The feature is part of Google Lens, the company’s camera-powered search engine, which was unveiled last year and is now fully integrated into the native camera on its smartphones.

The news, announced at the Google I/O 2018 conference in California on May 8, means any user can discover a specific item or receive suggestions for similar styles, not only in fashion but also in other categories including accessories and furniture.

Such functionality could give the tech giant the lead in facilitating discovery through mixed realities, particularly because being embedded in the phone’s native camera means the user doesn’t have to learn a new behavior or download a dedicated app they will eventually ditch. So far, brands such as eBay and ASOS have tinkered with image recognition within their own apps, but triggering the search from a smartphone’s main image-capturing tool is far more likely to drive mass adoption.

Other new Lens features for Android phones include smart text selection, which lets users ‘copy and paste’ text from the real world, such as recipes and articles, directly onto their smartphone screens. Google Lens also allows users to highlight text to bring up relevant information – for example, selecting the name of a dish on a restaurant menu to see an image of that food.

To enable this, Google is leveraging its expertise in search to recognize terms and the context in which certain words appear.

Meanwhile, in the updated Google Maps, users can trigger augmented reality to navigate via Street View – holding the phone up in front of them shows their exact position on the map, while giant arrows point to where they should walk next.

Beyond augmented reality and image recognition, the company also announced developments to Google Assistant that mean users can have more natural voice interactions. During the keynote, Google used singer John Legend’s voice to demonstrate.

This includes “Continued Conversations”, where the Assistant remembers the user’s most recent questions, as well as the ability to ask more than one question within a single voice request. The future Assistant will also be able to call places on the user’s behalf, which proves particularly handy when booking appointments by phone with businesses that don’t have an online booking system.


Your future in-store loyalty program will be fed by facial recognition

Lolli & Pops is using facial recognition

Imagine this: You walk into your favorite store and the sales associate welcomes you by name. She or he lets you go about your business but, on demand, shares which of the latest products you would most likely be interested in.

Such recommendations, powered by artificial intelligence, are a very familiar experience online these days, but they’re also increasingly making their way into the brick-and-mortar retail world.

A multitude of technologies lies at the heart of achieving this, but at its core it’s a connection between CRM and machine learning, with a layer of identification placed on top to deliver results for the specific customer in question.

Your mobile device usually plays a key role in making the identification part possible, but facial recognition is another way to do it.

Lolli & Pops, a candy store based in the US with roughly 50 doors, is one such retailer experimenting with this. A proof of concept called Mobica, which is powered by Intel, was on show at NRF’s Big Show in New York this week. Using computer vision, it’s a facial recognition loyalty scheme designed to drive VIP consumer engagement.

The opt-in experience (shoppers literally have to enrol their face to take part) means anyone entering the store is recognized in real time by an app the sales associates use on their tablets. From there, they can see the individual’s taste profile, know, for instance, whether they’re allergic to peanuts, and personally recommend relevant products to them via AI-enhanced analytics.
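
To make that flow concrete, here is a minimal sketch of how an opt-in, face-matching loyalty lookup could work, assuming a face-embedding model has already turned camera frames and enrolled shoppers’ photos into vectors: the incoming frame is matched against enrolled customers and, above a similarity threshold, the corresponding CRM profile is pulled up for the associate’s tablet. The function names, threshold and profile fields are illustrative assumptions, not the actual Mobica/Intel implementation.

```python
# Illustrative sketch only – not the Mobica/Intel implementation.
# Assumes a face-embedding model has already converted camera frames and
# enrolled shoppers' photos into fixed-length vectors.
from dataclasses import dataclass, field
from typing import Optional

import numpy as np


@dataclass
class Profile:
    name: str
    allergies: list = field(default_factory=list)            # e.g. ["peanuts"]
    favourite_categories: list = field(default_factory=list)  # e.g. ["gummies"]


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify_shopper(frame_embedding: np.ndarray,
                     enrolled: dict,        # customer_id -> enrolled face embedding
                     crm: dict,             # customer_id -> Profile
                     threshold: float = 0.8) -> Optional[Profile]:
    """Return the CRM profile of the best-matching opted-in shopper, if any."""
    best_id, best_score = None, threshold
    for customer_id, embedding in enrolled.items():
        score = cosine(frame_embedding, embedding)
        if score > best_score:
            best_id, best_score = customer_id, score
    return crm.get(best_id) if best_id is not None else None
```

A profile returned this way is what the associate’s app could then feed into a recommendation model to suggest products for that specific customer.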

“It’s designed for their loyalty shopper, so about wanting to make them feel really special,” Stacey Shulman, Intel’s chief innovation officer for its Retail Solutions Division, told me. “Privacy isn’t an issue because they have such a strong relationship with their customers and are trusted by them already. It all starts with service and a connection to the customer.”

You can easily imagine the same VIP concept being applied at the likes of Sephora for beauty, or even at an apparel retailer.

Other facial recognition technology on show at NRF enabled special, personalized deals to surface on screens in real time, demonstrated a restaurant where customers can pay by face, and touted broader data collection opportunities around demographics and store-traffic patterns.

It was the customer service piece that felt particularly pertinent, however. As Shulman explained: “Technology today needs to not be at the forefront. It needs to be the helper at the back. When done right, it enables people to get back to the customer and back to what’s important. That’s what we see here; it’s not about the facial recognition or the AI, it’s about the experience the customer then has. The differentiator between a brick and mortar store and Amazon today is customer service. We can’t compete on price and selection anymore, so we have to go back to service. If we don’t we will have a problem.”

The Lolli & Pops facial recognition initiative will roll out to stores in the coming weeks, according to Shulman.

 


ASOS launches visual search tool to aid inspiration and discovery for shoppers

ASOS visual search

My filter for successful visual search is simple – can you take a photo of someone else’s shoes or jacket when on a busy train and find a direct replica online? Can technology negate the awkwardness of actually speaking to someone during your commute to find out where his or her “must-have” item is from?

Fashion stalker claims aside, the short answer is still no. In the majority of cases, the tech is not yet good enough to pull apart a busy image and identify exactly what that piece is.

It is, however, getting better at finding similar items. Thanks to artificial intelligence, it can identify shape, colour, print and more, serving up relevant options and at least starting to inspire discovery.
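
As a rough illustration of how that “similar, not identical” matching is typically done, the sketch below treats visual search as a nearest-neighbour lookup over image embeddings: catalogue photos and the query photo are converted into vectors by an image model (assumed to exist here), and the closest catalogue items are returned. This is a generic sketch, not ASOS’s undisclosed production system.

```python
# Illustrative sketch of embedding-based "find similar items" search.
# Assumes an image model has already converted catalogue photos and the
# shopper's query photo into fixed-length embedding vectors.
import numpy as np


def build_index(catalogue_embeddings: np.ndarray) -> np.ndarray:
    """L2-normalise catalogue embeddings so a dot product equals cosine similarity."""
    norms = np.linalg.norm(catalogue_embeddings, axis=1, keepdims=True)
    return catalogue_embeddings / np.clip(norms, 1e-9, None)


def most_similar(query_embedding: np.ndarray,
                 index: np.ndarray,
                 top_k: int = 10) -> list:
    """Return indices of the top_k catalogue items most visually similar to the query."""
    query = query_embedding / max(float(np.linalg.norm(query_embedding)), 1e-9)
    scores = index @ query
    return list(np.argsort(-scores)[:top_k])
```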

That’s the key theme behind e-commerce site ASOS’s launch of visual search on its native app today.

This is a fashion website with a huge 85,000 products on it, and 5,000 new ones added every week. One of the many challenges in the online retail space is balancing that newness with the overwhelming nature of such volume, particularly for users increasingly browsing on mobile. It’s for that same reason we’ve also seen Pinterest and eBay recently playing in this computer vision space. It’s about that keyword: “discovery”.

This rollout from ASOS, then, aims to enable shoppers to capture fleeting moments – whether that’s someone they pass on the street, a look a friend is wearing or even a screengrab from Instagram or elsewhere – and use them to search through the site’s product lines for similar suggestions.

“The depth of our offering is absolutely one of our strengths. However that range can be challenging to present to customers, especially on a mobile phone,” Richard Jones, head of product and UX at ASOS, explains to me. “If you know what you want, you can quite simply get to what you’re looking for. But what we’re trying to find is more of that discovery use case – if you’re not quite sure what you want, or you’ve seen something that’s inspired you, visual search is designed to kickstart that discovery… It’s about getting as close as possible to giving you something that is visually similar.”

The tool is shown as a camera icon in the search bar of the ASOS app. Tapping on it then invites customers to either take a picture or upload one from their library to have it find similar products.

Jones knows the tech isn’t yet perfect; if anything, the examples out in the market to date have been a “bit clunky”. But with machine learning and big data, it’s only going to improve, he suggests.

ASOS’s own version, the tech for which is powered by an external third party the company has opted not to disclose, is built on this notion. “The more [this tech] gets used, the better it gets trained, the data results get better… the smarter it becomes,” he explains.

That also reflects the way the ASOS team are operating – pushing the launch out to market (in the UK only at first) in order to test and iterate accordingly. It’s about getting it out there and learning how it’s best used before rolling it out to other geographies.

In its press release, ASOS refers to this as the “build-measure-learn” approach to innovation, a methodology drawn from the Lean Startup movement.

This announcement also supports wider planned technology investment by the company. It currently has a tech team of 900 employees and is planning to hire a further 200 over the next year, for instance. It says it’s focusing on its AI-powered recommendation engine, which uses big data and a smart algorithm to learn customers’ preferences over time, as well as on improving site infrastructure to drive agility and speed up innovation for customers.

Zooming in on the mobile experience is particularly key. Today 80% of UK traffic for ASOS and nearly 70% of orders come from a mobile device, with people spending 80 minutes per month, on average, in the ASOS app.

With such mobile-native customers, Jones says it’s now about how to use the underlying technology in these devices – the high processing power, the ultra-high-definition cameras, the depth-perception imagery and more.

“We’re thinking about how do we use these devices in a way that is natural and contextual to how our 20-something customers live their lives. They go everywhere with [their smartphones] – so how can we make sure we give them an experience they are expecting?” he comments.

Further motivation lies in the fact that using the camera as a means to search is set to become far more commonplace in September, when Apple launches iOS 11 with its ARKit development platform. That essentially means all manner of augmented reality uses will be possible directly through the iPhone’s camera; visual search included. Net-a-Porter is another e-commerce player that has referenced using it.

“What we want to do is be able to meet that customer expectation and demand,” Jones adds. The visual search tool will live within the app for now, with the intention of making that broader experience an increasingly personalised one for each shopper down the road.

ASOS’s visual search launches on iOS in the UK today with pending rollout for Android and then international scale thereafter.

This post first appeared on Forbes