One of the things our clients have been asking for is automation tools that help make onsite search better. This is a great idea. For example, there’s no reason that search success metrics can’t be embedded in algorithms that A/B test changes and automatically kick in when the testing demonstrates a real improvement. But automation alone isn’t the answer. Humans need to be in the loop at critical parts of the process to ensure that the automation is achieving the right goals. The greatest myth of search automation is that it eliminates the need for people to be involved.
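As a minimal sketch of that idea, here is what an automated promotion gate might look like: a two-proportion z-test on a search success metric, where the variant only kicks in if the gain is statistically significant. The function name and thresholds are illustrative, not a description of any particular product.

```python
from math import sqrt, erf

def ab_test_gate(success_a, total_a, success_b, total_b, alpha=0.05):
    """Two-proportion z-test: promote variant B only if its search
    success rate beats the control A with statistical significance."""
    p_b_minus_p_a = success_b / total_b - success_a / total_a
    pooled = (success_a + success_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = p_b_minus_p_a / se
    # One-sided p-value: chance of seeing a z this large if B is no better.
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return p_value < alpha  # True -> the automation promotes variant B

# A clear winner is promoted; a marginal difference is not.
promote = ab_test_gate(success_a=400, total_a=1000,
                       success_b=480, total_b=1000)  # True
```

The gate is deliberately conservative: a variant that merely looks a little better stays behind the threshold, which is exactly the behavior you want before letting a machine change the live experience.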
At the beginning
Knowing your starting point, the baseline in measurement terms, is the only way that meaningful goals can be set. So, understanding the measurements, the process, and to a lesser extent, the technology involved helps the business to set the right goals. Sometimes those goals can be achieved largely through technology, particularly advanced analytics driving automation, but often there are process and people issues that also need to be addressed. The machine can’t do these tasks. Yet.
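Establishing that baseline is usually straightforward once you have the search logs. A sketch, assuming a hypothetical log where each record carries a result count and a click flag (your own log schema will differ):

```python
def search_baseline(log):
    """Compute simple baseline search metrics from a list of
    query-log records: dicts with 'results' and 'clicked' keys.
    The field names here are illustrative."""
    total = len(log)
    zero_results = sum(1 for r in log if r["results"] == 0)
    clicked = sum(1 for r in log if r["clicked"])
    return {
        "zero_results_rate": zero_results / total,
        "click_through_rate": clicked / total,
    }

log = [
    {"results": 12, "clicked": True},
    {"results": 0,  "clicked": False},
    {"results": 5,  "clicked": False},
    {"results": 8,  "clicked": True},
]
baseline = search_baseline(log)
# {'zero_results_rate': 0.25, 'click_through_rate': 0.5}
```

With numbers like these in hand, a goal such as "cut the zero-results rate in half" becomes something you can actually measure progress against.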
Human in the Loop
Increasingly, we’re shifting from rules-based engines to machine learning based tools for improvement automation. We take all the data we gather on search success, combine it with other data we collect and data the client provides, and crunch it through a semi-supervised machine learning model to surface even more insights for automation.
However, we’ve found that many of the use cases call for a supervised or semi-supervised model, which requires some human interaction during training.
The human has to validate the desired outcome and help tune the inputs. Yes, you can get great automation from contemporary AI tools, but that doesn’t absolve the business person of a role in the process.
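One common shape for that human interaction is a confidence-based review queue: the model’s high-confidence predictions feed the automation directly, while low-confidence ones are routed to a person for validation. A sketch, with illustrative names and a made-up threshold:

```python
def route_for_review(predictions, threshold=0.8):
    """Split model predictions into auto-accepted labels and a queue
    for human validation, based on model confidence. 'predictions' is
    a list of (item, label, confidence) tuples; all values here are
    illustrative, not from a real model."""
    auto, review = [], []
    for item, label, conf in predictions:
        (auto if conf >= threshold else review).append((item, label))
    return auto, review

preds = [("q1", "navigational", 0.95),
         ("q2", "transactional", 0.55),
         ("q3", "informational", 0.88)]
auto_labels, human_queue = route_for_review(preds)
# q1 and q3 are accepted automatically; q2 goes to a person,
# whose answer can be fed back in as new training data.
```

The threshold is itself a business decision: lower it and the machine does more on its own; raise it and more judgment calls land with people.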
At the end
There’s really no end to improving a customer experience. However, any good improvement process has feedback loops. In an automated system, the machine manages the feedback loop. In a rules-based system, it’s constantly assessing the current state against the rules and making changes. In a machine learning system, the machine is constantly assessing against new data to determine whether the algorithm can be improved, and then applies those changes.
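The rules-based version of that loop is easy to picture: each pass compares the current state to its rules and applies any adjustments they prescribe. A minimal sketch, with invented rule and state names:

```python
def rules_feedback_step(state, rules):
    """One pass of a rules-based feedback loop: each rule pairs a
    check on the current state with an adjustment to apply when the
    check fires. All names and thresholds are illustrative."""
    changes = {}
    for name, (check, adjust) in rules.items():
        if check(state):
            changes[name] = adjust(state)
    state.update(changes)
    return state

rules = {
    # If too many queries return nothing, broaden fuzzy matching.
    "fuzziness": (lambda s: s["zero_results_rate"] > 0.10,
                  lambda s: s["fuzziness"] + 1),
}
state = {"zero_results_rate": 0.15, "fuzziness": 1}
state = rules_feedback_step(state, rules)  # fuzziness bumped from 1 to 2
```

A machine learning system replaces the hand-written checks with retraining against fresh data, but the loop’s shape, assess and then adjust, stays the same.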
But there’s one important feedback loop for humans to manage, and that’s the assessment of whether the business objective is being achieved. The machines can help drive towards a goal that’s defined, but what if, after a period of time, the business team realizes that the goal has changed? There needs to be a part of the process where the metrics the automation is delivering are assessed against the business objectives. Perhaps there’s a new or modified goal, and we have to start at the beginning of the process, assessing people, process, technology, and content, to determine how to improve the overall process.
Automation is an excellent product feature. We’re going to deliver more and more of that — you can check out our technology solutions here. However, automation doesn’t allow you to outsource the thinking. We’re still going to need humans who understand their business and can provide the right direction for the people and the tools.