Comtrade Digital Services, like many others in software development and quality assurance, has committed to investing in Artificial Intelligence (AI) as part of its growth strategy to serve its clients’ future needs. As a recognised thought leader in digital transformation, Comtrade Digital Services invited experts from a range of industries to share their views on this fast-evolving field at its Quest for Quality Conference in Dublin, Ireland.
Considering that a report from market research firm Tractica forecast that annual global revenue from AI products and services will grow from $643.7 million in 2016 to $36.8 billion by 2025, it’s safe to say that AI signifies the next big technological shift and will become commonplace across industries such as financial services, healthcare and advertising.
In fact, the general consensus is that most companies have already started implementing an AI programme but have seen mixed results. Some organisations, however, are seeing material success. Take CERN, the European Organisation for Nuclear Research and one of the world’s largest centres for scientific research.
It is currently using an AI programme to analyse, record and process around 600 petabytes of data from the large detectors in the Large Hadron Collider (LHC), and to make that information publication-ready.
Fons Rademakers, Chief Research Officer at CERN, revealed: “At CERN, we have our own systems that we benchmark industry products against. We continue to use and keep improving many of our own algorithms, but this method keeps both parties honest and leads to progress.”
One area of concern noted by the panel of experts was that of Quality Assurance (QA). While QA is the backbone of reliability when implementing any new product, technique or system, it doesn’t truly exist in the AI engineering process. This is an issue that requires close attention and customisation of traditional processes and practices.
The fact that a system is sophisticated and machine-intelligent doesn’t necessarily mean that it can be trusted or should be left unmonitored. Developing a way of testing AI systems is thus imperative if they are to become a reliable resource for companies across all industries. This is made more complicated by the fact that such learning systems evolve over time, meaning that the output for a given input will not always be the same.
Vincent Lonij of IBM Research Ireland explained: “It’s important to figure out how you can test if the way it’s behaving is the way it’s supposed to be behaving. The system from a statistical perspective should still reach certain benchmarks. Rather than one input, give it a million inputs and make sure that, on average, it reaches all the benchmarks you know it’s supposed to be able to reach.”
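As a toy illustration of Lonij’s point, and not a description of any system discussed at the conference, a statistical test of this kind might look like the following sketch in Python. The `noisy_classifier` here is a hypothetical stand-in for a learned model whose individual answers vary; the test judges it on aggregate accuracy over many inputs rather than on any single input/output pair:

```python
import random

def noisy_classifier(x):
    # Stand-in for a learned model: it answers whether x is non-negative,
    # is usually right, but occasionally gives a wrong answer -- so no
    # single input/output pair can be trusted as a pass/fail test.
    correct = x >= 0
    if random.random() < 0.05:  # 5% chance of a wrong answer
        return not correct
    return correct

def statistical_benchmark(model, n_inputs=100_000, min_accuracy=0.90, seed=42):
    """Feed the model many inputs and check that, on average,
    it reaches the benchmark accuracy it is supposed to reach."""
    random.seed(seed)
    hits = 0
    for _ in range(n_inputs):
        x = random.uniform(-1.0, 1.0)
        expected = x >= 0          # ground truth for this toy task
        if model(x) == expected:
            hits += 1
    accuracy = hits / n_inputs
    return accuracy, accuracy >= min_accuracy

accuracy, passed = statistical_benchmark(noisy_classifier)
print(f"accuracy={accuracy:.3f} passed={passed}")
```

The point of the design is that the pass/fail decision rests on a distribution of behaviour, not on exact output matching, which is what makes it applicable to systems that evolve or behave non-deterministically.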
According to Dr Robert Ross of the Dublin Institute of Technology, “It’s about creating conditions whereby we can test, validate and investigate the algorithm, like we would any other software component.”
Digital Services strategist Albert Eng commented: “It should be obvious that, even with the advancement of AI technology and engineering, QA is not guaranteed, and when AI-based techniques are introduced from the research and development component of an organisation to the mainstream part of it, new practices have to be applied to ensure that it is performing adequately and accurately. In fact, AI and machine learning has complicated the QA process greatly.”
Another topic of discussion within the area of AI is Big Data, specifically its quality and impact. Companies are working with real data that is sometimes structured but often spread across different silos within the organisation. The challenge is not only to work out what the data shows, but also what it is worth, what the company is looking for in it and how it can be used.
Big Data further complicates the QA process, as does the acquisition of new realms of unstructured information from across the internet. When Eng challenged the panellists on this point, all agreed that the enterprise data warehouse still has value within the totality of Big Data, but that it becomes a less important source for the new correlations that AI and machine-learning systems will discover.
For now, AI and machine learning are benefiting certain companies and industries to some degree, enabling more refined and efficient ways of working. There are also hopes of greater progress in the areas of autonomous vehicles and intelligent personal assistants. The possibilities don’t end there, with talk of human-brain interfaces and more sophisticated modes of interaction with computer systems.
However, there is still a great deal of research and work to be done. Not only is it currently impossible to pick which platforms are doing it best, but many of the major hurdles associated with AI still need to be resolved before it becomes an integral, accurate and reliable part of life. The journey into AI, though, has to start now.
Article written by Viktor Kovacevic, Vice President and General Manager, Comtrade Digital Services.
Prepared and edited by Arthur Velker.