Transform Manifesto Principle #6: AI Can Be Used Intelligently but Responsibly to Derive Insights from Engineering Data

Apr 30, 2024 | AI and Semantics, Digital Transformation, Model-Based Enterprise

AI can be used to derive insights and intelligence from unstructured text. But AI must be used responsibly to ensure high accuracy and transparent sources.

This is the sixth article in a series about the principles of the Transform Manifesto. If you want to start at the beginning, go here.

In high-stakes industries such as aerospace, automotive, medical devices, and oil and gas, using artificial intelligence (AI) to derive insights from unstructured data can be hugely beneficial, but it also demands accuracy and transparency. If AI generates false information (aka “hallucinates”) in an article about Taylor Swift, you’ll definitely hear about it from my daughter, but the consequences are nothing compared to generating inaccurate requirements for an aircraft fastener.

One notable use of AI is the scalable transformation of static engineering documents into contextual digital models. When XSB pioneered this paradigm in 2014 with SWISS (Semantic Web for Interoperable Specs and Standards), we knew right away that traditional manual modeling of semantic information was impractical for the vast number of documents, and for the exponentially larger number of data points inside them: it would take an army of humans many lifetimes to catalog and tag every piece of data, and maintaining consistent judgments and tagging across diverse organizations is unrealistic. AI offers a scalable solution: an ensemble of tools, from ontologies to large language models, that models the syntactic and semantic information in diverse engineering sources. SWISS Semantic AI characterizes and contextualizes information at a scale unattainable through manual means.

For example, a document containing dozens of finishing requirements under various conditions can be summarized and classified by AI according to those conditions, the types of finishes, or other selectable criteria. A user seeking finishing requirements for a specific material and use case can simply click (or ask!) and obtain the requirements they need rather than wading through the entire document.
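
Here is a minimal sketch of that idea in Python. The clause text, the tag names (material, condition, finish type), and the tag values are all invented for illustration; they are not SWISS’s actual data model or output. The point is only to show how AI-assigned tags let a user filter straight to the clauses that apply to their material and use case.

```python
# Illustrative sketch only -- invented clause text and tags, not SWISS output.
from dataclasses import dataclass

@dataclass
class RequirementClause:
    clause_id: str     # paragraph number in the source document
    text: str          # the requirement text itself
    material: str      # AI-assigned tag: material the clause applies to
    condition: str     # AI-assigned tag: service/environmental condition
    finish_type: str   # AI-assigned tag: type of finish required

clauses = [
    RequirementClause("3.2.1", "Aluminum parts exposed to salt spray shall be anodized per ...",
                      material="aluminum", condition="salt spray", finish_type="anodize"),
    RequirementClause("3.2.4", "Steel fasteners used indoors shall be zinc plated per ...",
                      material="steel", condition="interior", finish_type="zinc plating"),
]

def find_requirements(material: str, condition: str) -> list[RequirementClause]:
    """Return only the clauses tagged for the given material and use case."""
    return [c for c in clauses if c.material == material and c.condition == condition]

for clause in find_requirements("aluminum", "salt spray"):
    print(clause.clause_id, "-", clause.text)
```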

One risk of AI in engineering industries is the number of charlatans who claim to use AI, or worse, use it dangerously. Many companies make wild claims about their AI (“500 years of combined AI experience”?! Ummm, no.) Make them prove it: accuracy and consistency must be tested and validated on an ongoing basis and at scale. A simple demonstration across a small data set proves nothing (and may very well contain some Wizard of Oz sleight of hand). When the stakes are life or death, don’t hesitate to use statistical quality control to ensure that derivations, insights, and extracted values are accurate.
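
As one concrete, hedged example of what that statistical quality control can look like: audit a random sample of AI extractions against human review and report a confidence interval on accuracy, rather than relying on a one-off demo. The sample size, audit counts, and 99% accuracy target below are assumptions for illustration, not a prescribed standard; the Wilson score interval itself is standard statistics.

```python
# Illustrative sketch only -- sample sizes, audit results, and the accuracy
# target are invented; the Wilson score interval itself is standard statistics.
import math
import random

def wilson_interval(correct: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for the true accuracy rate."""
    p_hat = correct / total
    denom = 1 + z**2 / total
    center = (p_hat + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / total + z**2 / (4 * total**2)) / denom
    return center - half, center + half

all_extraction_ids = list(range(100_000))              # every AI-extracted value in the system
audit_sample = random.sample(all_extraction_ids, 400)  # random sample sent to human reviewers
correct, total = 394, len(audit_sample)                # pretend reviewers found 394 of 400 correct

low, high = wilson_interval(correct, total)
print(f"Observed accuracy {correct/total:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
if low < 0.99:  # assumed accuracy target for this sketch
    print("Lower bound is below target -- keep investigating before trusting these extractions.")
```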

As AI integrates further into engineering organizations, the threat of cyberattacks looms: attacks could corrupt AI-driven decision-making, disrupt production, or leak sensitive information. As with all IT systems, companies should maintain current security best practices and bring AI decision-making under their security infrastructure.

AI is a scalable enabler of insights that can make humans more efficient and help companies reduce cost, time, and errors. But we must recognize AI’s limits and apply it carefully, with clear validation, so you can trust the results 100% of the time.

What do you think? Does AI have a place in high-stakes engineering work? If your answer is yes, but you want low-risk, high-accuracy, transparent AI to do the work, let’s talk.

If you want to read all seven principles of the Transform Manifesto, start here.