What if …? Dr. Petra Krahwinkler

In this special interview format, we ask an expert from Primetals Technologies to use their imagination.

Dr. Petra Krahwinkler employs artificial intelligence to take the process automation solutions of Primetals Technologies to the next level. She holds a Ph.D. from RWTH Aachen University, Germany, and remains unafraid even of the much-dreaded artificial general intelligence (AGI) and its supposed inclination to rule the world. Metals Magazine has asked the fearless expert to be imaginative and give us her personal perspective on what if …

… current artificial intelligence models suddenly turned into an artificial general intelligence?

Dr. Petra Krahwinkler: It seems to me that an artificial general intelligence (AGI) is still far from being realized. I am certainly not as alarmed as certain people who have commented on the matter in the media. Today’s artificial intelligence models are highly specialized—even if they appear to be extremely versatile and powerful on the surface. This is true even for large language models like ChatGPT, which use large amounts of text-based data to generate new material of the same nature. Other tools are optimized for coding or music creation, and the same principle applies. I recently spoke with a musician about AI-generated compositions, and he said that they sounded generic and had no innovative quality to them. Most people talking about an AGI have the idea that it could eventually take over and rule the world, but I am sure that much more planning and strategizing capability is required to accomplish that, compared to what we have in the AI space today. Having said that, if an AGI really existed, I would worry about the errors it would likely produce. I would expect it to make the same mistakes we are seeing with smaller models—for instance, when they make false predictions or hallucinate results.

… the autonomous steel plant became a reality? Would human operators still have jobs in the industry?

Krahwinkler: Yes, absolutely. AI models are limited because they can only be trained on preexisting data. For new steel plants, this data is unavailable. Theoretically, it may be possible to use “transfer learning,” which involves moving what you’ve learned at one plant to another facility. But this could prove problematic for both technological and legal reasons. These limitations of AI suggest that you will need skilled operators to run the plant for a long enough time to accumulate large amounts of data. However, even plants that have operated for a while will occasionally run into problems—which means that some of the production units are deviating from what AI-based models would consider a “normal” state. What then? In situations like that, it is essential to have well-trained staff present. As long as everything works fine, you may not need them, but you cannot eliminate the unexpected completely. Whether you are looking at a blast furnace or an electric arc furnace, to name two examples, you will inevitably be dealing with unknowns. The input material could exhibit unusual characteristics—in grain size or scrap composition—and AI, as we currently know it, would struggle to deal with events like that.

“Today’s artificial intelligence models are highly specialized—even if they appear to be extremely versatile on the surface.”

… artificial intelligence became more tightly regulated? Would certain solutions cease to exist?

Krahwinkler: The AI models we implement in steel production today are all highly specialized, and I don’t believe they need to be regulated. Should, however, comprehensive regulation come into force in certain regions, I would not expect regulators to take issue with what we are doing. The effects of regulation would likely be felt more strongly in university research labs—or by the world’s “crazy millionaires,” who may end up taking their AI development overseas, perhaps to some remote island… To be somewhat more serious, I do think that it is a good idea to enforce the watermarking of AI-generated content, if primarily for educational purposes: it constantly surprises me how many people are oblivious to the capabilities of modern AI when it comes to spreading disinformation. It is easy to think, “I would not fall for that; I can tell what is real and what is not.” But it is now relatively easy to create video material showing a person saying something or doing certain things, even though everything is completely made up. The results are very convincing. A challenge connected to watermarking will be drawing the line between content generated by AI and content generated with AI. Ultimately, AI-based solutions are one set of tools among many.

… you were tasked with identifying where in the steel production process AI can contribute the most?

Krahwinkler: I would first break down the larger steel-production process into distinct production units per production step. Then, I would have detail-oriented meetings with our company’s many technological experts to develop an understanding of what AI could contribute in the context of what a given production unit has to achieve. A core criterion in sketching out the possibilities would be the question of reference data. The more reference data exists for a unit, the higher the chances will be that AI can positively impact production. There is one exception to this rule, namely the detection of anomalies. My expectation is that many control systems that traditionally rely on the reactive adjustment of parameters could be transitioned to a proactive approach. This means that parameters could be fine-tuned during production—rather than later, when the production step has been completed and you examine the result. Continuous processes are ideally controlled by making only very small changes to the parameters, and the sooner you are able to introduce a change, the better. A proactive approach tends to lead to even greater process stability and more consistent intermediate products.
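The shift from reactive to proactive parameter adjustment can be pictured in a few lines of Python. This is purely an illustrative sketch with made-up numbers; the target value, the gain, and the idea of a mid-process prediction are assumptions for the example, not a Primetals Technologies implementation.

```python
# Illustrative comparison of reactive vs. proactive control.
# All values are invented for the sketch.

TARGET = 100.0  # desired value of some process quantity
GAIN = 0.5      # proportional gain of the (toy) controller


def reactive_correction(measured_result: float) -> float:
    """Reactive: correct only after the production step is
    finished and the final result has been measured."""
    return GAIN * (TARGET - measured_result)


def proactive_correction(predicted_result: float) -> float:
    """Proactive: a model predicts the outcome while the step
    is still running, so the same small correction can be
    applied before the deviation grows."""
    return GAIN * (TARGET - predicted_result)


# The earlier the deviation is caught, the smaller the change
# the controller has to make:
print(reactive_correction(90.0))   # late measurement -> 5.0
print(proactive_correction(97.0))  # early prediction -> 1.5
```

The point of the sketch is the timing, not the arithmetic: both paths compute the same kind of small correction, but the proactive one acts on a prediction made mid-process, which is why it tends to produce the greater stability mentioned above.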

“I think that it is a good idea to enforce the watermarking of AI-generated content, if primarily for educational purposes. It is easy to use AI to spread disinformation.”

… you had to explain the difference between machine learning, neural networks, and deep learning?

Krahwinkler: I should first say that “machine learning” is an umbrella term that includes the subcategory “neural networks,” of which “deep learning” is yet another subgroup. The three therefore should not be used synonymously. Machine learning was pioneered in 1959 and was understood to encapsulate the nascent “self-learning” capability of computers. Simply put, it involves algorithms that use existing data sets to derive a conclusion. Neural networks consist of an input layer, an output layer, and a structure of nodes in between. In a sense, they attempt to mimic the human—or animal—brain. The strength of the signal at each connection is determined by a weight, which is adjusted at the training stage. Neural networks have a wide range of applications, and every ten years or so the term becomes trendy again. Deep learning shares the same core principles but adds complexity by introducing a larger number of layers, which can be variable in size. It can get quite tricky to properly connect one layer to the next, especially when their sizes are different. As a general rule, deep learning requires larger sets of training data than shallower neural networks, but the goal is always to extract correlations from raw measurements.
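The structure described above—an input layer, weighted connections, and an output—can be sketched in a few lines of Python. The network shape, the random weights, and the example inputs are all invented for illustration; a real model would learn its weights from data rather than take a single arbitrary configuration.

```python
import math
import random

random.seed(0)


def sigmoid(x: float) -> float:
    """Squash a node's summed input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))


# Toy shape: 2 inputs -> 3 hidden nodes -> 1 output node.
# Each weight sets the strength of one connection; training
# would adjust these values to reduce prediction error.
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
w_out = [random.uniform(-1, 1) for _ in range(3)]


def forward(inputs: list[float]) -> float:
    """One pass from the input layer through the hidden layer
    to the single output node."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))


prediction = forward([0.5, -0.2])  # a value in (0, 1)
```

A “deep” network follows exactly the same pattern, only with many more hidden layers stacked between input and output—which is also where the connection bookkeeping between differently sized layers becomes tricky.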

… brain implants—such as Elon Musk’s Neuralink—became standard? Would humanity still be the same?

Krahwinkler: Brain implants have existed for a while, but companies like Neuralink have generated additional interest of late. The topic is—as you may have guessed—unconnected to my job at Primetals Technologies, but I do know that neurostimulators have been used for over two decades to support patients who suffer from Parkinson’s disease. There are other medical uses as well, treating disorders like epilepsy or depression. I think these are all fabulous use cases that can make a person’s life qualitatively better. The question is what next-generation implants will achieve in terms of connecting one’s brain to a larger digital infrastructure. Will I only have to think “Lights off!” to flick the switch? The idea of invisibly retrieving endless amounts of information from the Internet is also enticing. But is it realistic? Personally, I would be concerned about how you could still put yourself on a “digital detox” program, even just for the night. What happens with the implant when you dream? Would it connect to emergency services when you’re having a nightmare? And would there be the equivalent of a wake word, such as “Hey Siri”? I think that it will still take a while until we see cyborgs walk among us.