As I mentioned to a friend, I seem to be waxing philosophical this past weekend as I finish reading “Wizard: The Life and Times of Nikola Tesla, Biography of a Genius” by Marc J. Seifer. I would not have tripped over this and several other excellent texts had I not started returning to brick-and-mortar bookstores this year. Amazingly, companies such as Barnes and Noble still seem to draw quite a crowd in this area, which, for someone with several walls of books in his office and home, is a luxury. I recently enjoyed a weekday morning in the store’s coffee shop, coffee in hand, paging through several programming books as potential purchases.
One of the things that sets this biography apart from the dozen or so Tesla biographies I’ve read is that Seifer spends a good part of his time reviewing the motivations of those around Tesla, and a little more on Tesla’s mental state at each point in his life. For instance, a few people suggested he was somehow unstable because he claimed to be communicating with Mars. At the time, however, the scientific consensus was that life existed on other planets in our solar system, particularly Mars, which is one of the reasons the original ‘War of the Worlds’ radio broadcast caused such a panic. It was also interesting to see how he was embroiled in constant patent battles with those who copied many of his concepts, and how much industrial espionage occurred. This, of course, led me to the History Channel series on Tesla that included the biographer. It was horrible, yet it drew a few more people to read up on Tesla. (I can only think the editing was deliberately terrible to generate the ‘conspiracy theory’ angle that is popular on History these days.)
In any case, it made me consider that even in the 1800s and early 1900s, things were not that much different. People made things far more complex than they needed to be. So, after writing this month’s newsletter yesterday (see the newsletter and editorial here: https://conta.cc/3EqKIwC), I had one of those stare-at-the-ceiling-all-night stretches meditating on how complex we make the simplest things.
Take, for instance, the direction we see in the detection of mechanical issues in rotating equipment. For predictive maintenance, we spend more and more time looking deeper into narrower and narrower areas, even adding technologies to dig further, when we know decision-makers won’t do anything with that data. We end up buried in useless information and completely miss the obvious issues that show up as a simple change in temperature at a machine. Instead, we are distracted by a minute impulse that might be a condition with limited to no impact on the reliability of the equipment.
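To put that in concrete terms, here is a minimal sketch (with made-up numbers) of catching that ‘obvious issue’: compare a recent average temperature against an earlier baseline and flag a sustained shift. The data, the one-week baseline, and the 5 °C alert shift are hypothetical illustrations, not a recommended standard.

```python
# Illustrative sketch only: flag a sustained shift in a simple machine
# temperature trend using nothing more than a baseline comparison.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical hourly bearing-housing temperatures: steady operation,
# then a sustained rise near the end of the record.
temps = np.concatenate([
    60 + rng.normal(0, 0.5, 200),   # normal operation around 60 C
    67 + rng.normal(0, 0.5, 48),    # something has changed
])

baseline = temps[:168].mean()   # first week of data as the baseline
recent = temps[-24:].mean()     # average of the last 24 hours

if recent - baseline > 5.0:     # hypothetical alert shift of 5 C
    print(f"Temperature up {recent - baseline:.1f} C over baseline -- investigate.")
else:
    print("No meaningful temperature shift.")
```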
Even with ESA/MCSA, the focus for decades has been on the rotor of induction motors, pushed hard by people who lack an understanding of the original purpose of the technology. So much so that it appears to have made for easy PhD dissertations across the globe, with only a few exploring the power of using the electric machine’s magnetic airgap as the sensor. This is so bad that major corporations have not been able to replicate what we have done with wind turbine ESA because, I believe, they have a limited understanding of both the technology they are analyzing (machinery design) and the technology they are using (ESA). At best, there are different silos working on it. The tendency, even there, has been to make things complicated instead of keeping them simple. Others who have made the attempt remind me of Tesla interpreting the signals he picked up at his lab as coming from Mars, because no one else should have been using Tesla coils for wireless transmission of information; in fact they were Marconi’s transmissions from Europe (Marconi pirated Tesla’s patents on wireless through ground because his own patents using Hertzian waves didn’t work). Likewise, they thought they saw something because a specific load happened to interact with power harmonics from the turbine’s controls, while a fault they never actually detected happened to exist.
There is a necessary separation between research that takes a narrow view of failing components in order to understand the failure process, and applying that research to predictive maintenance. In the series I just completed at https://theramreview.com on the use of basic motor information as machine learning sensors, we explore the idea that you can use raw data to detect faults in less-critical equipment. A project I’ve been working on for a client pushed me in this direction; when I looked for others who had done something this simple with machine learning, a few had published about it, but no one we could find had actually done the work. That’s right: a simple approach to machine learning, one that should have been the first explored, had been overlooked. I gave a talk on this for Mobius Connect in January 2021 to a large audience of data scientists, and they could not comprehend that I was not worried about detailed fault classification (i.e., part of a bearing vs. the bearing in general) and was more concerned with Remaining Useful Life. They had been trained to worry about the deep detail of an individual component versus the global condition of the machine, supply power, and driven equipment.
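To make the contrast concrete, here is a minimal sketch of what that ‘keep it simple’ idea can look like: trend a handful of basic motor measurements into one health index and project when it crosses an alarm level, rather than classifying individual bearing components. The measurements, thresholds, and synthetic data are hypothetical illustrations, not the client project’s actual method or data.

```python
# Illustrative sketch only: a simple "global health" trend from basic motor
# data, aimed at a Remaining-Useful-Life style projection rather than
# detailed per-component fault classification.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Hypothetical weekly snapshots of simple, readily available measurements:
# RMS current (A), winding temperature (C), and overall vibration (mm/s).
weeks = np.arange(52)
current = 100 + 0.05 * weeks + rng.normal(0, 0.5, 52)
temperature = 65 + 0.10 * weeks + rng.normal(0, 0.8, 52)
vibration = 2.0 + 0.005 * weeks + rng.normal(0, 0.05, 52)

# Collapse the measurements into one normalized "health index"
# (0 = as-new baseline, larger = more degraded).
features = np.column_stack([current, temperature, vibration])
baseline = features[:4].mean(axis=0)            # first month as baseline
health_index = ((features - baseline) / baseline).mean(axis=1)

# Trend the health index over time and project when it crosses an alarm
# threshold -- a crude remaining-useful-life style estimate.
model = LinearRegression().fit(weeks.reshape(-1, 1), health_index)
slope, intercept = model.coef_[0], model.intercept_
alarm_level = 0.10   # hypothetical: 10% average drift from baseline

if slope > 0:
    weeks_to_alarm = (alarm_level - intercept) / slope - weeks[-1]
    print(f"Projected weeks until alarm threshold: {weeks_to_alarm:.1f}")
else:
    print("No degradation trend detected.")
```

The point is not the particular model; a straight-line trend is enough to show that the machine, as a system, is drifting away from its baseline and roughly when it will need attention.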
In our work on the EMPATH, ECMS, and ECMS-1 software and hardware, the focus has likewise been a systems approach. Yes, the details are there if the analyst needs them; they are actually a by-product of the methodology, but we are more concerned with system health. In effect, some conditions will be detected later, such as stage 3 or 4 bearing faults with the occasional stage 1 or 2, but with a high confidence level and detection lead times measured in weeks and months versus months or years. However, as we know, those defects are primarily fuses for other system issues, which are detected well in advance of the bearing defect. Many times the problem can be mitigated, allowing the technician or engineer to correct the issue before early degradation of the bearing results in a far more intrusive and time-consuming repair. After all, what is the difference in time between removing soft-foot and aligning a motor versus removal and installation, or an in-place bearing repair?
With all of the advanced technology we are producing today, and the ability to deploy research-level technology in the field, many are focused more on research-level reporting than on the condition and health of the systems. What is the impact of performing less-than-precision alignment on the system, including energy costs and greenhouse gas (GHG) emissions? I remember having this disagreement with a customer (“why do a Cadillac repair when we are going to do a Yugo installation?”) in which I used a different version of the EMPATH system to show the losses associated with a Yugo repair and a Yugo installation (sorry, my friends from the former Yugoslavia, those were the terms used in the conversation; be assured the wife will have a discussion with me after I’ve published this). The reliability of a quality repair in a poor installation was far, far better than a poor repair in a poor installation: common sense. We also proved that, by performing a ‘Cadillac’ repair on a ‘Cadillac’ installation, the chances that they would see a repeat of the same condition during the maintenance manager’s tenure at that company were virtually nil. That is one of the powers of taking a systems approach. It also meant we could demonstrate the energy and GHG improvements of taking a step that would reduce the perceived expense of maintenance, with a resulting ‘simple payback’ measured in days, if not immediately (I’ll cover my thoughts on the laziness of ‘simple payback’ in a later editorial).
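For a sense of the arithmetic behind that energy and GHG question, here is a rough, hypothetical calculation. The 2% misalignment loss, run hours, energy rate, and emission factor are placeholder assumptions, not the measurements from the story above.

```python
# Illustrative arithmetic only: rough energy and CO2 impact of an
# alignment penalty on a motor system, using placeholder values.
motor_kw = 75.0            # motor load while running
misalignment_loss = 0.02   # assume 2% additional losses from poor alignment
hours_per_year = 8000
energy_rate = 0.10         # $/kWh
co2_factor = 0.4           # kg CO2 per kWh (grid-dependent)

extra_kwh = motor_kw * misalignment_loss * hours_per_year
print(f"Extra energy:    {extra_kwh:,.0f} kWh/yr")
print(f"Extra cost:      ${extra_kwh * energy_rate:,.0f}/yr")
print(f"Extra emissions: {extra_kwh * co2_factor / 1000:,.1f} t CO2/yr")
```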
The lesson I am rambling on about in this short essay is to keep things simple. Unless you are doing research into why that specific piece of equipment is failing, do you require repetitive, detailed analysis of specific components in the machine? Or, for the purpose of predictive maintenance, do you monitor conditions and investigate once implied degradation exists? Do you need to study that bearing on that machine to detect impending failure, or do you monitor the complete system using a single technology? Do you really need that much information? I’ve seen cases where massive amounts of information were brought to the table, but ESA, vibration, and temperature were used to make the decisions; they were the very things that triggered notification of the defects. The other information had its place, but not as part of the monitoring strategy.
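As a closing illustration of that monitoring-first approach, here is a minimal sketch: check a few simple indicators (temperature, overall vibration, an ESA metric) against alert limits and only trigger an investigation when one of them is exceeded. The indicator names and limits are hypothetical examples, not a standard.

```python
# Illustrative sketch: simple condition monitoring that only triggers an
# investigation when a basic indicator drifts past its alert limit.
latest_readings = {
    "bearing_temperature_C": 78.0,
    "overall_vibration_mm_s": 4.6,
    "esa_current_imbalance_pct": 3.1,
}

alert_limits = {
    "bearing_temperature_C": 85.0,
    "overall_vibration_mm_s": 4.5,   # e.g., a zone boundary chosen for this machine
    "esa_current_imbalance_pct": 5.0,
}

# Flag only the indicators that exceeded their limit; everything else is left alone.
exceedances = {
    name: value
    for name, value in latest_readings.items()
    if value > alert_limits[name]
}

if exceedances:
    print("Investigate the machine; indicators over limit:", exceedances)
else:
    print("No action needed; continue routine monitoring.")
```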
Food for thought.